Markovian Monte Carlo program EvolFMC v.2 for solving QCD evolution equations
NASA Astrophysics Data System (ADS)
Jadach, S.; Płaczek, W.; Skrzypek, M.; Stokłosa, P.
2010-02-01
We present the program EvolFMC v.2 that solves the evolution equations in QCD for the parton momentum distributions by means of the Monte Carlo technique based on the Markovian process. The program solves the DGLAP-type evolution as well as modified-DGLAP ones. In both cases the evolution can be performed in the LO or NLO approximation. The quarks are treated as massless. The overall technical precision of the code has been established at 5×10⁻⁴. This way, for the first time ever, we demonstrate that with the Monte Carlo method one can solve the evolution equations with precision comparable to the other numerical methods.
New version program summary
Program title: EvolFMC v.2
Catalogue identifier: AEFN_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFN_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including binary test data, etc.: 66 456 (7407 lines of C++ code)
No. of bytes in distributed program, including test data, etc.: 412 752
Distribution format: tar.gz
Programming language: C++
Computer: PC, Mac
Operating system: Linux, Mac OS X
RAM: Less than 256 MB
Classification: 11.5
External routines: ROOT (http://root.cern.ch/drupal/)
Nature of problem: Solution of the QCD evolution equations for the parton momentum distributions of the DGLAP- and modified-DGLAP-type in the LO and NLO approximations.
Solution method: Monte Carlo simulation of the Markovian process of a multiple emission of partons.
Restrictions: Limited to the case of massless partons. Implemented in the LO and NLO approximations only. Weighted events only.
Unusual features: Modified-DGLAP evolutions included up to the NLO level.
Additional comments: Technical precision established at 5×10⁻⁴.
Running time: For 10⁶ events at 100 GeV: DGLAP NLO: 27 s; C-type modified DGLAP NLO: 150 s (MacBook Pro with Mac OS X v.10.5.5, 2.4 GHz Intel Core 2 Duo, gcc 4.2.4, single thread).
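The solution method above, a Markovian process of multiple parton emission, can be illustrated with a deliberately minimal sketch. Everything here is an illustrative assumption rather than the EvolFMC algorithm: a constant emission rate stands in for the true Sudakov form factor, a 1/z toy kernel stands in for the LO/NLO splitting functions, and the function names are invented.

```python
import random

def evolve_parton(x0=1.0, t0=0.0, t_max=2.0, rate=1.5, eps=1e-3, rng=random):
    """One Markovian trajectory: waiting times between emissions are
    exponential (a stand-in for a true Sudakov form factor), and each
    emission rescales x by a z drawn from a 1/z toy kernel on [eps, 1]."""
    t, x = t0, x0
    while True:
        t += rng.expovariate(rate)        # time of the next emission
        if t >= t_max:
            return x                      # chain stops at the evolution endpoint
        x *= eps ** rng.random()          # inverse-transform sample: density ~ 1/z

def mc_mean_x(n_events=20000, seed=1):
    """Average momentum fraction after evolution, over many trajectories."""
    rng = random.Random(seed)
    return sum(evolve_parton(rng=rng) for _ in range(n_events)) / n_events
```

Averaging `evolve_parton` over many trajectories gives a Monte Carlo estimate of moments of the evolved momentum-fraction distribution; the real program evolves full multi-flavor parton distributions with weighted events.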
Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program
ERIC Educational Resources Information Center
Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.
2004-01-01
The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na⁺, Cl⁻, and Ar on a personal computer to show that it is easily feasible to…
SIMCA T 1.0: A SAS Computer Program for Simulating Computer Adaptive Testing
ERIC Educational Resources Information Center
Raiche, Gilles; Blais, Jean-Guy
2006-01-01
Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, these Monte Carlo methodologies are not currently supported by the available software programs, and when these programs are available, their…
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum-weight shield configurations for primary and secondary radiation and optimal importance-sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
NASA Technical Reports Server (NTRS)
Pinckney, John
2010-01-01
With the advent of high-speed computing, Monte Carlo ray tracing techniques have become the preferred method for evaluating spacecraft orbital heating. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized programs that suffer from some inaccuracy, long calculation times, and high purchase cost. A general orbital heating integral is presented here that is accurate, fast, and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand, and alter. It can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and for spot-checking Monte Carlo results.
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This final report, prepared under a NASA Cooperative Agreement, presents a study of the transition flow regime using Monte Carlo methods. The topics included are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
A Monte-Carlo maplet for the study of the optical properties of biological tissues
NASA Astrophysics Data System (ADS)
Yip, Man Ho; Carvalho, M. J.
2007-12-01
Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature.
Program summary
Program title: MonteCarloMaplet
Catalogue identifier: ADZU_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3251
No. of bytes in distributed program, including test data, etc.: 296 465
Distribution format: tar.gz
Programming language: Maple 10
Computer: Acer Aspire 5610 (any running Maple 10)
Operating system: Windows XP Professional (any running Maple 10)
Classification: 3.1, 5
Nature of problem: Simulate the transport of radiation in biological tissues.
Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]. The Maple library routine for random number generation is used [Maple 10 User Manual, Maplesoft, a division of Waterloo Maple Inc., 2005].
Restrictions: Running time increases rapidly with the number of photons used in the simulation.
Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required; once effected, the worksheet runs without problem. However, some windows of the maplet may still appear distorted.
Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
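The photon random walk that programs of this kind implement (following the scheme of Wang et al.) can be sketched in a few lines. This is a hedged 1-D toy with invented names and parameters, assuming implicit-capture weighting, isotropic rescattering, and no refractive-index mismatch; it is not the maplet's actual code.

```python
import math
import random

def transport_photon(mu_a, mu_s, thickness, rng):
    """Track one photon through a 1-D slab: exponential free paths,
    a fraction mu_a/mu_t of the weight deposited at each interaction
    (implicit capture), and an isotropic new direction afterwards."""
    mu_t = mu_a + mu_s
    z, mu_z, w = 0.0, 1.0, 1.0               # depth, direction cosine, weight
    absorbed = 0.0
    while w > 1e-4:
        s = -math.log(rng.random()) / mu_t   # sample the free path length
        z += mu_z * s
        if z < 0.0 or z > thickness:
            break                            # photon escaped the slab
        absorbed += w * mu_a / mu_t          # deposit the absorbed fraction
        w *= mu_s / mu_t                     # surviving (scattered) weight
        mu_z = 2.0 * rng.random() - 1.0      # isotropic rescattering in 1-D
    return absorbed

def mean_absorbed(mu_a=0.1, mu_s=1.0, thickness=1.0, n_photons=2000, seed=0):
    """Monte Carlo estimate of the fraction of launched weight absorbed."""
    rng = random.Random(seed)
    return sum(transport_photon(mu_a, mu_s, thickness, rng)
               for _ in range(n_photons)) / n_photons
```

The running-time note in the summary is visible even in this toy: cost grows linearly with the number of photons, while the statistical error of the absorbed-fraction estimate shrinks only as the inverse square root of that number.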
Computer program uses Monte Carlo techniques for statistical system performance analysis
NASA Technical Reports Server (NTRS)
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
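The approach described, propagating the full statistics of component disturbances and misalignments through a system model by simulated random sampling, can be sketched as follows. The multiplicative gain model, the tolerance values, and all names are illustrative assumptions, not the 1967 program.

```python
import random
import statistics

def system_output(part_gains):
    """Toy system model: the overall response is the product of the
    individual component gains (one possible stacking of tolerances)."""
    out = 1.0
    for g in part_gains:
        out *= g
    return out

def mc_performance(n=5000, sigmas=(0.01, 0.02, 0.05), seed=42):
    """Draw each component gain from its own tolerance distribution
    (here Gaussian about nominal 1.0) and collect the simulated spread
    of the full-system response."""
    rng = random.Random(seed)
    samples = [system_output([rng.gauss(1.0, s) for s in sigmas])
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)
```

Because each sample draws every component independently, the resulting spread is an unbiased picture of how individual part tolerances combine, which is the point the abstract makes about unbiased results through simulated random sampling.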
SABRINA: an interactive solid geometry modeling program for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.
SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body-geometry or surface-geometry models and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort of constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis.
ERIC Educational Resources Information Center
Kalkanis, G.; Sarris, M. M.
1999-01-01
Describes an educational software program for the study of, and detection methods for, cosmic ray muons passing through several light transparent materials (i.e., water, air, etc.). Simulates muons' and Cherenkov photons' paths and interactions and visualizes/animates them on the computer screen using Monte Carlo methods/techniques which employ…
Exact and Monte Carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.
Berry, K J; Mielke, P W
2000-12-01
Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
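A Monte Carlo resampling version of the rank-sum test of the kind described can be sketched briefly: midranks compensate for tied values, and the reference distribution is built by shuffling the pooled ranks rather than by an asymptotic approximation. This is a hedged Python illustration, not the FORTRAN programs themselves.

```python
import random

def midranks(values):
    """Average (mid) ranks, compensating for tied values."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                            # extend the block of ties
        avg = (i + j) / 2.0 + 1.0             # average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mc_rank_sum_pvalue(a, b, n_resamples=5000, seed=7):
    """Two-sided Monte Carlo permutation p-value for the rank-sum statistic."""
    pooled = list(a) + list(b)
    ranks = midranks(pooled)
    n_a = len(a)
    observed = sum(ranks[:n_a])               # rank sum of the first sample
    expected = n_a * (len(pooled) + 1) / 2.0  # its null expectation
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(ranks)                    # random relabeling of the groups
        if abs(sum(ranks[:n_a]) - expected) >= abs(observed - expected) - 1e-12:
            hits += 1
    return hits / n_resamples
```

Replacing the random shuffles with a full enumeration of group assignments turns this Monte Carlo version into the exact procedure the abstract also mentions.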
Harnessing graphical structure in Markov chain Monte Carlo learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolorz, P.E.; Chew, P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many data-mining problems. Generalized hidden Markov models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean-field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J.
1983-01-01
The atmospheric radiative transfer calculation program (ATRAD) and its supporting programs (setting up an atmospheric profile, making Mie tables and an exponential-sum-fitting table) were completed. More sophisticated treatment of aerosol scattering (including the angular phase function or asymmetry factor) and multichannel analysis of results from ATRAD are being developed. Some progress was made on a Monte Carlo program for examining two-dimensional effects, specifically a surface boundary condition that varies across a scene. The MONTE program combines ATRAD and the Monte Carlo method to produce an atmospheric point-spread function. Currently the procedure passes monochromatic tests and the results are reasonable.
SABRINA: an interactive three-dimensional geometry-modeling program for MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T. III
SABRINA is a fully interactive three-dimensional geometry-modeling program for MCNP, a Los Alamos Monte Carlo code for neutron and photon transport. In SABRINA, a user constructs either body geometry or surface geometry models and debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis. 2 refs., 33 figs.
Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments Database
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Monte Carlo simulation of biomolecular systems with BIOMCSIM
NASA Astrophysics Data System (ADS)
Kamberaj, H.; Helms, V.
2001-12-01
A new Monte Carlo simulation program, BIOMCSIM, is presented that has been developed in particular to simulate the behaviour of biomolecular systems, leading to insights and understanding of their functions. The computational complexity of Monte Carlo simulations of high-density systems, with large molecules like proteins immersed in a solvent medium, or when simulating the dynamics of water molecules in a protein cavity, is enormous. The program presented in this paper seeks to provide these desirable features, putting special emphasis on simulations in grand canonical ensembles. It uses different biasing techniques to increase the convergence of simulations, and periodic load balancing in its parallel version, to maximally utilize the available computer power. In periodic systems, the long-ranged electrostatic interactions can be treated by Ewald summation. The program is modularly organized, and implemented in an ANSI C dialect, so as to enhance its modifiability. Its performance is demonstrated in benchmark applications for the proteins BPTI and cytochrome c oxidase.
NASA Astrophysics Data System (ADS)
Golonka, P.; Pierzchała, T.; Wąs, Z.
2004-02-01
Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Our test consists of two steps. Different Monte Carlo programs are run; events with decays of a chosen particle are searched for, decay trees are analyzed and the appropriate information is stored. Then, at the analysis step, a list of all found decay modes is defined and branching ratios are calculated for both runs. Histograms of all scalar Lorentz-invariant masses constructed from the decay products are plotted and compared for each decay mode found in both runs. For each plot a measure of the difference of the distributions is calculated and its maximal value over all histograms for each decay channel is printed in a summary table. As an example of MC-TESTER application, we include a test with the τ lepton decay Monte Carlo generators TAUOLA and PYTHIA. The HEPEVT (or LUJETS) common block is used as the exclusive source of information on the generated events.
Program summary
Title of the program: MC-TESTER, version 1.1
Catalogue identifier: ADSM
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSM
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC, two Intel Xeon 2.0 GHz processors, 512 MB RAM
Operating system: Linux Red Hat 6.1, 7.2, and also 8.0
Programming language used: C++, FORTRAN77; gcc 2.96 or 2.95.2 (also 3.2) compiler suite with g++ and g77
Size of the package: 7.3 MB directory including example programs (2 MB compressed distribution archive), without ROOT libraries (additional 43 MB)
No. of bytes in distributed program, including test data, etc.: 2 024 425
Distribution format: tar gzip file
Additional disk space required: Depends on the analyzed particle: 40 MB in the case of τ lepton decays (30 decay channels, 594 histograms, 82-page booklet)
Keywords: particle physics, decay simulation, Monte Carlo methods, invariant mass distributions, programs comparison
Nature of the physical problem: The decays of individual particles are well-defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable, for the development of new programs, to check correctness of the installations, or for discussion of uncertainties.
Method of solution: A typical HEP Monte Carlo program stores the generated events in event records such as HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of the found decay modes is successively incremented, and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can later be compared. A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted and a parameter quantifying the shape difference is calculated. Its maximum over every decay channel is printed in the summary table.
Restrictions on the complexity of the problem: For a list of limitations see Section 6.
Typical running time: Varies substantially with the analyzed decay particle. On a PC/Linux with 2.0 GHz processors MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; ? processing takes an additional 10 seconds. Generation-step runs may be executed simultaneously on multi-processor machines.
Accessibility: web page: http://cern.ch/Piotr.Golonka/MC/MC-TESTER; e-mails: Piotr.Golonka@CERN.CH, T.Pierzchala@friend.phys.us.edu.pl, Zbigniew.Was@CERN.CH.
Using Stan for Item Response Theory Models
ERIC Educational Resources Information Center
Ames, Allison J.; Au, Chi Hang
2018-01-01
Stan is a flexible probabilistic programming language providing full Bayesian inference through Hamiltonian Monte Carlo algorithms. The benefits of Hamiltonian Monte Carlo include improved efficiency and faster inference, when compared to other MCMC software implementations. Users can interface with Stan through a variety of computing…
Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C
2015-10-01
The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. 
This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate minimum weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques to compute system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
The Use of Monte Carlo Techniques to Teach Probability.
ERIC Educational Resources Information Center
Newell, G. J.; MacFarlane, J. D.
1985-01-01
Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…
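A sports-flavored illustration in the same spirit: estimate the probability that the stronger side wins a match by simulating many matches with the computer's random number generator. The fixed-chances match model and the conversion probabilities are invented for this example, not taken from the article.

```python
import random

def simulate_match(p_home=0.12, p_away=0.10, chances=20, rng=random):
    """Each side gets a fixed number of scoring chances; every chance
    succeeds independently with that side's conversion probability."""
    home = sum(rng.random() < p_home for _ in range(chances))
    away = sum(rng.random() < p_away for _ in range(chances))
    return home, away

def home_win_probability(n_matches=10000, seed=3):
    """Monte Carlo estimate: fraction of simulated matches the home side wins."""
    rng = random.Random(seed)
    wins = sum(h > a for h, a in
               (simulate_match(rng=rng) for _ in range(n_matches)))
    return wins / n_matches
```

Students can compare the simulated frequency against the exact binomial calculation, which is precisely the kind of probability exercise the abstract describes.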
Monte Carlo-based searching as a tool to study carbohydrate structure
USDA-ARS?s Scientific Manuscript database
A torsion angle-based Monte-Carlo searching routine was developed and applied to several carbohydrate modeling problems. The routine was developed as a Unix shell script that calls several programs, which allows it to be interfaced with multiple potential functions and various functions for evaluat...
Monte Carlo Approach for Reliability Estimations in Generalizability Studies.
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…
PEPSI — a Monte Carlo generator for polarized leptoproduction
NASA Astrophysics Data System (ADS)
Mankiewicz, L.; Schäfer, A.; Veltri, M.
1992-09-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.
Li, Xiang; Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Toncheva, Greta; Yoshizumi, Terry T.; Frush, Donald P.
2011-01-01
Purpose: Radiation-dose awareness and optimization in CT can greatly benefit from a dose-reporting system that provides dose and risk estimates specific to each patient and each CT examination. As the first step toward patient-specific dose and risk estimation, this article aimed to develop a method for accurately assessing radiation dose from CT examinations. Methods: A Monte Carlo program was developed to model a CT system (LightSpeed VCT, GE Healthcare). The geometry of the system, the energy spectra of the x-ray source, the three-dimensional geometry of the bowtie filters, and the trajectories of source motions during axial and helical scans were explicitly modeled. To validate the accuracy of the program, a cylindrical phantom was built to enable dose measurements at seven different radial distances from its central axis. Simulated radial dose distributions in the cylindrical phantom were validated against ion chamber measurements for single axial scans at all combinations of tube potential and bowtie filter settings. The accuracy of the program was further validated using two anthropomorphic phantoms (a pediatric one-year-old phantom and an adult female phantom). Computer models of the two phantoms were created based on their CT data and were voxelized for input into the Monte Carlo program. Simulated dose at various organ locations was compared against measurements made with thermoluminescent dosimetry chips for both single axial and helical scans. Results: For the cylindrical phantom, simulations differed from measurements by −4.8% to 2.2%. For the two anthropomorphic phantoms, the discrepancies between simulations and measurements ranged between (−8.1%, 8.1%) and (−17.2%, 13.0%) for the single axial scans and the helical scans, respectively. Conclusions: The authors developed an accurate Monte Carlo program for assessing radiation dose from CT examinations. 
When combined with computer models of actual patients, the program can provide accurate dose estimates for specific patients. PMID:21361208
Planning for connections in the long-term in Patagonia
Amy T. Austin
2009-01-01
Establishing a long-term ecological research program and research collaborations in northwestern Patagonia. A workshop in San Carlos de Bariloche, Argentina, January 2009. The relict flora of Gondwana, the mystic nature of the windswept Patagonian steppe, the Andes mountains and the southern beech forests, all combined, made San Carlos de Bariloche the perfect setting...
Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm
ERIC Educational Resources Information Center
Stewart, Wayne; Stewart, Sepideh
2014-01-01
For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
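The basic idea behind the algorithm can be made concrete with a minimal random-walk Metropolis sampler, shown here as a generic illustration rather than any program from the article: propose a local move, accept it with probability min(1, ratio of target densities), and record the state either way.

```python
import math
import random

def metropolis(log_target, x0=0.0, step=1.0, n_steps=10000, seed=11):
    """Random-walk Metropolis: propose x' = x + U(-step, step) and accept
    with probability min(1, target(x') / target(x)), computed in log space."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-step, step)
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal              # accept the move
        chain.append(x)               # a rejection repeats the current state
    return chain
```

For a standard normal target, `metropolis(lambda x: -0.5 * x * x)` returns a chain whose sample mean and variance approach 0 and 1 after the burn-in is discarded; the repeated states after rejections are exactly what makes the stationary distribution come out right, a point students often miss when the algorithm is presented only as mathematics.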
Recent advances and future prospects for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B
2010-01-01
The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former "method of last resort" has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.; Murphy, J.
SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.
An update on the BQCD Hybrid Monte Carlo program
NASA Astrophysics Data System (ADS)
Haar, Taylor Ryan; Nakamura, Yoshifumi; Stüben, Hinnerk
2018-03-01
We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED, action modification in order to compute matrix elements by using the Feynman-Hellmann theorem, more trace measurements (like Tr(D^{-n}) for κ, c_SW and chemical potential reweighting), a more flexible integration scheme, polynomial filtering, term-splitting for RHMC, and a portable implementation of performance-critical parts employing SIMD.
San Carlos Apache Tribe - Energy Organizational Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rapp, James; Albert, Steve
2012-04-01
The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: The analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"). Start-up staffing and other costs associated with the Phase 1 SCAT energy organization. An intern program. Staff training. Tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.
A fortran program for Monte Carlo simulation of oil-field discovery sequences
Bohling, Geoffrey C.; Davis, J.C.
1993-01-01
We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log-gamma model for the distribution of field sizes and a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
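The discovery-process model described above is straightforward to sketch. The following is a minimal illustration, not the published FORTRAN program: a lognormal distribution stands in for the three-parameter log-gamma, and the size-bias exponent `beta` is an assumed parameter.

```python
import random

def simulate_discovery(n_fields=200, beta=0.5, seed=1):
    """Sketch of a size-biased discovery process: a synthetic parent
    population is sampled without replacement, with discovery
    probability proportional to field size**beta."""
    rng = random.Random(seed)
    # Lognormal stands in for the three-parameter log-gamma
    # size distribution used by the actual program (assumption).
    fields = [rng.lognormvariate(3.0, 1.0) for _ in range(n_fields)]
    sequence = []
    while fields:
        weights = [s ** beta for s in fields]
        r = rng.uniform(0.0, sum(weights))
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if acc >= r:
                sequence.append(fields.pop(i))  # without replacement
                break
    return sequence
```

A chi-squared comparison of such a synthetic sequence against an observed one, minimized over `beta` and the population parameters, is the fitting procedure the abstract describes.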
APS undulator and wiggler sources: Monte-Carlo simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, S.L.; Lai, B.; Viccaro, P.J.
1992-02-01
Standard insertion devices will be provided to each sector by the Advanced Photon Source. It is important to define the radiation characteristics of these general-purpose devices. In this document, results of Monte-Carlo simulations are presented. These results, based on the SHADOW program, include the APS Undulator A (UA), Wiggler A (WA), and Wiggler B (WB).
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
Kim, Jeongnim; Baczewski, Andrew T; Beaudet, Todd D; Benali, Anouar; Bennett, M Chandler; Berrill, Mark A; Blunt, Nick S; Borda, Edgar Josué Landinez; Casula, Michele; Ceperley, David M; Chiesa, Simone; Clark, Bryan K; Clay, Raymond C; Delaney, Kris T; Dewing, Mark; Esler, Kenneth P; Hao, Hongxia; Heinonen, Olle; Kent, Paul R C; Krogel, Jaron T; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M Graham; Luo, Ye; Malone, Fionn D; Martin, Richard M; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A; Mitas, Lubos; Morales, Miguel A; Neuscamman, Eric; Parker, William D; Pineda Flores, Sergio D; Romero, Nichols A; Rubenstein, Brenda M; Shea, Jacqueline A R; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F; Townsend, Joshua P; Tubman, Norm M; Van Der Goetz, Brett; Vincent, Jordan E; Yang, D ChangMo; Yang, Yubo; Zhang, Shuai; Zhao, Luning
2018-05-16
QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.
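To make the variational flavour of these methods concrete (a toy illustration, not QMCPACK's implementation or API), here is a one-parameter Metropolis VMC for the 1-D harmonic oscillator with trial wavefunction psi_alpha(x) = exp(-alpha*x^2):

```python
import math
import random

def local_energy(x, alpha):
    # E_L = -psi''/(2 psi) + x^2/2  for  psi = exp(-alpha x^2)
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=7):
    """Metropolis sampling of |psi|^2; returns the averaged local energy."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        trial = x + rng.uniform(-step, step)
        # Acceptance ratio |psi(trial)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-2.0 * alpha * (trial * trial - x * x)):
            x = trial
        total += local_energy(x, alpha)
    return total / n_steps
```

At the exact ground state (alpha = 1/2) the local energy is constant, so the estimator returns 1/2 with zero variance; any other alpha gives a higher energy. This variational principle is what a wavefunction optimizer, such as the one described above, exploits.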
Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Ritsch, E.; Atlas Collaboration
2014-06-01
The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently takes the largest share of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large-scale Monte Carlo production in the ATLAS experiment for Run 2 and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.
Continuous-time quantum Monte Carlo impurity solvers
NASA Astrophysics Data System (ADS)
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states. Program summary Program title: dmft Catalogue identifier: AEIL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: ALPS LIBRARY LICENSE version 1.1 No. of lines in distributed program, including test data, etc.: 899 806 No. of bytes in distributed program, including test data, etc.: 32 153 916 Distribution format: tar.gz Programming language: C++ Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher), and Intel C++ Compiler (icc version 7.0 and higher) MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0) IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers Compaq Tru64 UNIX with Compaq C++ Compiler (cxx) SGI IRIX with MIPSpro C++ Compiler (CC) HP-UX with HP C++ Compiler (aCC) Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher) RAM: 10 MB-1 GB Classification: 7.3 External routines: ALPS [1], BLAS/LAPACK, HDF5 Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. 
They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.
Monte Carlo Computer Simulation of a Rainbow.
ERIC Educational Resources Information Center
Olson, Donald; And Others
1990-01-01
Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
Saxton, Michael J
2007-01-01
Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
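A minimal version of the obstructed-random-walk simulations described above might look as follows; the lattice size, random obstacle placement, and blocked-move rejection rule are illustrative assumptions rather than the chapter's exact algorithm.

```python
import random

def mean_square_displacement(obstacle_fraction, n_walkers=200,
                             n_steps=400, size=64, seed=3):
    """Random walkers on a periodic square lattice; moves onto
    obstacle sites are rejected, as in simple models of
    obstructed diffusion."""
    rng = random.Random(seed)
    n_obs = int(obstacle_fraction * size * size)
    blocked = {(rng.randrange(size), rng.randrange(size))
               for _ in range(n_obs)}
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    total = 0.0
    for _ in range(n_walkers):
        x = y = size // 2
        while (x % size, y % size) in blocked:  # find a free start site
            x += 1
        x0, y0 = x, y
        for _ in range(n_steps):
            dx, dy = rng.choice(moves)
            if ((x + dx) % size, (y + dy) % size) not in blocked:
                x, y = x + dx, y + dy
        total += (x - x0) ** 2 + (y - y0) ** 2
    return total / n_walkers
```

With no obstacles the mean-square displacement is close to the step count (normal diffusion); as the obstacle fraction approaches the percolation threshold, transport becomes strongly hindered and anomalous.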
Population Synthesis of Radio and γ-ray Millisecond Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Gonthier, Peter L.; Billman, C.; Harding, A. K.
2013-04-01
We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and γ-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of ten radio surveys and by Fermi, predicting the MSP birth rate in the Galaxy. We follow a similar set of assumptions that we have used in previous, more constrained Monte Carlo simulations. The parameters associated with the birth distributions such as those for the accretion rate, magnetic field and period distributions are also free to vary. With the large set of free parameters, we employ Markov Chain Monte Carlo simulations to explore the large and small worlds of the parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and γ-ray pulsar characteristics. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
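At its core, the Markov Chain Monte Carlo machinery used in such parameter-space explorations is random-walk Metropolis. A minimal one-dimensional sketch with an illustrative Gaussian "posterior" (the actual analysis varies many parameters simultaneously):

```python
import math
import random

def metropolis(log_post, x0, n_samples=5000, step=0.8, seed=11):
    """Random-walk Metropolis: propose x' = x + N(0, step); accept
    with probability min(1, p(x')/p(x))."""
    rng = random.Random(seed)
    x = x0
    lp = log_post(x)
    chain = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Illustrative target: a unit-width Gaussian "posterior" centred at 2.
chain = metropolis(lambda x: -0.5 * (x - 2.0) ** 2, x0=0.0)
mean = sum(chain) / len(chain)
```

The chain wanders toward and then samples around the posterior mode, so its running mean converges on the target mean regardless of the starting point.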
jTracker and Monte Carlo Comparison
NASA Astrophysics Data System (ADS)
Selensky, Lauren; SeaQuest/E906 Collaboration
2015-10-01
SeaQuest is designed to observe the characteristics and behavior of `sea-quarks' in a proton by reconstructing them from the subatomic particles produced in a collision. The 120 GeV beam from the main injector collides with a fixed target and then passes through a series of detectors which record information about the particles produced in the collision. However, this data becomes meaningful only after it has been processed, stored, analyzed, and interpreted. Several programs are involved in this process. jTracker (sqerp) reads wire or hodoscope hits and reconstructs the tracks of potential dimuon pairs from a run, and Geant4 Monte Carlo simulates dimuon production and background noise from the beam. During track reconstruction, an event must meet the criteria set by the tracker to be considered a viable dimuon pair; this ensures that relevant data is retained. As a check, a comparison between a new version of jTracker and Monte Carlo was made in order to see how accurately jTracker could reconstruct the events created by Monte Carlo. In this presentation, the results of this comparison and their potential effects on the programming will be shown. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.
NASA Astrophysics Data System (ADS)
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. 
Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 83617 No. of bytes in distributed program, including test data, etc.: 1038160 Distribution format: tar.gz Programming language: C++. Computer: Tested on several PCs and on Mac. Operating system: Linux, Mac OS X, Windows (native and cygwin). RAM: It is dependent on the input data but usually between 1 and 10 MB. Classification: 2.5, 21.1. External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki) Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors. Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs. Running time: It is dependent on the complexity of the simulation. 
For the examples distributed with the code, it ranges from less than 1 s to a few minutes.
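The forced-interaction idea behind this kind of variance reduction can be shown in miniature. The depth-dependent emission model below is an illustrative stand-in, not XRMC's physics: instead of waiting for a rare fluorescent emission to occur, every history scores the emission probability as a statistical weight.

```python
import math
import random

MU = 2.0   # interaction rate with depth (illustrative)
P0 = 1e-3  # surface fluorescence yield (illustrative)

def emission_prob(depth):
    return P0 * math.exp(-depth)  # emission is rarer for deep interactions

def analog(n, rng):
    """Analog MC: the rare emission either happens or it does not."""
    hits = 0
    for _ in range(n):
        if rng.random() < emission_prob(rng.expovariate(MU)):
            hits += 1
    return hits / n

def forced(n, rng):
    """Forced emission: every history scores its emission probability as
    a statistical weight -- same mean, far lower variance."""
    return sum(emission_prob(rng.expovariate(MU)) for _ in range(n)) / n
```

Both estimators are unbiased for the exact answer P0·MU/(MU+1), but the forced estimator converges with orders of magnitude fewer histories, which is the effect the abstract reports.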
Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summary Program title: Phoogle-C/Phoogle-G Catalogue identifier: AEEB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 51 264 No. of bytes in distributed program, including test data, etc.: 2 238 805 Distribution format: tar.gz Programming language: C++ Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1 Operating system: Windows XP Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures RAM: 1 GB Classification: 21.1 External routines: Charles Karney random number library. Microsoft Foundation Class library. NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. 
Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has a LGPL ( http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References: [1] http://www.nvidia.com/object/cuda_home.html. [2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
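The heart of such photon-transport codes is a short sampling loop. Below is a stripped-down analog sketch: an infinite homogeneous medium, with direction bookkeeping omitted since only path length is tallied; the coefficients are illustrative, and the actual programmes follow Prahl et al. with weighted photons and anisotropic scattering.

```python
import random

MU_A, MU_S = 0.1, 10.0   # absorption / scattering coefficients (1/mm), illustrative
MU_T = MU_A + MU_S

def path_to_absorption(rng):
    """Analog photon history: exponentially distributed free paths; at
    each interaction the photon is absorbed with probability mu_a/mu_t,
    otherwise it scatters and continues."""
    path = 0.0
    while True:
        path += rng.expovariate(MU_T)  # free path ~ Exp(mu_t)
        if rng.random() < MU_A / MU_T:
            return path

rng = random.Random(2)
mean_path = sum(path_to_absorption(rng) for _ in range(5000)) / 5000
```

The total path to absorption is itself exponentially distributed with rate mu_a, so the mean should come out near 1/MU_A = 10 mm. Since each photon history is independent, the outer loop is what maps so naturally onto one-thread-per-photon GPU execution.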
McStas 1.1: a tool for building neutron Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Lefmann, K.; Nielsen, K.; Tennant, A.; Lake, B.
2000-03-01
McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron scattering instrument. The method compares well with the analytical calculations of Popovici.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The GIBS software program is a Grand Canonical Monte Carlo (GCMC) simulation program (written in C++) that can be used for 1) computing the excess chemical potential of ions and the mean activity coefficients of salts in homogeneous electrolyte solutions; and 2) computing the distribution of ions around fixed macromolecules such as nucleic acids and proteins. The solvent can be represented as neutral hard spheres or as a dielectric continuum. The ions are represented as charged hard spheres that can interact via Coulomb, hard-sphere, or Lennard-Jones potentials. In addition to hard-sphere repulsions, the ions can also be made to interact with the solvent hard spheres via short-ranged attractive square-well potentials.
Fast quantum Monte Carlo on a GPU
NASA Astrophysics Data System (ADS)
Lutsyshyn, Y.
2015-02-01
We present a scheme for the parallelization of the quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.
Heterogeneous Hardware Parallelism Review of the IN2P3 2016 Computing School
NASA Astrophysics Data System (ADS)
Lafage, Vincent
2017-11-01
Parallel and hybrid Monte Carlo computation. The Monte Carlo method is the main workhorse for the computation of particle physics observables. This paper provides an overview of various HPC technologies that can be used today: multicore (OpenMP, HPX), manycore (OpenCL). The rewrite of a twenty-year-old Fortran 77 Monte Carlo program will illustrate the various programming paradigms in use beyond language implementation. The problem of parallel random number generation will be addressed. We also give a short report on the one-week school dedicated to these recent approaches, which took place at École Polytechnique in May 2016.
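The parallel random-number problem mentioned above is usually addressed by giving each worker its own independent stream. A minimal sketch of the independent-streams idea, with deterministically derived seeds standing in for the jump-ahead or counter-based generators (e.g. Philox) used in production codes:

```python
import math
import random

def worker_estimate(worker_id, n, root_seed=1234):
    """Each worker owns a generator seeded deterministically from
    (root_seed, worker_id), so streams differ between workers and the
    whole run is reproducible regardless of execution order."""
    rng = random.Random(root_seed * 1_000_003 + worker_id)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * inside / n  # per-worker Monte Carlo estimate of pi

# Workers can run in any order (or concurrently); the result is identical.
estimates = [worker_estimate(w, 50_000) for w in range(8)]
pi_hat = sum(estimates) / len(estimates)
```

Note that simple seed derivation does not *prove* the streams are non-overlapping; that guarantee is exactly what jump-ahead and counter-based generators provide, which is why they are preferred at scale.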
NASA Astrophysics Data System (ADS)
Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.
2012-07-01
The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT-Bates shows a good agreement with experimental data. Catalogue identifier: AELO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1299 No. of bytes in distributed program, including test data, etc.: 11 348 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: All Operating system: Any RAM: 1 MB Classification: 11.2, 11.4 Nature of problem: Simulation of radiative events in polarized ep-scattering. Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation. Running time: The simulation of 10⁸ radiative events for itest:=1 takes up to 52 seconds on a Pentium(R) Dual-Core 2.00 GHz processor.
NASA Astrophysics Data System (ADS)
Robinson, Mitchell; Butcher, Ryan; Coté, Gerard L.
2017-02-01
Monte Carlo modeling of photon propagation has been used in the examination of particular areas of the body to further enhance the understanding of light propagation through tissue. This work seeks to improve upon the established simulation methods through more accurate representations of the simulated tissues in the wrist as well as the characteristics of the light source. The Monte Carlo simulation program was developed using Matlab. Generation of the different tissue domains, such as muscle, vasculature, and bone, was performed in SolidWorks, where each domain was saved as a separate .stl file that was read into the program. The light source model was altered to account for both the viewing angle of the simulated LED and the nominal diameter of the source. It is believed that the use of these more accurate models generates results that more closely match those seen in vivo, and can be used to better guide the design of optical wrist-worn measurement devices.
A Comparison of Electrolytic Capacitors and Supercapacitors for Piezo-Based Energy Harvesting
2013-07-01
Matthew H. Ervin (Sensors and Electronic Devices Directorate, ARL); Carlos M. Pereira; John R…
LCG MCDB—a knowledgebase of Monte-Carlo simulated events
NASA Astrophysics Data System (ADS)
Belov, S.; Dudko, L.; Galkin, E.; Gusev, A.; Pokorski, W.; Sherstnev, A.
2008-02-01
In this paper we report on the LCG Monte-Carlo Data Base (MCDB) and the software which has been developed to operate MCDB. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC Collaborations by experts. In many cases, the modern Monte-Carlo simulation of physical processes requires expert knowledge of Monte-Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly dedicated to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make the sophisticated MC event samples available for various physical groups. All the data from MCDB is accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project. Program summary Program title: LCG Monte-Carlo Data Base Catalogue identifier: ADZX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence No. of lines in distributed program, including test data, etc.: 30 129 No. 
of bytes in distributed program, including test data, etc.: 216 943 Distribution format: tar.gz Programming language: Perl Computer: CPU: Intel Pentium 4, RAM: 1 Gb, HDD: 100 Gb Operating system: Scientific Linux CERN 3/4 RAM: 1 073 741 824 bytes (1 Gb) Classification: 9 External routines: perl >= 5.8.5; Perl modules DBD-mysql >= 2.9004, File::Basename, GD::SecurityImage, GD::SecurityImage::AC, Linux::Statistics, XML::LibXML > 1.6, XML::SAX, XML::NamespaceSupport; Apache HTTP Server >= 2.0.59; mod auth external >= 2.2.9; edg-utils-system RPM package; gd >= 2.0.28; rpm package CASTOR-client >= 2.1.2-4; arc-server (optional) Nature of problem: Often, different groups of experimentalists prepare similar samples of particle collision events or turn to the same group of authors of Monte-Carlo (MC) generators to prepare the events. For example, the same MC samples of Standard Model (SM) processes can be employed for investigations either in SM analyses (as a signal) or in searches for new phenomena in Beyond Standard Model analyses (as a background). If the samples are made available publicly and equipped with comprehensive documentation, this can speed up cross checks of the samples themselves and of the physical models applied. Some event samples require a lot of computing resources to prepare, so central storage of the samples prevents the waste of researcher time and computing resources that would otherwise be spent preparing the same events many times. Solution method: Creation of a special knowledgebase (MCDB) designed to keep event samples for the LHC experimental and phenomenological community. The knowledgebase is realized as a separate web-server ( http://mcdb.cern.ch). All event samples are kept on tapes at CERN. Documentation describing the events is the main content of MCDB. Users can browse the knowledgebase, read and comment on articles (documentation), and download event samples. 
Authors can upload new event samples, create new articles, and edit their own articles. Restrictions: The software is adapted to solve the problems described in the article; there are no additional restrictions. Unusual features: The software provides a framework to store and document large files with a flexible authentication and authorization system. Different external storages with large capacity can be used to keep the files. The web content management system provides all of the necessary interfaces for the authors of the files, end-users and administrators. Running time: Real-time operations. References: [1] The main LCG MCDB server, http://mcdb.cern.ch/. [2] P. Bartalini, L. Dudko, A. Kryukov, I.V. Selyuzhenkov, A. Sherstnev, A. Vologdin, LCG Monte-Carlo data base, hep-ph/0404241. [3] J.P. Baud, B. Couturier, C. Curran, J.D. Durand, E. Knezo, S. Occhetti, O. Barring, CASTOR: status and evolution, cs.oh/0305047.
Trade Space Analysis: Rotational Analyst Research Project
2015-09-01
… response surface method (RSM) / response surface equations (RSEs) as surrogate models. It uses the RSEs with Monte Carlo simulation to quantitatively…
PCDAQ, A Windows Based DAQ System
NASA Astrophysics Data System (ADS)
Hogan, Gary
1998-10-01
PCDAQ is a Windows NT based general DAQ/Analysis/Monte Carlo shell developed as part of the Proton Radiography project at LANL (Los Alamos National Laboratory). It has been adopted by experiments outside of the Proton Radiography project at Brookhaven National Laboratory (BNL) and at LANL. The program provides DAQ, Monte Carlo, and replay (disk file input) modes. Data can be read from hardware (CAMAC) or other programs (ActiveX servers). Future versions will read VME. User supplied data analysis routines can be written in Fortran, C++, or Visual Basic. Histogramming, testing, and plotting packages are provided. Histogram data can be exported to spreadsheets or analyzed in user supplied programs. Plots can be copied and pasted as bitmap objects into other Windows programs or printed. A text database keyed by the run number is provided. Extensive software control flags are provided so that the user can control the flow of data through the program. Control flags can be set either in script command files or interactively. The program can be remotely controlled and data accessed over the Internet through its ActiveX DCOM interface.
NASA Technical Reports Server (NTRS)
1982-01-01
A FORTRAN coded computer program and method to predict the reaction control fuel consumption statistics for a three axis stabilized rocket vehicle upper stage is described. A Monte Carlo approach is used, made more efficient by closed-form estimates of impulses. The effects of rocket motor thrust misalignment, static unbalance, aerodynamic disturbances, and deviations in trajectory, mass properties and control system characteristics are included. This routine can be applied to many types of on-off reaction controlled vehicles. The pseudorandom number generation and statistical analysis subroutines, including the output histograms, can be used for other Monte Carlo analysis problems.
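The sample-then-evaluate pattern the abstract describes can be sketched in a few lines: draw disturbance parameters from their distributions, use a closed-form estimate of the total impulse instead of integrating the full dynamics, and accumulate statistics. Everything below (the distributions, constants, and the impulse formula) is illustrative, not taken from the report:

```python
import random
import statistics

def fuel_for_one_flight(rng):
    """One Monte Carlo trial: draw disturbance parameters and apply a
    closed-form impulse estimate.  All distributions and constants here
    are invented for illustration, not the report's."""
    thrust_misalign = rng.gauss(0.0, 0.25)   # deg
    static_unbalance = rng.gauss(0.0, 0.01)  # m
    aero_torque = rng.gauss(0.0, 5.0)        # N*m
    # Closed-form stand-in: total impulse grows with disturbance magnitudes.
    impulse = (120.0 + 40.0 * abs(thrust_misalign)
               + 800.0 * abs(static_unbalance) + 0.6 * abs(aero_torque))
    isp, g0 = 220.0, 9.80665                 # thruster specific impulse, s
    return impulse / (isp * g0)              # propellant mass, kg

rng = random.Random(42)
samples = [fuel_for_one_flight(rng) for _ in range(20000)]
mean = statistics.fmean(samples)
p99 = sorted(samples)[int(0.99 * len(samples))]
print(f"mean fuel {mean:.3f} kg, 99th percentile {p99:.3f} kg")
```

The histogram of `samples` would play the role of the report's output histograms; the 99th percentile is the kind of sizing statistic such a program reports.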
NASA Technical Reports Server (NTRS)
1973-01-01
The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
Monte Carlo tests of the ELIPGRID-PC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
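The quantity being validated can itself be estimated by Monte Carlo, as the document does: place the elliptical hot spot at a random position and orientation and check whether any node of the sampling grid falls inside it. A minimal sketch (square rather than rectangular grid; spacing, axis ratio, and sample counts are arbitrary choices, not the paper's test cases):

```python
import math, random

def detect_prob(semi_major, shape, grid, n=20000, seed=1):
    """Monte Carlo estimate of the probability that a square sampling grid
    of spacing `grid` hits an elliptical hot spot with semi-major axis
    `semi_major` and axis ratio `shape` (minor/major), randomly placed and
    oriented.  A simplified re-derivation of what ELIPGRID computes
    analytically, not the ELIPGRID code itself."""
    rng = random.Random(seed)
    semi_minor = shape * semi_major
    hits = 0
    for _ in range(n):
        # By symmetry, the hot-spot center is uniform over one grid cell.
        cx, cy = rng.uniform(0, grid), rng.uniform(0, grid)
        theta = rng.uniform(0, math.pi)
        c, s = math.cos(theta), math.sin(theta)
        found = False
        r = int(semi_major // grid) + 2   # nodes close enough to matter
        for i in range(-r, r + 1):
            for j in range(-r, r + 1):
                dx, dy = i * grid - cx, j * grid - cy
                # Rotate into the ellipse frame and test containment.
                u = (dx * c + dy * s) / semi_major
                v = (-dx * s + dy * c) / semi_minor
                if u * u + v * v <= 1.0:
                    found = True
                    break
            if found:
                break
        hits += found
    return hits / n

# A hot spot larger than the grid spacing is essentially always detected;
# a much smaller one is detected roughly in proportion to its area.
print(detect_prob(1.2, 0.8, 1.0), detect_prob(0.2, 0.8, 1.0))
```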
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflection from or adsorption at a boundary.
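A minimal version of such a random-walk experiment, with the simulated variance checked against the diffusion-theory prediction (the walk length and number of walkers are arbitrary choices):

```python
import random
import statistics

def random_walk(n_steps, rng):
    """Unbiased 1-D random walk: the sum of n_steps independent +-1 steps."""
    return sum(rng.choice((-1, 1)) for _ in range(n_steps))

rng = random.Random(0)
n_steps, n_walkers = 100, 5000
finals = [random_walk(n_steps, rng) for _ in range(n_walkers)]

# Diffusion theory predicts <x> = 0 and Var(x) = n_steps for this walk;
# drift would be modeled by biasing the step probabilities.
print(statistics.fmean(finals), statistics.pvariance(finals))
```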
Sherbini, S; Tamasanis, D; Sykes, J; Porter, S W
1986-12-01
A program was developed to calculate the exposure rate resulting from airborne gases inside a reactor containment building. The calculations were performed at the location of a wall-mounted area radiation monitor. The program uses Monte Carlo techniques and accounts for both the direct and scattered components of the radiation field at the detector. The scattered component was found to contribute about 30% of the total exposure rate at 50 keV and dropped to about 7% at 2000 keV. The results of the calculations were normalized to unit activity per unit volume of air in the containment. This allows the exposure rate readings of the area monitor to be used to estimate the airborne activity in containment in the early phases of an accident. Such estimates, coupled with containment leak rates, provide a method to obtain a release rate for use in offsite dose projection calculations.
Common radiation analysis model for 75,000 pound thrust NERVA engine (1137400E)
NASA Technical Reports Server (NTRS)
Warman, E. A.; Lindsey, B. A.
1972-01-01
The mathematical model and sources of radiation used for the radiation analysis and shielding activities in support of the design of the 1137400E version of the 75,000 lb thrust NERVA engine are presented. The nuclear subsystem (NSS) and non-nuclear components are discussed. The geometrical model for the NSS is two dimensional, as required for the DOT discrete ordinates computer code or for an azimuthally symmetrical three dimensional Point Kernel or Monte Carlo code. The geometrical model for the non-nuclear components is three dimensional in the FASTER geometry format. This geometry routine is inherent in the ANSC versions of the QAD and GGG Point Kernel programs and the COHORT Monte Carlo program. Data are included pertaining to a pressure vessel surface radiation source data tape which has been used as the basis for starting ANSC analyses with the DASH code to bridge into the COHORT Monte Carlo code using the WANL-supplied DOT angular flux leakage data. In addition to the model descriptions and sources of radiation, the methods of analysis are briefly described.
DOT National Transportation Integrated Search
2001-02-01
A new version of the CRCP computer program, CRCP-9, has been developed in this study. The numerical model of the CRC pavements was developed using finite element theories, the crack spacing prediction model was developed using the Monte Carlo method,...
A Monte Carlo Program for Simulating Selection Decisions from Personnel Tests
ERIC Educational Resources Information Center
Petersen, Calvin R.; Thain, John W.
1976-01-01
Given the test and criterion parameters, cutting scores, correlation coefficient, sample size, and number of samples to be drawn (all inputs), this program calculates decision classification rates across samples and for combined samples. Several other related indices are also computed. (Author)
NASA Astrophysics Data System (ADS)
Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei
2018-02-01
The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexity of the target geometry, the MC simulation of particle tracks is computationally very expensive. Thus, improving the computational efficiency will be essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitate rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
Hypothesis testing of scientific Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitate rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
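The idea can be illustrated on a toy simulation: estimate a quantity with a known exact value, compute a standard error from the same run, and apply a z-test. The pi estimator below is our own illustration of the technique, not an example taken from the paper:

```python
import math, random

def mc_pi(n, rng):
    """Monte Carlo estimate of pi, with a standard error from the same run."""
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    p = hits / n
    est = 4.0 * p
    se = 4.0 * math.sqrt(p * (1.0 - p) / n)   # binomial standard error
    return est, se

rng = random.Random(7)
est, se = mc_pi(200_000, rng)
z = (est - math.pi) / se
# Two-sided test at the 1% level: |z| < 2.576 means the run is statistically
# consistent with the exact value; a systematically buggy sampler would
# fail this test with high probability.
print(f"estimate={est:.4f}, z={z:.2f}, consistent={abs(z) < 2.576}")
```

Repeating the test over many independent seeds, and checking that the rejection rate matches the chosen significance level, is the automated version of this check.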
Calculation of self–shielding factor for neutron activation experiments using GEANT4 and MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero–Barrientos, Jaime, E-mail: jaromero@ing.uchile.cl; Universidad de Chile, DFI, Facultad de Ciencias Físicas Y Matemáticas, Avenida Blanco Encalada 2008, Santiago; Molina, F.
2016-07-07
The neutron self-shielding factor G as a function of the neutron energy was obtained for 14 pure metallic samples in 1000 isolethargic energy bins from 1×10⁻⁵ eV to 2×10⁷ eV using Monte Carlo simulations in GEANT4 and MCNP6. The comparison of these two Monte Carlo codes shows small differences in the final self-shielding factor, mostly due to the different cross section databases that each program uses.
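For intuition, a one-group toy version of a self-shielding calculation (not the GEANT4/MCNP transport of the paper) can be sampled by Monte Carlo and checked against the analytic slab result:

```python
import math, random

def shielding_factor_mc(sigma_t, thickness, n=100_000, seed=3):
    """Monte Carlo estimate of the flux self-shielding factor G for a slab
    sample in a parallel beam: the flux averaged over the sample depth,
    relative to the unperturbed surface flux.  A toy one-group model with a
    single total cross section, not the full transport of the paper."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.uniform(0.0, thickness)   # random depth inside the sample
        total += math.exp(-sigma_t * x)   # beam attenuation to that depth
    return total / n

sigma_t, d = 2.0, 0.5   # macroscopic cross section (1/cm), thickness (cm)
g_mc = shielding_factor_mc(sigma_t, d)
g_exact = (1 - math.exp(-sigma_t * d)) / (sigma_t * d)   # analytic check
print(f"G_mc={g_mc:.4f}, G_exact={g_exact:.4f}")
```

Repeating this per energy bin, with the cross section taken from a database, gives G as a function of energy; differing databases then shift G exactly as the abstract describes.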
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
This report describes the structure and operation of prototype computer programs developed for a Monte Carlo simulation model, GENESIS, and for two analytical models, OPCON and OPPLAN. It includes input data requirements and sample test cases.
Basurto-Dávila, Ricardo; Meltzer, Martin I; Mills, Dora A; Beeler Asay, Garrett R; Cho, Bo-Hyun; Graitcer, Samuel B; Dube, Nancy L; Thompson, Mark G; Patel, Suchita A; Peasah, Samuel K; Ferdinands, Jill M; Gargiullo, Paul; Messonnier, Mark; Shay, David K
2017-12-01
To estimate the societal economic and health impacts of Maine's school-based influenza vaccination (SIV) program during the 2009 A(H1N1) influenza pandemic. Primary and secondary data covering the 2008-09 and 2009-10 influenza seasons. We estimated weekly monovalent influenza vaccine uptake in Maine and 15 other states, using difference-in-difference-in-differences analysis to assess the program's impact on immunization among six age groups. We also developed a health and economic Markov microsimulation model and conducted Monte Carlo sensitivity analysis. We used national survey data to estimate the impact of the SIV program on vaccine coverage. We used primary data and published studies to develop the microsimulation model. The program was associated with higher immunization among children and lower immunization among adults aged 18-49 years and 65 and older. The program prevented 4,600 influenza infections and generated $4.9 million in net economic benefits. Cost savings from lower adult vaccination accounted for 54 percent of the economic gain. Economic benefits were positive in 98 percent of Monte Carlo simulations. SIV may be a cost-beneficial approach to increase immunization during pandemics, but programs should be designed to prevent lower immunization among nontargeted groups. © Health Research and Educational Trust.
Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program
NASA Astrophysics Data System (ADS)
Serrano, Agostinho; Santos, Flávia M. T.; Greca, Ileana M.
2004-09-01
It is shown how basic aspects of ionic solvation structure, a fundamental topic for understanding different concepts and levels of representations of chemical structure and transformation, can be taught with the help of a Monte Carlo simulation package for molecular liquids. By performing a pair distribution function analysis of the solvation of Na⁺, Cl⁻, and Ar in water, it is shown that it is feasible to explain the differences in solvation for these differently charged solutes. Visual representations of the solvated ions can also be employed to help the teaching activity. This may serve as an introduction to the study of solvation structure in chemistry undergraduate courses. The advantages of using tested, up-to-date scientific simulation programs as the fundamental bricks in the construction of virtual laboratories are also discussed.
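The pair (radial) distribution function analysis mentioned above can be sketched on an uncorrelated "ideal gas" of points, for which g(r) should be flat at 1; in a real solvation analysis the peaks of g(r) around an ion mark its solvation shells. Box size, particle number, and binning below are arbitrary:

```python
import math, random

def pair_distribution(positions, box, r_max, n_bins):
    """Radial pair distribution function g(r) in a cubic periodic box.
    Pair counts in each spherical shell are divided by the count expected
    for an uncorrelated (ideal-gas) system at the same density."""
    n = len(positions)
    dr = r_max / n_bins
    hist = [0] * n_bins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                dx = positions[i][a] - positions[j][a]
                dx -= box * round(dx / box)   # minimum-image convention
                d2 += dx * dx
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 1
    rho = n / box ** 3
    g = []
    for k in range(n_bins):
        shell = 4.0 / 3.0 * math.pi * (((k + 1) * dr) ** 3 - (k * dr) ** 3)
        ideal_pairs = 0.5 * n * rho * shell   # expected pairs in this shell
        g.append(hist[k] / ideal_pairs)
    return g

rng = random.Random(5)
box = 10.0
pts = [(rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
       for _ in range(400)]
g = pair_distribution(pts, box, r_max=4.0, n_bins=8)
print([round(x, 2) for x in g])   # near 1.0 everywhere for an ideal gas
```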
MCMC multilocus lod scores: application of a new approach.
George, Andrew W; Wijsman, Ellen M; Thompson, Elizabeth A
2005-01-01
On extended pedigrees with extensive missing data, the calculation of multilocus likelihoods for linkage analysis is often beyond the computational bounds of exact methods. Growing interest therefore surrounds the implementation of Monte Carlo estimation methods. In this paper, we demonstrate the speed and accuracy of a new Markov chain Monte Carlo method for the estimation of linkage likelihoods through an analysis of real data from a study of early-onset Alzheimer's disease. For those data sets where comparison with exact analysis is possible, we achieved up to a 100-fold increase in speed. Our approach is implemented in the program lm_bayes within the framework of the freely available MORGAN 2.6 package for Monte Carlo genetic analysis (http://www.stat.washington.edu/thompson/Genepi/MORGAN/Morgan.shtml).
ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.
ERIC Educational Resources Information Center
Vale, C. David; Gialluca, Kathleen A.
ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo simulation techniques. The current version of ASCAL was then compared to…
Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
2012-09-25
Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
2012-01-01
Background Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs. PMID:23009363
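The multiple-chains approach can be sketched with a toy Metropolis sampler: the chains share no state, so each call could be dispatched to its own process or node, and a Gelman-Rubin statistic pools them afterwards to check convergence. The target and all tuning constants below are illustrative, not the genomic models of the paper:

```python
import math, random

def run_chain(seed, n_samples, target_logpdf, x0=0.0, step=1.0):
    """One Metropolis chain.  Chains are fully independent, so in the
    multiple-chains approach each call can run on its own worker."""
    rng = random.Random(seed)
    x, logp = x0, target_logpdf(x0)
    out = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        logp_prop = target_logpdf(prop)
        if math.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
        out.append(x)
    return out

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat: compares between-chain and
    within-chain variance; values near 1 indicate the chains agree."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * w + b / n
    return math.sqrt(var_hat / w)

target = lambda x: -0.5 * x * x   # standard normal, up to a constant
chains = [run_chain(seed, 5000, target, x0=float(seed)) for seed in range(4)]
print(f"R-hat = {gelman_rubin(chains):.3f}")
```

The within-a-single-chain parallelization mentioned in the abstract is a different strategy (e.g. parallelizing the likelihood evaluation) and is not shown here.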
Bahreyni Toossi, M T; Moradi, H; Zare, H
2008-01-01
In this work, the general purpose Monte Carlo N-particle radiation transport computer code (MCNP-4C) was used for the simulation of X-ray spectra in diagnostic radiology. The electron's path in the target was followed until its energy was reduced to 10 keV. A user-friendly interface named 'diagnostic X-ray spectra by Monte Carlo simulation (DXRaySMCS)' was developed to facilitate the application of the MCNP-4C code for diagnostic radiology spectrum prediction. The program provides a user-friendly interface for: (i) modifying the MCNP input file, (ii) launching the MCNP program to simulate electron and photon transport and (iii) processing the MCNP output file to yield a summary of the results (relative photon number per energy bin). In this article, the development and characteristics of DXRaySMCS are outlined. As part of the validation process, output spectra for 46 diagnostic radiology system settings produced by DXRaySMCS were compared with the corresponding IPEM78 spectra. Generally, there is good agreement between the two sets of spectra. No statistically significant differences have been observed between the IPEM78 reported spectra and the simulated spectra generated in this study.
NASA Astrophysics Data System (ADS)
Davidson, N.; Golonka, P.; Przedziński, T.; Waş, Z.
2011-03-01
Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Since 2002 new functionalities have been introduced into the package. In particular, it now works with the HepMC event record, the standard for C++ programs. The complete set-up for benchmarking the interfaces, such as the interface between τ-lepton production and decay including QED bremsstrahlung effects, is shown. The example is chosen to illustrate the new options introduced into the program. From the technical perspective, our paper documents software updates and supplements previous documentation. As in the past, our test consists of two steps. Distinct Monte Carlo programs are run separately; events with decays of a chosen particle are searched for, and information is stored by MC-TESTER. Then, at the analysis step, information from a pair of runs may be compared and represented in the form of tables and plots. Updates introduced in the program up to version 1.24.4 are also documented. In particular, new configuration scripts and a script to combine results from a multitude of runs into a single information file to be used in the analysis step are explained. Program summary Program title: MC-TESTER, version 1.23 and version 1.24.4 Catalog identifier: ADSM_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSM_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 250 548 No. 
of bytes in distributed program, including test data, etc.: 4 290 610 Distribution format: tar.gz Programming language: C++, FORTRAN77 Tested and compiled with: gcc 3.4.6, 4.2.4 and 4.3.2 with g77/gfortran Computer: Tested on various platforms Operating system: Tested on operating systems: Linux SLC 4.6 and SLC 5, Fedora 8, Ubuntu 8.2 etc. Classification: 11.9 External routines: HepMC ( https://savannah.cern.ch/projects/hepmc/), PYTHIA8 ( http://home.thep.lu.se/~torbjorn/Pythia.html), LaTeX ( http://www.latex-project.org/) Catalog identifier of previous version: ADSM_v1_0 Journal reference of previous version: Comput. Phys. Comm. 157 (2004) 39 Does the new version supersede the previous version?: Yes Nature of problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable for the development of new programs, in order to check correctness of the installations or for discussion of uncertainties. Solution method: A typical HEP Monte Carlo program stores the generated events in event records such as HepMC, HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of the found decay modes is successively incremented, and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can later be compared. A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted and a parameter quantifying the shape difference is calculated. Its maximum over every decay channel is printed in the summary table. Reasons for new version: Interface for the HepMC event record is introduced. 
A setup for benchmarking the interfaces, such as that between τ-lepton production and decay including QED bremsstrahlung effects, is introduced as well. This required significant changes in the algorithm. As a consequence, a new version of the code was introduced. Restrictions: Only the first 200 decay channels that are found will initialize histograms, and if the multiplicity of decay products in a given channel is larger than 7, histograms will not be created for that channel. Additional comments: New features: HepMC interface, use of lists in definition of histograms and decay channels, filters for decay products or secondary decays to be omitted, bug fixes, extended flexibility in representation of program output, installation configuration scripts, merging multiple output files from separate generations. Running time: Varies substantially with the analyzed decay particle, but generally the speed estimates for the old version remain valid. On a PC/Linux with 2.0 GHz processors MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; LaTeX processing takes an additional 10 seconds. Generation step runs may be executed simultaneously on multiprocessor machines.
NASA Technical Reports Server (NTRS)
1976-01-01
The CTRANS program, designed to perform radiative transfer computations in an atmosphere with horizontal inhomogeneities (clouds), is described. Since the atmosphere-ground system was to be richly detailed, the Monte Carlo method was employed. This means that results are obtained through direct modeling of the physical process of radiative transport. The effects of atmospheric or ground albedo pattern detail are essentially built up from their impact upon the transport of individual photons. The CTRANS program actually tracks the photons backwards through the atmosphere, initiating them at a receiver and following them backwards along their path to the Sun. The pattern of incident photons generated through backwards tracking automatically reflects the importance to the receiver of each region of the sky. Further, through backwards tracking, the impact of the finite field of view of the receiver and variations in its response over the field of view can be directly simulated.
Monte Carlo technique for very large ising models
NASA Astrophysics Data System (ADS)
Kalle, C.; Winkelmann, V.
1982-08-01
Rebbi's multispin coding technique is improved and applied to the kinetic Ising model of size 600×600×600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 T_c is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, and M(t = 0) = 1 initially.
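The core trick of multispin coding, one spin per bit so that a whole machine word of sites is processed with a few bitwise operations, can be illustrated on a 1-D periodic chain. This sketch only counts unsatisfied bonds and applies flip masks; it is not Rebbi's full update:

```python
import random

N = 64  # one integer holds a 64-site periodic Ising chain, one spin per bit
MASK = (1 << N) - 1
rng = random.Random(11)
state = rng.getrandbits(N)

def rotate(s):
    """Cyclic shift by one site: aligns every spin with its right neighbour."""
    return ((s >> 1) | (s << (N - 1))) & MASK

def unsatisfied_bonds(s):
    """XOR with the shifted word marks every anti-parallel neighbour pair,
    and one population count totals them: all 64 sites in a few word ops."""
    return bin(s ^ rotate(s)).count("1")

e0 = unsatisfied_bonds(state)
state ^= MASK                    # flip all spins with one XOR; energy invariant
assert unsatisfied_bonds(state) == e0
flip_mask = rng.getrandbits(N)   # in a real sweep this mask comes from the
state ^= flip_mask               # per-site acceptance tests, applied at once
print("unsatisfied bonds:", unsatisfied_bonds(state))
```

Building the acceptance mask itself from bitwise logic on the neighbour words is the part of the technique that makes the full update vectorize on word-oriented machines like the Cyber 76.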
A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories
NASA Technical Reports Server (NTRS)
Barkai, D.; Moriarty, K. J. M.; Rebbi, C.
1984-01-01
New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm on the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16^4 lattice on a 2-pipe system may be phrased in terms of the link update time or the overall MFLOPS rate. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits), or 101.5 MFLOPS.
2010-01-01
Conformations for all three systems were generated by exhaustive Monte Carlo searching. Relative conformational energies were calculated at the ... routines of the Maestro (v. 6.5)/Macromodel-Batchmin (8.6) suite of programs. The number of Monte Carlo steps for the searches was 500 000. Energy ... set using the B3LYP hybrid density functional. Single-point energies at the MP2/aug-cc-pVDZ and MP2/aug-cc-pVTZ levels of theory were obtained
A comparison of Monte-Carlo simulations using RESTRAX and McSTAS with experiment on IN14
NASA Astrophysics Data System (ADS)
Wildes, A. R.; S̆aroun, J.; Farhi, E.; Anderson, I.; Høghøj, P.; Brochier, A.
2000-03-01
Monte-Carlo simulations of a focusing supermirror guide after the monochromator on the IN14 cold neutron three-axis spectrometer, I.L.L., were carried out using the instrument simulation programs RESTRAX and McSTAS. The simulations were compared to experiment to check their accuracy. Flux ratios over both a 100 and a 1600 mm² area at the sample position compare well, and there is very close agreement between simulation and experiment for the energy spread of the incident beam.
The QUELCE Method: Using Change Drivers to Estimate Program Costs
2016-08-01
QUELCE computes a distribution of program costs based on Monte Carlo analysis of program cost drivers, assessed via analyses of dependency structure ... possible scenarios. These include a dependency structure matrix to understand the interaction of change drivers for a specific project ... performed by the SEI or by company analysts. From the workshop results, analysts create a dependency structure matrix (DSM) of the change drivers
NASA Astrophysics Data System (ADS)
Komura, Yukihiro; Okabe, Yutaka
2014-03-01
We present sample CUDA programs for the GPU computing of the Swendsen-Wang multi-cluster spin flip algorithm. We deal with the classical spin models; the Ising model, the q-state Potts model, and the classical XY model. As for the lattice, both the 2D (square) lattice and the 3D (simple cubic) lattice are treated. We already reported the idea of the GPU implementation for 2D models (Komura and Okabe, 2012). We here explain the details of sample programs, and discuss the performance of the present GPU implementation for the 3D Ising and XY models. We also show the calculated results of the moment ratio for these models, and discuss phase transitions. Catalogue identifier: AERM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERM_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5632 No. of bytes in distributed program, including test data, etc.: 14688 Distribution format: tar.gz Programming language: C, CUDA. Computer: System with an NVIDIA CUDA enabled GPU. Operating system: System with an NVIDIA CUDA enabled GPU. Classification: 23. External routines: NVIDIA CUDA Toolkit 3.0 or newer Nature of problem: Monte Carlo simulation of classical spin systems. Ising, q-state Potts model, and the classical XY model are treated for both two-dimensional and three-dimensional lattices. Solution method: GPU-based Swendsen-Wang multi-cluster spin flip Monte Carlo method. The CUDA implementation for the cluster-labeling is based on the work by Hawick et al. [1] and that by Kalentev et al. [2]. Restrictions: The system size is limited depending on the memory of a GPU. Running time: For the parameters used in the sample programs, it takes about a minute for each program. Of course, it depends on the system size, the number of Monte Carlo steps, etc. References: [1] K.A. 
Hawick, A. Leist, and D. P. Playne, Parallel Computing 36 (2010) 655-678 [2] O. Kalentev, A. Rai, S. Kemnitzb, and R. Schneider, J. Parallel Distrib. Comput. 71 (2011) 615-620
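A serial sketch of the Swendsen-Wang step itself, bonding aligned neighbours with probability 1 - exp(-2β) and then flipping each cluster with probability 1/2, with union-find standing in for the paper's GPU cluster labeling. The lattice size and temperature below are arbitrary:

```python
import math, random

def swendsen_wang_sweep(spins, L, beta, rng):
    """One Swendsen-Wang sweep on an L x L periodic Ising lattice:
    bond aligned neighbours with probability 1-exp(-2*beta), then flip
    each resulting cluster with probability 1/2.  Serial union-find
    replaces the GPU cluster labeling of the paper."""
    n = L * L
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    p_bond = 1.0 - math.exp(-2.0 * beta)
    for x in range(L):
        for y in range(L):
            i = x * L + y
            # Right and down neighbours cover every bond exactly once.
            for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                if spins[i] == spins[j] and rng.random() < p_bond:
                    parent[find(i)] = find(j)
    flip = {}
    for i in range(n):
        r = find(i)
        if r not in flip:
            flip[r] = rng.random() < 0.5
        if flip[r]:
            spins[i] = -spins[i]

L, beta = 16, 0.6   # beta above the 2D critical value ~0.4407: ordered phase
rng = random.Random(2)
spins = [rng.choice((-1, 1)) for _ in range(L * L)]
for _ in range(50):
    swendsen_wang_sweep(spins, L, beta, rng)
m = abs(sum(spins)) / (L * L)
print(f"|magnetization| = {m:.2f}")   # large in the low-temperature phase
```

Cluster updates like this decorrelate configurations far faster than single-spin flips near the transition, which is why the algorithm is worth porting to GPUs.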
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
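The map/reduce split described above can be sketched without Hadoop: map tasks simulate batches of photon histories and emit partial absorption tallies, and a reduce task sums them. The single-exponential "photon" model below is a toy stand-in for MC321, and the batch sizes are arbitrary:

```python
import random
from functools import reduce

def map_task(args):
    """Map task: simulate a batch of photon histories and return a partial
    absorption tally binned by depth.  Toy model: each photon is absorbed
    at an exponentially distributed depth with coefficient mu_a."""
    seed, n_photons, mu_a, dz, n_bins = args
    rng = random.Random(seed)
    tally = [0] * n_bins
    for _ in range(n_photons):
        depth = rng.expovariate(mu_a)
        b = int(depth / dz)
        if b < n_bins:
            tally[b] += 1
    return tally

def reduce_task(a, b):
    """Reduce task: sum partial tallies element-wise."""
    return [x + y for x, y in zip(a, b)]

# Four "mappers" (each could run on its own node) and one reducer; because
# the batches are independent, losing a mapper only means re-running it.
jobs = [(seed, 50_000, 1.0, 0.25, 8) for seed in range(4)]
total = reduce(reduce_task, map(map_task, jobs))
print(total)   # counts fall off roughly exponentially with depth
```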
Neutrino oscillation parameter sampling with MonteCUBES
NASA Astrophysics Data System (ADS)
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling. Program summary Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator) Catalogue identifier: AEFJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence No. of lines in distributed program, including test data, etc.: 69 634 No. of bytes in distributed program, including test data, etc.: 3 980 776 Distribution format: tar.gz Programming language: C Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed Operating system: 32 bit and 64 bit Linux RAM: Typically a few MBs Classification: 11.1 External routines: GLoBES [1,2] and routines/libraries used by GLoBES Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439 Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments.
In general, such new physics implies high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, as used in GLoBES [1,2]. Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software ensures that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES. Additional comments: A Matlab GUI for interpretation of results is included in the distribution. Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours. References: P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333. P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187. S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
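The Markov Chain Monte Carlo kernel at the heart of such a sampler can be sketched as a random-walk Metropolis-Hastings loop. The two-parameter Gaussian "posterior" below is a toy stand-in for an oscillation-parameter fit, not the MonteCUBES/GLoBES likelihood:

```python
import math, random

def metropolis_hastings(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose a Gaussian step around the
    current point, accept with probability min(1, exp(logpost' - logpost))."""
    rng = random.Random(seed)
    x, lp = list(x0), log_post(x0)
    chain = []
    for _ in range(n_samples):
        prop = [xi + rng.gauss(0.0, step) for xi in x]
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop          # accept the proposal
        chain.append(list(x))              # rejected moves repeat the point
    return chain

# Toy 2-parameter Gaussian "posterior" standing in for an oscillation fit:
# one parameter centred at 0, the other at 1.
log_post = lambda v: -0.5 * (v[0] ** 2 + (v[1] - 1.0) ** 2)
chain = metropolis_hastings(log_post, [0.0, 0.0], 20000)
```

The cost per step is one posterior evaluation regardless of dimension, which is why the complexity does not grow exponentially with the number of parameters, unlike grid projections.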
An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates
NASA Astrophysics Data System (ADS)
Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin
2014-03-01
The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
NASA Astrophysics Data System (ADS)
Graham, Eleanor; Cuore Collaboration
2017-09-01
The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow us to understand the half-life ranges that the detector can probe and to evaluate the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, J. Jr.
1977-12-01
Four H₂O-moderated, slightly-enriched-uranium critical experiments were analyzed by Monte Carlo methods with ENDF/B-IV data. These were simple metal-rod lattices comprising Cross Section Evaluation Working Group thermal reactor benchmarks TRX-1 through TRX-4. Generally good agreement with experiment was obtained for calculated integral parameters: the epithermal/thermal ratio of ²³⁸U capture (ρ²⁸) and of ²³⁵U fission (δ²⁵), the ratio of ²³⁸U capture to ²³⁵U fission (CR*), and the ratio of ²³⁸U fission to ²³⁵U fission (δ²⁸). Full-core Monte Carlo calculations for two lattices showed good agreement with cell Monte Carlo plus multigroup P_l leakage corrections. Newly measured parameters for the low-energy resonances of ²³⁸U significantly improved ρ²⁸. In comparison with other CSEWG analyses, the strong correlation between k_eff and ρ²⁸ suggests that ²³⁸U resonance capture is the major problem encountered in analyzing these lattices.
Badal, Andreu; Badano, Aldo
2009-11-01
It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
Hunt, J G; Watchman, C J; Bolch, W E
2007-01-01
Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D microCT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo--VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques.
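The chord-based technique described above can be sketched with inverse-CDF sampling of alternating bone and marrow chords along an alpha track. For simplicity this sketch assumes exponential chord-length distributions and made-up mean chords and range (the study samples the measured cumulative distributions from microCT images, and scores energy rather than path length):

```python
import math, random

def sample_chord(mean_chord, rng):
    """Inverse-CDF sample from an exponential chord-length distribution
    (a common simplifying assumption; the paper samples measured CDFs)."""
    return -mean_chord * math.log(1.0 - rng.random())

def alpha_range_fraction_in_marrow(alpha_range, mean_bone=0.3,
                                   mean_marrow=1.0, n_tracks=20000, seed=0):
    """Alternately sample bone and marrow chords (hypothetical mean chords,
    arbitrary length units) until the alpha-particle range is exhausted;
    return the mean fraction of the range traversed in marrow. Tracks are
    assumed to start at a bone surface."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_tracks):
        remaining, in_marrow, marrow_len = alpha_range, False, 0.0
        while remaining > 0.0:
            chord = sample_chord(mean_marrow if in_marrow else mean_bone, rng)
            step = min(chord, remaining)
            if in_marrow:
                marrow_len += step
            remaining -= step
            in_marrow = not in_marrow   # cross into the next medium
        total += marrow_len / alpha_range
    return total / n_tracks
```

Under a constant stopping power, the fraction of the range deposited in a tissue is proportional to the absorbed fraction for that tissue, which is the quantity compared against the full image-based Monte Carlo transport.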
MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.
Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G
2012-11-05
Next generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that implements primer reuse to design degenerate primers around SNPs, finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage, thereby providing a cost-effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved lower primer costs per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling
2012-01-01
Background: Next generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that implements primer reuse to design degenerate primers around SNPs, finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage, thereby providing a cost-effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results: After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved lower primer costs per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions: MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base. PMID:23126469
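The Metropolis-Hastings search used by MCMC-ODPR can be illustrated with a generic cost-minimization kernel over discrete designs. The "primer selection" instance below (12 candidate primers, 8 sites, unit primer cost, heavy penalty per uncovered site) is a toy stand-in invented for illustration, not the MCMC-ODPR cost model:

```python
import math, random

def mcmc_minimize(cost, neighbor, x0, n_steps=5000, temp=1.0, seed=0):
    """Metropolis-style stochastic search: always accept an improvement,
    accept a worsening move with probability exp(-delta/temp)."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    for _ in range(n_steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp((c - cy) / temp):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
    return best, best_c

# Toy stand-in for primer selection: choose a subset (bitmask) of 12
# candidate "primers", each covering one or two of 8 "sites"; the cost is
# the number of primers used plus a heavy penalty per uncovered site.
covers = [{i % 8, (i * 3) % 8} for i in range(12)]

def cost(mask):
    covered = set().union(*(covers[i] for i in range(12) if mask >> i & 1))
    return bin(mask).count("1") + 10 * (8 - len(covered))

def neighbor(mask, rng):
    return mask ^ (1 << rng.randrange(12))  # add or drop one primer

best, best_c = mcmc_minimize(cost, neighbor, 0)
```

The same accept/reject rule drives the real algorithm; only the state (degenerate primer assignments with reuse) and the cost (primer nucleotides per covered base) differ.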
NASA Astrophysics Data System (ADS)
Sexton, James C.
1990-08-01
The GF11 project at IBM's T. J. Watson Research Center is entering full production for QCD numerical calculations. This paper describes the GF11 hardware and system software, and discusses the first production program which has been developed to run on GF11. This program is a variation of the Cabibbo-Marinari pure gauge Monte Carlo program for SU(3) and is currently sustaining almost 6 gigaflops on 360 processors in GF11.
Modeling of thin-film GaAs growth
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.
1981-01-01
Efforts to produce a Monte Carlo computer program for the analysis of crystal growth are briefly discussed. A literature survey was conducted of articles relating to the subject. A list of references reviewed is presented.
In silico FRET from simulated dye dynamics
NASA Astrophysics Data System (ADS)
Hoefling, Martin; Grubmüller, Helmut
2013-03-01
Single molecule fluorescence resonance energy transfer (smFRET) experiments probe molecular distances on the nanometer scale. In such experiments, distances are recorded from FRET transfer efficiencies via the Förster formula, E = 1/(1 + (R/R₀)⁶). The energy transfer however also depends on the mutual orientation of the two dyes used as distance reporters. Since this information is typically inaccessible in FRET experiments, one has to rely on approximations, which reduce the accuracy of these distance measurements. A common approximation is an isotropic and uncorrelated dye orientation distribution. To assess the impact of such approximations, we present the algorithms and implementation of a computational toolkit for the simulation of smFRET on the basis of molecular dynamics (MD) trajectory ensembles. In this study, the dye orientation dynamics, which are used to determine dynamic FRET efficiencies, are extracted from MD simulations. In a subsequent step, photons and bursts are generated using a Monte Carlo algorithm. The application of the developed toolkit to a poly-proline system demonstrated good agreement between smFRET simulations and experimental results and therefore confirms our computational method. Furthermore, it enabled the identification of the structural basis of measured heterogeneity. The presented computational toolkit is written in Python, available as open-source, applicable to arbitrary systems and can easily be extended and adapted to further problems. Catalogue identifier: AENV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv3, the bundled SIMD friendly Mersenne twister implementation [1] is provided under the SFMT-License. No. of lines in distributed program, including test data, etc.: 317880 No.
of bytes in distributed program, including test data, etc.: 54774217 Distribution format: tar.gz Programming language: Python, Cython, C (ANSI C99). Computer: Any (see memory requirements). Operating system: Any OS with CPython distribution (e.g. Linux, MacOSX, Windows). Has the code been vectorised or parallelized?: Yes, in Ref. [2], 4 CPU cores were used. RAM: About 700MB per process for the simulation setup in Ref. [2]. Classification: 16.1, 16.7, 23. External routines: Calculation of R-κ² trajectories from GROMACS [3] MD trajectories requires the GromPy Python module described in Ref. [4] or a GROMACS 4.6 installation. The md2fret program uses a standard Python interpreter (CPython) v2.6+ and < v3.0 as well as the NumPy module. The analysis examples require the Matplotlib Python module. Nature of problem: Simulation and interpretation of single molecule FRET experiments. Solution method: Combination of force-field based molecular dynamics (MD) simulating the dye dynamics and Monte Carlo sampling to obtain photon statistics of FRET kinetics. Additional comments: !!!!! The distribution file for this program is over 50 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: A single run in Ref. [2] takes about 10 min on a Quad Core Intel Xeon CPU W3520 2.67GHz with 6GB physical RAM References: [1] M. Saito, M. Matsumoto, SIMD-oriented fast Mersenne twister: a 128-bit pseudorandom number generator, in: A. Keller, S. Heinrich, H. Niederreiter (Eds.), Monte Carlo and Quasi-Monte Carlo Methods 2006, Springer; Berlin, Heidelberg, 2008, pp. 607-622. [2] M. Hoefling, N. Lima, D. Hänni, B. Schuler, C. A. M. Seidel, H. Grubmüller, Structural heterogeneity and quantitative FRET efficiency distributions of polyprolines through a hybrid atomistic simulation and Monte Carlo approach, PLoS ONE 6 (5) (2011) e19791. [3] D. V. D. Spoel, E. Lindahl, B.
Hess, G. Groenhof, A. E. Mark, H. J. C. Berendsen, GROMACS: fast, flexible, and free, J. Comput. Chem. 26 (16) (2005) 1701-1718. [4] R. Pool, A. Feenstra, M. Hoefling, R. Schulz, J. C. Smith, J. Heringa, Enabling grand-canonical Monte Carlo: Extending the flexibility of GROMACS through the GromPy Python interface module, J. Comput. Chem. 33 (12) (2012) 1207-1214.
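The two building blocks named in the abstract above, the Förster formula and Monte Carlo photon generation, can be sketched as follows. The burst model here is heavily simplified (each detected photon is an acceptor photon with probability E, ignoring background, crosstalk and the dye-orientation dynamics the toolkit actually simulates), and R₀ = 5.4 nm is an assumed typical value for a common dye pair:

```python
import math, random

def fret_efficiency(r, r0):
    """Förster formula: E = 1 / (1 + (r/R0)^6), r and R0 in nm."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def simulate_burst(r, r0=5.4, n_photons=100, seed=0):
    """Toy Monte Carlo photon generation: each detected photon in a burst
    is an acceptor photon with probability E, else a donor photon.
    Returns the proximity ratio n_acceptor / n_total of the burst."""
    rng = random.Random(seed)
    e = fret_efficiency(r, r0)
    acceptor = sum(1 for _ in range(n_photons) if rng.random() < e)
    return acceptor / n_photons

proximity = simulate_burst(5.4)   # at r = R0 the expected ratio is 0.5
```

Shot noise on such finite-photon bursts is one source of the width of measured FRET efficiency histograms, on top of the orientation and distance dynamics discussed in the abstract.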
ERIC Educational Resources Information Center
Heifetz, Louis J.
1998-01-01
Comments on "The Small ICF/MR Program: Dimensions of Quality and Cost" (Conroy), that found small Intermediate Care Facilities (ICF) for individuals with mental retardation are inferior to other community programs. Acknowledges that while some research problems exist, no important evidence against the findings has been provided. (CR)
Mastering the game of Go with deep neural networks and tree search
NASA Astrophysics Data System (ADS)
Silver, David; Huang, Aja; Maddison, Chris J.; Guez, Arthur; Sifre, Laurent; van den Driessche, George; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda; Lanctot, Marc; Dieleman, Sander; Grewe, Dominik; Nham, John; Kalchbrenner, Nal; Sutskever, Ilya; Lillicrap, Timothy; Leach, Madeleine; Kavukcuoglu, Koray; Graepel, Thore; Hassabis, Demis
2016-01-01
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
Mastering the game of Go with deep neural networks and tree search.
Silver, David; Huang, Aja; Maddison, Chris J; Guez, Arthur; Sifre, Laurent; van den Driessche, George; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda; Lanctot, Marc; Dieleman, Sander; Grewe, Dominik; Nham, John; Kalchbrenner, Nal; Sutskever, Ilya; Lillicrap, Timothy; Leach, Madeleine; Kavukcuoglu, Koray; Graepel, Thore; Hassabis, Demis
2016-01-28
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
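The "state-of-the-art Monte Carlo tree search" baseline that the abstract contrasts with AlphaGo's network-guided search is plain UCT with uniform random rollouts. A minimal sketch on a toy game (one-pile Nim: take 1-3 stones, taking the last stone wins) shows the four phases of selection, expansion, rollout and backpropagation; Go-specific details and the value/policy networks are of course absent:

```python
import math, random

class Node:
    def __init__(self, stones, player):
        self.stones, self.player = stones, player   # player to move here
        self.children, self.visits, self.wins = {}, 0, 0.0

def rollout(stones, player, rng):
    # Uniform random playout; the player who takes the last stone wins.
    while stones > 0:
        stones -= rng.randint(1, min(3, stones))
        player = 1 - player
    return 1 - player            # the player who just moved took the last stone

def mcts(root_stones, n_iter=4000, c=1.4, seed=0):
    """Plain UCT Monte Carlo tree search; returns the most-visited root move."""
    rng = random.Random(seed)
    root = Node(root_stones, 0)
    for _ in range(n_iter):
        node, path = root, [root]
        # Selection: descend through fully expanded nodes by the UCB1 rule.
        while node.stones > 0 and len(node.children) == min(3, node.stones):
            node = max(node.children.values(),
                       key=lambda ch: ch.wins / ch.visits +
                       c * math.sqrt(math.log(node.visits) / ch.visits))
            path.append(node)
        # Expansion: add one untried move, if the node is non-terminal.
        if node.stones > 0:
            moves = [m for m in (1, 2, 3)
                     if m <= node.stones and m not in node.children]
            m = rng.choice(moves)
            child = Node(node.stones - m, 1 - node.player)
            node.children[m] = child
            path.append(child)
            node = child
        winner = rollout(node.stones, node.player, rng)
        # Backpropagation: a node scores a win when the player who moved
        # into it (i.e. not the player to move at the node) won the game.
        for n in path:
            n.visits += 1
            if winner != n.player:
                n.wins += 1
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

AlphaGo keeps this search skeleton but replaces the uniform rollout/expansion statistics with policy-network priors and value-network evaluations.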
Parallel and Portable Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
NASA Technical Reports Server (NTRS)
Horton, B. E.; Bowhill, S. A.
1971-01-01
This report describes a Monte Carlo simulation of transition flow around a sphere. Conditions for the simulation correspond to neutral monatomic molecules at two altitudes (70 and 75 km) in the D region of the ionosphere. Results are presented in the form of density contours, velocity vector plots and density, velocity and temperature profiles for the two altitudes. Contours and density profiles are related to independent Monte Carlo and experimental studies, and drag coefficients are calculated and compared with available experimental data. The small computer used is a PDP-15 with 16 K of core, and a typical run for 75 km requires five iterations, each taking five hours. The results are recorded on DECTAPE to be printed when required, and the program provides error estimates for any flow field parameter.
Monte Carlo Shower Counter Studies
NASA Technical Reports Server (NTRS)
Snyder, H. David
1991-01-01
Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi-square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs is provided along with example data plots.
NASA Astrophysics Data System (ADS)
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
Monte Carlo calculations of the impact of a hip prosthesis on the dose distribution
NASA Astrophysics Data System (ADS)
Buffard, Edwige; Gschwind, Régine; Makovicka, Libor; David, Céline
2006-09-01
Because of the ageing of the population, an increasing number of patients with hip prostheses are undergoing pelvic irradiation. Treatment planning systems (TPS) currently available are not always able to accurately predict the dose distribution around such implants. In fact, only Monte Carlo simulation has the ability to precisely calculate the impact of a hip prosthesis during radiotherapeutic treatment. Monte Carlo phantoms were developed to evaluate the dose perturbations during pelvic irradiation. A first model, constructed with the DOSXYZnrc usercode, was elaborated to determine the dose increase at the tissue-metal interface as well as the impact of the material coating the prosthesis. Next, CT-based phantoms were prepared, using the usercode CTCreate, to estimate the influence of the geometry and the composition of such implants on the beam attenuation. Thanks to a program that we developed, the study was carried out with CT-based phantoms containing a hip prosthesis without metal artefacts. Therefore, anthropomorphic phantoms allowed better definition of both patient anatomy and the hip prosthesis in order to better reproduce the clinical conditions of pelvic irradiation. The Monte Carlo results revealed the impact of certain coatings such as PMMA on dose enhancement at the tissue-metal interface. Monte Carlo calculations in CT-based phantoms highlighted the marked influence of the implant's composition, its geometry as well as its position within the beam on dose distribution.
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program, and the results of a Monte Carlo simulation study evaluating the usefulness of the Weibull methods on samples with very few failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, along with the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
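The random-censoring setup described above (censoring times drawn from a uniform distribution) and a maximum-likelihood Weibull fit can be sketched as follows. The profile-likelihood reduction (for a fixed shape k, the scale MLE is λᵏ = Σtᵏ/r with r the number of observed failures) is a standard trick chosen here for brevity, and all numeric parameters are made up:

```python
import math, random

def weibull_profile_loglik(k, times, failed):
    """Censored-data Weibull log-likelihood, profiled over the scale:
    for fixed shape k the scale MLE satisfies lambda^k = sum(t^k) / r."""
    r = sum(failed)                       # number of observed failures
    lam_k = sum(t ** k for t in times) / r
    ll = 0.0
    for t, f in zip(times, failed):
        z = t ** k / lam_k
        if f:                             # observed failure
            ll += math.log(k) + (k - 1) * math.log(t) - math.log(lam_k) - z
        else:                             # right-censored observation
            ll -= z
    return ll

def fit_shape(times, failed, lo=0.1, hi=10.0, iters=60):
    """Golden-section search for the shape k maximizing the profile likelihood."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if (weibull_profile_loglik(c, times, failed) >
                weibull_profile_loglik(d, times, failed)):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# Random-censoring simulation: Weibull failure times, uniform censor times.
rng = random.Random(0)
true_k, true_lam = 2.0, 1.0
times, failed = [], []
for _ in range(500):
    t = true_lam * (-math.log(1.0 - rng.random())) ** (1.0 / true_k)
    cens = rng.uniform(0.0, 2.0)
    times.append(min(t, cens))
    failed.append(t <= cens)
k_hat = fit_shape(times, failed)
```

Repeating such a fit over many synthetic samples, with far fewer failures than the 500 used here, is exactly the kind of simulation study the report tabulates.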
PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation
NASA Astrophysics Data System (ADS)
España, S; Herraiz, J L; Vicente, E; Vaquero, J J; Desco, M; Udias, J M
2009-03-01
Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.
Generating moment matching scenarios using optimization techniques
Mehrotra, Sanjay; Papp, Dávid
2013-05-16
An optimization based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
Statistical Evaluation of Time Series Analysis Techniques
NASA Technical Reports Server (NTRS)
Benignus, V. A.
1973-01-01
The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.
Probabilistic analysis algorithm for UA slope software program.
DOT National Transportation Integrated Search
2013-12-01
A reliability-based computational algorithm for using a single row of equally spaced drilled shafts to stabilize an unstable slope has been developed in this research. The Monte-Carlo simulation (MCS) technique was used in the previously develop...
Risk/benefit assessment of delayed action concept for rail inspection
DOT National Transportation Integrated Search
1999-02-01
A Monte Carlo simulation of certain aspects of rail inspection is presented. The simulation is used to investigate alternative practices in railroad rail inspection programs. Results are presented to compare the present practice of immediately repair...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel
2011-03-15
Purpose: To develop a computed tomography (CT) organ dose estimation method designed to readily provide organ doses in a reference adult male and female for different scan ranges, and to investigate the degree to which existing commercial programs can reasonably match organ doses defined in these more anatomically realistic adult hybrid phantoms. Methods: The x-ray fan beam in the SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX2.6. The simulated CT scanner model was validated through comparison with experimentally measured lateral free-in-air dose profiles and computed tomography dose index (CTDI) values. The reference adult male and female hybrid phantoms were coupled with the established CT scanner model following arm removal to simulate clinical head and other body region scans. A set of organ dose matrices were calculated for a series of consecutive axial scans ranging from the top of the head to the bottom of the phantoms with a beam thickness of 10 mm and tube potentials of 80, 100, and 120 kVp. The organ doses for head, chest, and abdomen/pelvis examinations were calculated based on the organ dose matrices and compared to those obtained from two commercial programs, CT-EXPO and CTDOSIMETRY. Organ dose calculations were repeated for an adult stylized phantom by using the same simulation method used for the adult hybrid phantom. Results: Comparisons of both lateral free-in-air dose profiles and CTDI values through experimental measurement with the Monte Carlo simulations showed good agreement to within 9%. Organ doses for head, chest, and abdomen/pelvis scans reported in the commercial programs exceeded those from the Monte Carlo calculations in both the hybrid and stylized phantoms in this study, sometimes by orders of magnitude.
Conclusions: The organ dose estimation method and dose matrices established in this study readily provide organ doses for a reference adult male and female for different CT scan ranges and technical parameters. Organ doses from existing commercial programs do not reasonably match organ doses calculated for the hybrid phantoms due to differences in phantom anatomy, as well as differences in organ dose scaling parameters. The organ dose matrices developed in this study will be extended to cover different technical parameters, CT scanner models, and various age groups.
Stochastic Analysis of Orbital Lifetimes of Spacecraft
NASA Technical Reports Server (NTRS)
Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David
2008-01-01
A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC)--a Fortran computer program, consisting of a previously developed long-term orbit-propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for the number of user-specified cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.
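The probabilistic-lifetime methodology above can be sketched in a few lines: sample the dispersed parameters from assumed distributions, propagate each case through an orbit-decay model, and histogram the lifetimes. The decay relation and all constants below are invented placeholders, not the OLMC propagator:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of Monte Carlo cases

# Toy stand-ins for the dispersed parameters (all values illustrative):
h0 = rng.normal(400.0, 10.0, N)                # insertion altitude [km]
flux = rng.lognormal(np.log(150.0), 0.3, N)    # solar-flux (F10.7-like) proxy

# Crude decay-rate model: higher flux -> denser thermosphere -> faster decay;
# higher altitude -> slower decay. NOT the OLMC propagator, just a placeholder.
decay_km_per_yr = 20.0 * (flux / 150.0) * np.exp(-(h0 - 400.0) / 60.0)
h_min = 300.0  # minimum useful altitude [km]
lifetime_yr = (h0 - h_min) / decay_km_per_yr

# Probability that the spacecraft meets a 5-year lifetime requirement,
# estimated from the histogram of predicted lifetimes.
p_meet = np.mean(lifetime_yr >= 5.0)
```

As in the abstract, the spread of `lifetime_yr` is dominated by the solar-flux dispersion, which is why multiple flux-prediction sets are needed to characterize the full lifetime range.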
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, Andreu; Badano, Aldo
Purpose: Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speedup was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
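The data-parallel pattern behind such GPU codes (one thread per independent photon history) can be imitated with vectorized array operations. Below is a toy slab-transmission estimate with an invented attenuation coefficient, not the PENELOPE physics:

```python
import numpy as np

rng = np.random.default_rng(1)
n_photons = 1_000_000
mu = 0.2   # total attenuation coefficient [1/cm] (illustrative)
t = 5.0    # slab thickness [cm]

# Every photon's free path is sampled in one vectorized operation -- the same
# "one thread per photon history" pattern a CUDA kernel exploits on a GPU.
free_path = rng.exponential(1.0 / mu, n_photons)

# Fraction of photons crossing the slab without interacting,
# a Monte Carlo estimate of exp(-mu * t).
transmitted = np.mean(free_path > t)
```

Because each history is independent, the loop-free formulation maps directly onto thousands of GPU threads, which is the source of the reported speedup.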
Organization and use of a Software/Hardware Avionics Research Program (SHARP)
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.; Kareemi, M. N.
1975-01-01
The organization and use of the software/hardware avionics research program (SHARP), developed to duplicate the automatic portion of the STOLAND simulator system on a general-purpose computer system (i.e., IBM 360), are described. The program's uses are: (1) to conduct comparative evaluation studies of current and proposed airborne and ground system concepts via single-run or Monte Carlo simulation techniques, and (2) to provide a software tool for efficient algorithm evaluation and development for the STOLAND avionics computer.
2014-10-01
the angles and dihedrals that are truly unique will be indicated by the user by editing NewAngleTypesDump and NewDihedralTypesDump. The program ...Atomistic Molecular Simulations ... Robert M Elder, Timothy W Sirk, and ...Antechamber program in Assisted Model Building with Energy Refinement (AMBER) Tools to assign partial charges (using the Austin Model 1 [AM1]-bond charge
Error analysis of Dobson spectrophotometer measurements of the total ozone content
NASA Technical Reports Server (NTRS)
Holland, A. C.; Thomas, R. W. L.
1975-01-01
A study of techniques for measuring atmospheric ozone is reported. This study represents the second phase of a program designed to improve techniques for the measurement of atmospheric ozone. This phase of the program studied the sensitivity of Dobson direct-sun measurements, and of the ozone amounts inferred from those measurements, to variations in the atmospheric temperature profile. The study used the plane-parallel Monte-Carlo model developed and tested under the initial phase of this program, and a series of standard model atmospheres.
NASA Technical Reports Server (NTRS)
Wiscombe, W.
1999-01-01
The purpose of this paper is to discuss the concept of fractal dimension; multifractal statistics as an extension of it; the use of simple multifractal statistics (power spectrum, structure function) to characterize cloud liquid water data; and the use of multifractal cloud liquid water models based on real data as input to Monte Carlo models of shortwave radiative transfer in 3D clouds, with consequences in two areas: the design of aircraft field programs to measure cloud absorptance, and the explanation of the famous "Landsat scale break" in measured radiance.
Monte Carlo simulations of neutron-scattering instruments using McStas
NASA Astrophysics Data System (ADS)
Nielsen, K.; Lefmann, K.
2000-06-01
Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in instrument design has outgrown purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in such areas as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J
Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used in dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created by the DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of computing time for the treatment plan on the number of compute nodes was optimized with variations of the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: It is found that the dependence of computing time on the number of compute nodes was affected by the diminishing return of the number of nodes used in Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and the dose reconstruction time on the dependence of computing time on the number of nodes were not significant when more than 15 compute nodes were used in Monte Carlo simulations. Conclusion: The issue of long computing time in 4D treatment planning, requiring Monte Carlo dose calculations in all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimized number of compute nodes selected in simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant in that range.
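The diminishing return described above is the familiar fixed-overhead (Amdahl-type) scaling: once the serial dose-reconstruction step dominates, adding nodes buys little. A toy timing model with invented numbers illustrates it:

```python
# Toy timing model: perfectly parallel MC dose calculation plus a serial
# dose-reconstruction step (FFD4D-like). Values are illustrative, not measured.
def plan_time(nodes, mc_work=600.0, serial=60.0):
    """Total 4D-plan wall time (minutes) for a given number of compute nodes."""
    return mc_work / nodes + serial

# Doubling nodes early in the curve helps far more than doubling late:
speedup_5_to_10 = plan_time(5) / plan_time(10)     # large gain
speedup_15_to_30 = plan_time(15) / plan_time(30)   # much smaller gain
```

With these made-up constants, going from 5 to 10 nodes cuts the time from 180 to 120 minutes, while going from 15 to 30 nodes only moves it from 100 to 80, mirroring the study's 5-15 node sweet spot.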
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marous, L; Muryn, J; Liptak, C
2016-06-15
Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI{sub 100}. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI{sub 100}. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI{sub 100}. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI{sub 100} values from central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI{sub 100} were analyzed. Results: At 38.4-mm collimation, errors in effective beam width up to 5.0 mm showed negligible effects on simulated CTDI{sub 100} (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm{sup 3} resulted in small CTDI{sub 100} errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI{sub 100} is insensitive to errors in effective beam width and acrylic density. However, it is sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored.
This work was supported by a Faculty Research and Development Award from Cleveland State University.
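The one-at-a-time sensitivity scan described above can be sketched against a stand-in dose function. The functional form and coefficients below are invented, chosen only so that the qualitative ordering (HVL strong, density moderate, beam width weak) matches the reported findings; this is not the validated Monte Carlo program:

```python
import math

def toy_ctdi(hvl_mm_al=7.0, beam_width_mm=38.4, acrylic_density=1.19):
    """Illustrative surrogate for simulated CTDI100; the form is made up."""
    return (10.0
            * math.exp(-0.3 * (hvl_mm_al - 7.0))        # strong HVL dependence
            * math.exp(-2.0 * (acrylic_density - 1.19))  # moderate density term
            * (1.0 + 0.001 * (beam_width_mm - 38.4)))    # weak width term

baseline = toy_ctdi()

def pct_error(**perturbed):
    """Percent change in the surrogate dose from one perturbed parameter."""
    return 100.0 * abs(toy_ctdi(**perturbed) - baseline) / baseline

effect_hvl = pct_error(hvl_mm_al=7.2)          # +0.2 mm Al deviation
effect_width = pct_error(beam_width_mm=43.4)   # +5 mm beam-width deviation
effect_rho = pct_error(acrylic_density=1.20)   # +0.01 g/cm^3 deviation
```

Running each parameter's plausible error through the model one at a time is exactly the scan design of the study; only the surrogate function differs.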
Risk Assessment Techniques. A Handbook for Program Management Personnel
1983-07-01
tion; not directly usable without further development. 37. Lieber, R.S., "New Approaches for Quantifying Risk and Determining Sharing Arrangements...must be provided. Prediction intervals around cost estimating relationships (CERs) or Monte Carlo simulations will be used as proper in quantifying ... risk." [emphasis supplied] Para 9.d. "The ISR will address the potential risk in the program office estimate by identifying ’risk’ areas and their
NASA Astrophysics Data System (ADS)
Dance, David R.; McVey, Graham; Sandborg, Michael P.; Persliden, Jan; Carlsson, Gudrun A.
1999-05-01
A Monte Carlo program has been developed to model X-ray imaging systems. It incorporates an adult voxel phantom and includes anti-scatter grid, radiographic screen and film. The program can calculate contrast and noise for a series of anatomical details. The use of measured H and D curves allows the absolute calculation of the patient entrance air kerma for a given film optical density (or vice versa). Effective dose can also be estimated. In an initial validation, the program was used to predict the optical density for exposures with plastic slabs of various thicknesses. The agreement between measurement and calculation was on average within 5%. In a second validation, a comparison was made between computer simulations and measurements for chest and lumbar spine patient radiographs. The predictions of entrance air kerma mostly fell within the range of measured values (e.g. chest PA calculated 0.15 mGy, measured 0.12-0.17 mGy). Good agreement was also obtained for the calculated and measured contrasts for selected anatomical details and acceptable agreement for dynamic range. It is concluded that the program provides a realistic model of the patient and imaging system. It can thus form the basis of a detailed study and optimization of X-ray imaging systems.
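The H and D (characteristic) curve lookup that links entrance air kerma to film optical density, and its inverse, can be sketched as a simple interpolation. The curve values and speed parameter below are invented for illustration:

```python
import numpy as np

# Illustrative H and D (characteristic) curve: log10(exposure) -> optical density.
log_exposure = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
density = np.array([0.2, 0.6, 1.4, 2.4, 3.0])

def optical_density(air_kerma_mGy, speed=1.0):
    """Interpolate the measured H and D curve to get optical density
    from entrance air kerma (toy numbers, hypothetical film speed)."""
    return np.interp(np.log10(air_kerma_mGy * speed), log_exposure, density)

# The "vice versa" direction: numerically invert the curve to find the
# air kerma that produces a target optical density of 1.4.
grid = np.logspace(-1, 1, 1000)
kerma_for_target = grid[np.argmin(np.abs(optical_density(grid) - 1.4))]
```

Evaluating the curve forward gives density from kerma; scanning a kerma grid and picking the closest density inverts it, which is how a fixed target density pins down the required entrance air kerma.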
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
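The core of the likelihood machinery above, Poisson statistics on small binned counts of detected pulsars explored with a random-walk Markov chain, can be sketched as follows. The observed counts, the one-parameter population model, and the proposal width are all toy stand-ins for the actual population-synthesis code:

```python
import numpy as np

rng = np.random.default_rng(7)

# Observed counts of "detected pulsars" in period bins (illustrative data).
observed = np.array([3, 8, 12, 7, 2])

def expected_counts(scale):
    """Toy population model: predicted counts per bin for one scale parameter."""
    base = np.array([1.0, 3.0, 4.0, 2.5, 0.8])
    return scale * base

def log_like(scale):
    """Poisson log-likelihood (dropping the constant log(k!) term),
    appropriate for the small detected samples mentioned above."""
    mu = expected_counts(scale)
    return np.sum(observed * np.log(mu) - mu)

# Random-walk Metropolis exploration of the single model parameter.
scale, samples = 1.0, []
for _ in range(5000):
    prop = scale + rng.normal(0.0, 0.2)
    if prop > 0 and np.log(rng.uniform()) < log_like(prop) - log_like(scale):
        scale = prop
    samples.append(scale)

posterior_mean = np.mean(samples[1000:])
```

The real analysis does the same thing over several parameters (initial period, magnetic field, radio luminosity) and compares a grid search with the Markov chain exploration.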
NASA Astrophysics Data System (ADS)
Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice
2016-04-01
We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries like the slot gap, outer gap, two pole caustic and pair starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov Chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics. We estimate the contribution of MSPs to the diffuse gamma-ray background with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).
Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali
2014-01-01
Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
NASA Astrophysics Data System (ADS)
Mortuza, Md Firoz; Lepore, Luigi; Khedkar, Kalpana; Thangam, Saravanan; Nahar, Arifatun; Jamil, Hossen Mohammad; Bandi, Laxminarayan; Alam, Md Khorshed
2018-03-01
Characterization of a 90 kCi (3330 TBq), semi-industrial, cobalt-60 gamma irradiator was performed by commissioning dosimetry and in-situ dose mapping experiments with Ceric-cerous and Fricke dosimetry systems. Commissioning dosimetry was carried out to determine the distribution pattern of absorbed dose in the irradiation cell and products. To determine the maximum and minimum absorbed dose, overdose ratio and dwell time of the tote boxes, a homogeneous dummy product (rice husk) with a bulk density of 0.13 g/cm3 was used in the box positions of the irradiation chamber. The regions of minimum absorbed dose of the tote boxes were observed in the lower zones of the middle plane, and maximum absorbed doses were found in the middle position of the front plane. Moreover, as part of the dose mapping, dose rates at the wall positions and some selected strategic positions were also measured so that multiple irradiation programs, especially low-dose research irradiation programs, could be carried out simultaneously. In most cases, Monte Carlo simulation data, using the Monte Carlo N-Particle eXtended code version MCNPX 2.7, were found to be in agreement with experimental values obtained from Ceric-cerous and Fricke dosimetry; however, at positions in close proximity to the source, the dose rate variation between chemical dosimetry and MCNP was higher than at distant positions.
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org.
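Sampling a fine-grained discretization of the Brownian trajectory along one branch, conditioned on the (log-rate) values at the branch's two endpoints, is a standard Brownian bridge. A minimal sketch, with illustrative endpoint values and discretization level rather than anything from the phylobayes implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def brownian_bridge(x_start, x_end, n_points, sigma, t_branch):
    """Sample a discretized Brownian path along one branch, pinned to the
    values at the branch's two endpoints (a standard Brownian bridge)."""
    times = np.linspace(0.0, t_branch, n_points)
    # Unconditioned Brownian motion started at x_start...
    steps = rng.normal(0.0, sigma * np.sqrt(np.diff(times, prepend=0.0)))
    w = x_start + np.cumsum(steps)
    # ...then pinned to x_end by the standard bridge correction.
    return w + (times / t_branch) * (x_end - w[-1])

path = brownian_bridge(x_start=0.0, x_end=0.5, n_points=200,
                       sigma=1.0, t_branch=1.0)

# Branch-average substitution rate implied by the fine-grained path --
# the quantity that the cruder midpoint approximation replaces.
avg_rate = np.mean(np.exp(path))
```

Averaging `exp(path)` over the discretization points is what distinguishes the fine-grained approach from summarizing the whole branch by a single midpoint rate.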
Infrared Spectroscopy of Star Formation in Galactic and Extragalactic Regions
NASA Technical Reports Server (NTRS)
Smith, Howard A.; Hasan, Hashima (Technical Monitor)
2004-01-01
Last year we submitted and had accepted a paper entitled "The Far-Infrared Emission Line and Continuum Spectrum of the Seyfert Galaxy NGC 1068," by Spinoglio, L., Malkan, M., Smith, H. A., Gonzalez-Alfonso, E., and Fischer, J. This analysis was based on the SWAS Monte Carlo code modeling of the OH lines in galaxies observed by ISO. Since that meeting last spring, considerable effort has been put into improving the Monte Carlo code. A group of European astronomers, including Prof. Eduardo Gonzalez-Alfonso, had been performing Monte Carlo modeling of other molecules seen in ISO galaxies. We used portions of this grant to bring Prof. Gonzalez-Alfonso to Cambridge for an intensive working visit. A second major paper on the ISO IR spectroscopy of galaxies, "The Far Infrared Spectrum of Arp 220," by Gonzalez-Alfonso, E., Smith, H., Fischer, J., and Cernicharo, J., is in press. Spitzer science development was the major component of this past year's research. This program supported the development of five Early Release Objects for Spitzer observations on which Dr. Smith was Principal Investigator or Co-Investigator, and another five proposals for GO time. The early release program is designed to rapidly present to the public and the scientific community some exciting results from Spitzer in the first months of its operation. The Spitzer instrument and science teams submitted proposals for ERO objects, and a competitive selection process narrowed these down to a small group with exciting science and realistic observational parameters. This grant supported Dr. Smith's participation in the ERO process, including developing science goals, identifying key objects for observation, and developing the detailed AORs (observing formulae) to be used by the instruments for mapping, integrating, etc. During this year Dr. Smith worked on writing up and publishing these early results. The attached bibliography includes six of Dr. Smith's articles. During this past year Dr. Smith also led or helped to develop proposals for ten Spitzer GO Programs, and three others. Appendix B lists the programs involved.
SMMP v. 3.0—Simulating proteins and protein interactions in Python and Fortran
NASA Astrophysics Data System (ADS)
Meinke, Jan H.; Mohanty, Sandipan; Eisenmenger, Frank; Hansmann, Ulrich H. E.
2008-03-01
We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more. Program summaryTitle of program:SMMP Catalogue identifier:ADOJ_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions:Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language used:FORTRAN, Python No. of lines in distributed program, including test data, etc.:52 105 No. of bytes in distributed program, including test data, etc.:599 150 Distribution format:tar.gz Computer:Platform independent Operating system:OS independent RAM:2 Mbytes Classification:3 Does the new version supersede the previous version?:Yes Nature of problem:Molecular mechanics computations and Monte Carlo simulation of proteins. Solution method:Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical, as well as for generalized ensembles. Reasons for new version:API changes and increased functionality. Summary of revisions:Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings. Restrictions:The consumed CPU time increases with the size of protein molecule. Running time:Depends on the size of the simulated molecule.
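The canonical-ensemble Monte Carlo that SMMP performs over protein dihedral angles can be illustrated on a toy one-dihedral potential. The potential and temperature below are invented stand-ins, not the ECEPP/FLEX/Lund force fields:

```python
import math
import random

random.seed(11)

def energy(phi):
    """Toy torsional potential (kcal/mol) standing in for a force-field term."""
    return 1.0 * (1.0 + math.cos(3.0 * phi)) + 0.5 * (1.0 + math.cos(phi))

# Canonical-ensemble Metropolis sweep over a single dihedral angle.
beta = 1.0 / 0.6   # kT ~ 0.6 kcal/mol, roughly room temperature
phi, acc, samples = 0.0, 0, []
for _ in range(20000):
    trial = phi + random.uniform(-0.3, 0.3)
    # Accept with probability min(1, exp(-beta * dE)).
    if random.random() < math.exp(-beta * (energy(trial) - energy(phi))):
        phi, acc = trial, acc + 1
    samples.append(energy(phi))

mean_energy = sum(samples[5000:]) / len(samples[5000:])
```

SMMP applies the same accept/reject rule to the full standard-geometry protein energy, and its generalized-ensemble algorithms replace the fixed `beta` with a weight function over energies.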
A Hardware-Accelerated Quantum Monte Carlo framework (HAQMC) for N-body systems
NASA Astrophysics Data System (ADS)
Gothandaraman, Akila; Peterson, Gregory D.; Warren, G. Lee; Hinde, Robert J.; Harrison, Robert J.
2009-12-01
Interest in the study of structural and energetic properties of highly quantum clusters, such as inert gas clusters has motivated the development of a hardware-accelerated framework for Quantum Monte Carlo simulations. In the Quantum Monte Carlo method, the properties of a system of atoms, such as the ground-state energies, are averaged over a number of iterations. Our framework is aimed at accelerating the computations in each iteration of the QMC application by offloading the calculation of properties, namely energy and trial wave function, onto reconfigurable hardware. This gives a user the capability to run simulations for a large number of iterations, thereby reducing the statistical uncertainty in the properties, and for larger clusters. This framework is designed to run on the Cray XD1 high performance reconfigurable computing platform, which exploits the coarse-grained parallelism of the processor along with the fine-grained parallelism of the reconfigurable computing devices available in the form of field-programmable gate arrays. In this paper, we illustrate the functioning of the framework, which can be used to calculate the energies for a model cluster of helium atoms. In addition, we present the capabilities of the framework that allow the user to vary the chemical identities of the simulated atoms. Program summaryProgram title: Hardware Accelerated Quantum Monte Carlo (HAQMC) Catalogue identifier: AEEP_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 691 537 No. 
of bytes in distributed program, including test data, etc.: 5 031 226 Distribution format: tar.gz Programming language: C/C++ for the QMC application, VHDL and Xilinx 8.1 ISE/EDK tools for FPGA design and development Computer: Cray XD1 consisting of a dual-core, dual-processor AMD Opteron 2.2 GHz with a Xilinx Virtex-4 (V4LX160) or Xilinx Virtex-II Pro (XC2VP50) FPGA per node. We use the compute node with the Xilinx Virtex-4 FPGA Operating system: Red Hat Enterprise Linux OS Has the code been vectorised or parallelized?: Yes Classification: 6.1 Nature of problem: Quantum Monte Carlo is a practical method to solve the Schrödinger equation for large many-body systems and obtain the ground-state properties of such systems. This method involves the sampling of a number of configurations of atoms and averaging the properties of the configurations over a number of iterations. We are interested in applying the QMC method to obtain the energy and other properties of highly quantum clusters, such as inert gas clusters. Solution method: The proposed framework provides a combined hardware-software approach, in which the QMC simulation is performed on the host processor, with the computationally intensive functions such as energy and trial wave function computations mapped onto the field-programmable gate array (FPGA) logic device attached as a co-processor to the host processor. We perform the QMC simulation for a number of iterations as in the case of our original software QMC approach, to reduce the statistical uncertainty of the results. However, our proposed HAQMC framework accelerates each iteration of the simulation, by significantly reducing the time taken to calculate the ground-state properties of the configurations of atoms, thereby accelerating the overall QMC simulation. We provide a generic interpolation framework that can be extended to study a variety of pure and doped atomic clusters, irrespective of the chemical identities of the atoms. 
For the FPGA implementation of the properties, we use a two-region approach for accurately computing the properties over the entire domain, and employ deep pipelines and fixed-point arithmetic for all calculations, guaranteeing the accuracy required for our simulation.
Using Functional Languages and Declarative Programming to analyze ROOT data: LINQtoROOT
NASA Astrophysics Data System (ADS)
Watts, Gordon
2015-05-01
Modern high energy physics analysis is complex. It typically requires multiple passes over different datasets, and is often held together with a series of scripts and programs. For example, one has to first reweight the jet energy spectrum in Monte Carlo to match data before plots of any other jet-related variable can be made. This requires a pass over the Monte Carlo and the data to derive the reweighting, and then another pass over the Monte Carlo to plot the variables the analyser is really interested in. With most modern ROOT-based tools this requires separate analysis loops for each pass, and script files to glue the results of the two analysis loops together. A framework has been developed that uses the functional and declarative features of the C# language and its Language Integrated Query (LINQ) extensions to declare the analysis. The framework uses language tools to convert the analysis into C++ and runs ROOT or PROOF as a backend to get the results. This gives the analyser the full power of an object-oriented programming language to put together the analysis and at the same time the speed of C++ for the analysis loop. The tool allows one to incorporate C++ algorithms written for ROOT by others. A by-product of the design is the ability to cache results between runs, dramatically reducing the cost of adding one-more-plot, and also to keep a complete record associated with each plot for data preservation reasons. The code is mature enough to have been used in ATLAS analyses. The package is open source and available on the open source site CodePlex.
NOTE: Monte Carlo simulation of correction factors for IAEA TLD holders
NASA Astrophysics Data System (ADS)
Hultqvist, Martha; Fernández-Varea, José M.; Izewska, Joanna
2010-03-01
The IAEA standard thermoluminescent dosimeter (TLD) holder has been developed for the IAEA/WHO TLD postal dose program for audits of high-energy photon beams, and it is also employed by the ESTRO-QUALity assurance network (EQUAL) and several national TLD audit networks. Factors correcting for the influence of the holder on the TL signal under reference conditions have been calculated in the present work from Monte Carlo simulations with the PENELOPE code for 60Co γ-rays and 4, 6, 10, 15, 18 and 25 MV photon beams. The simulation results are around 0.2% smaller than measured factors reported in the literature, but well within the combined standard uncertainties. The present study supports the use of the experimentally obtained holder correction factors in the determination of the absorbed dose to water from the TL readings; the factors calculated by means of Monte Carlo simulations may be adopted for the cases where there are no measured data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFarge, R.A.
1990-05-01
MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
Simulation of the Interactions Between Gamma-Rays and Detectors Using BSIMUL
NASA Technical Reports Server (NTRS)
Haywood, S. E.; Rester, A. C., Jr.
1996-01-01
Progress made during 1995 on the Monte-Carlo gamma-ray spectrum simulation program BSIMUL is discussed. Several features have been added, including the ability to model shields that are tapered cylinders. Several simulations were made of the Near Earth Asteroid Rendezvous detector.
A Piagetian Learning Cycle for Introductory Chemical Kinetics.
ERIC Educational Resources Information Center
Batt, Russell H.
1980-01-01
Described is a Piagetian learning cycle based on Monte Carlo modeling of several simple reaction mechanisms. Included are descriptions of learning cycle phases (exploration, invention, and discovery) and four BASIC-PLUS computer programs to be used in the explanation of chemically reacting systems. (Author/DS)
Stan : A Probabilistic Programming Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
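The Markov chain Monte Carlo machinery that Stan automates can be illustrated with a minimal random-walk Metropolis sampler in plain Python. This is a didactic sketch, not Stan's NUTS/HMC implementation; the standard-normal target, step size, and sample count are illustrative assumptions:

```python
import math
import random

def log_density(theta):
    # Log of an unnormalized standard-normal target (illustrative choice).
    return -0.5 * theta * theta

def metropolis(n_samples, step=1.0, seed=1):
    # Random-walk Metropolis: propose theta' = theta + N(0, step^2),
    # accept with probability min(1, p(theta') / p(theta)).
    rng = random.Random(seed)
    theta = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)
        log_ratio = log_density(proposal) - log_density(theta)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis(50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# For the standard-normal target, mean should approach 0 and var should approach 1.
```

Only the log density is needed, which is why a platform exposing log densities (and, for gradient-based samplers, their derivatives) suffices for inference.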
Monte Carlo simulation of electrothermal atomization on a desktop personal computer
NASA Astrophysics Data System (ADS)
Histen, Timothy E.; Güell, Oscar A.; Chavez, Iris A.; Holcombe, James A.
1996-07-01
Monte Carlo simulations have been applied to electrothermal atomization (ETA) using a tubular atomizer (e.g. graphite furnace) because of the complexity in the geometry, heating, molecular interactions, etc. The intense computational time needed to accurately model ETA often limited its effective implementation to the use of supercomputers. However, with the advent of more powerful desktop processors, this is no longer the case. A C-based program has been developed and can be used under Windows™ or DOS. With this program, basic parameters such as furnace dimensions, sample placement, furnace heating and kinetic parameters such as activation energies for desorption and adsorption can be varied to show the absorbance profile dependence on these parameters. Even data such as the time-dependent spatial distribution of analyte inside the furnace can be collected. The DOS version also permits input of external temperature-time data to permit comparison of simulated profiles with experimentally obtained absorbance data. The run-time versions are provided along with the source code. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by a diskette with a program (PC format), data files and text files.
Optimization of beam shaping assembly based on D-T neutron generator and dose evaluation for BNCT
NASA Astrophysics Data System (ADS)
Naeem, Hamza; Chen, Chaobin; Zheng, Huaqing; Song, Jing
2017-04-01
The feasibility of developing an epithermal neutron beam for a boron neutron capture therapy (BNCT) facility based on a high intensity D-T fusion neutron generator (HINEG) and using the Monte Carlo code SuperMC (Super Monte Carlo simulation program for nuclear and radiation process) is proposed in this study. The Monte Carlo code SuperMC is used to determine and optimize the final configuration of the beam shaping assembly (BSA). The optimal BSA design is a cylindrical geometry consisting of a natural uranium sphere (14 cm) as a neutron multiplier, AlF3 and TiF3 as moderators (20 cm each), Cd (1 mm) as a thermal neutron filter, Bi (5 cm) as a gamma shield, and Pb as a reflector and collimator to guide neutrons towards the exit window. The epithermal neutron beam flux of the proposed model is 5.73 × 10⁹ n/cm²·s, and the other dosimetric parameters for BNCT reported in IAEA-TECDOC-1223 have been verified. The phantom dose analysis shows that the designed BSA is accurate, efficient and suitable for BNCT applications. Thus, the Monte Carlo code SuperMC is capable of simulating the BSA and the dose calculation for BNCT, and a high epithermal flux can be achieved using the proposed BSA.
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
Longo, Mariaconcetta; Marchioni, Chiara; Insero, Teresa; Donnarumma, Raffaella; D'Adamo, Alessandro; Lucatelli, Pierleone; Fanelli, Fabrizio; Salvatori, Filippo Maria; Cannavale, Alessandro; Di Castro, Elisabetta
2016-03-01
This study evaluates X-ray exposure in patients undergoing abdominal extra-vascular interventional procedures by means of Digital Imaging and COmmunications in Medicine (DICOM) image headers and Monte Carlo simulation. The main aim was to assess the effective and equivalent doses, under the hypothesis of their correlation with the dose area product (DAP) measured during each examination. This allows dosimetric information to be collected for each patient and the associated risks to be evaluated without resorting to in vivo dosimetry. The dose calculation was performed in 79 procedures through the Monte Carlo simulator PCXMC (A PC-based Monte Carlo program for calculating patient doses in medical X-ray examinations), by using the real geometrical and dosimetric irradiation conditions, automatically extracted from DICOM headers. The DAP measurements were also validated by using thermoluminescent dosemeters on an anthropomorphic phantom. The expected linear correlation between effective doses and DAP was confirmed, with an R² of 0.974. Moreover, in order to easily calculate patient doses, conversion coefficients that relate equivalent doses to measurable quantities, such as DAP, were obtained.
Highlights of the SEASAT-SASS program - A review
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1983-01-01
Some important concepts of the SEASAT-SASS program are described and some of the decisions made during the program as to methods for relating wind to backscatter are discussed. The radar scatterometer design is analyzed along with the model function, which is an empirical relationship between the backscatter value and the wind speed, wind direction, and incidence angle of the radar beam with the sea surface. The results of Monte Carlo studies of mesoscale turbulence and of studies of wind stress on the sea surface involving SASS are reviewed.
LACIE performance predictor final operational capability program description, volume 3
NASA Technical Reports Server (NTRS)
1976-01-01
The requirements and processing logic for the LACIE Error Model program (LEM) are described. This program is an integral part of the Large Area Crop Inventory Experiment (LACIE) system. LEM is that portion of the LPP (LACIE Performance Predictor) which simulates the sample segment classification, strata yield estimation, and production aggregation. LEM controls repetitive Monte Carlo trials based on input error distributions to obtain statistical estimates of the wheat area, yield, and production at different levels of aggregation. LEM interfaces with the rest of the LPP through a set of data files.
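The repetitive Monte Carlo trial scheme described above, sampling from input error distributions and aggregating to production estimates, can be sketched in a few lines of Python. The area and yield distributions and all numeric values below are hypothetical placeholders, not LACIE parameters:

```python
import random

def simulate_production(n_trials, seed=7):
    # Each trial draws an area estimate (Mha) and a yield estimate (t/ha)
    # from assumed error distributions, then aggregates to production (Mt).
    rng = random.Random(seed)
    productions = []
    for _ in range(n_trials):
        area = rng.gauss(30.0, 2.0)    # hypothetical area estimate and error
        yld = rng.gauss(2.1, 0.15)     # hypothetical yield estimate and error
        productions.append(area * yld)
    return productions

prods = simulate_production(20000)
mean = sum(prods) / len(prods)
sd = (sum((p - mean) ** 2 for p in prods) / len(prods)) ** 0.5
# The spread of the trial outcomes is the statistical estimate of
# production uncertainty produced by the repeated trials.
```

With independent inputs the expected production is simply 30.0 × 2.1 = 63 Mt; the Monte Carlo trials add the distribution around that value.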
NASA Technical Reports Server (NTRS)
Stalnaker, Dale K.
1993-01-01
ACARA (Availability, Cost, and Resource Allocation) is a computer program which analyzes system availability, lifecycle cost (LCC), and resupply scheduling using Monte Carlo analysis to simulate component failure and replacement. This manual was written to: (1) explain how to prepare and enter input data for use in ACARA; (2) explain the user interface, menus, input screens, and input tables; (3) explain the algorithms used in the program; and (4) explain each table and chart in the output.
Cross-platform validation and analysis environment for particle physics
NASA Astrophysics Data System (ADS)
Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.
2017-11-01
A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.
Probabilistic structural analysis using a general purpose finite element program
NASA Astrophysics Data System (ADS)
Riha, D. S.; Millwater, H. R.; Thacker, B. H.
1992-07-01
This paper presents an accurate and efficient method to predict the probabilistic response of structural quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis, with fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation, with excellent accuracy.
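The Monte Carlo benchmark used for verification in such studies can be sketched in Python. The cantilever geometry, load statistics, and deflection limit below are invented for illustration; because the response is linear in the load, a closed-form check is available:

```python
import math
import random

# Brute-force Monte Carlo estimate of a failure probability for a
# cantilever tip deflection delta = P L^3 / (3 E I), with a random
# point load P ~ N(mu_P, sigma_P).  All numbers are illustrative.
L, E, I = 2.0, 200e9, 8e-6     # m, Pa, m^4 (assumed values)
mu_P, sigma_P = 10e3, 2e3      # N
limit = 0.020                  # m, allowable tip deflection

rng = random.Random(42)
n = 200000
failures = 0
for _ in range(n):
    P = rng.gauss(mu_P, sigma_P)
    delta = P * L**3 / (3 * E * I)
    if delta > limit:
        failures += 1
pf_mc = failures / n

# Analytical check: delta is linear in P, so the failure probability
# is P(P > P_crit) with P_crit = 3 E I * limit / L^3.
P_crit = 3 * E * I * limit / L**3
pf_exact = 0.5 * math.erfc((P_crit - mu_P) / (sigma_P * math.sqrt(2.0)))
```

The gap between the brute-force cost here (many samples per digit of accuracy) and a few sensitivity-driven evaluations is what makes fast probability integration attractive.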
Casino physics in the classroom
NASA Astrophysics Data System (ADS)
Whitney, Charles A.
1986-12-01
This article describes a seminar on the elements of probability and random processes that is computer centered and focuses on Monte Carlo simulations of processes such as coin flips, random walks on a lattice, and the behavior of photons and atoms in a gas. Representative computer programs are also described.
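A random walk on a lattice, one of the seminar's example processes, takes only a few lines of Python; the walk length and ensemble size below are arbitrary choices for illustration:

```python
import random

def random_walk_2d(n_steps, rng):
    # Lattice random walk: each step moves one unit in a random direction.
    x = y = 0
    for _ in range(n_steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx
        y += dy
    return x, y

rng = random.Random(0)
n_steps, n_walkers = 100, 5000
endpoints = [random_walk_2d(n_steps, rng) for _ in range(n_walkers)]
msd = sum(x * x + y * y for x, y in endpoints) / n_walkers
# Diffusive scaling: the mean squared displacement grows like the step count.
```

Averaging over many walkers recovers the textbook result that the mean squared displacement equals the number of steps, a typical target for discussion in such a course.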
77 FR 42341 - Proposal Review Panel for Chemistry; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-18
... NATIONAL SCIENCE FOUNDATION Proposal Review Panel for Chemistry; Notice of Meeting In accordance... announces the following meeting: Name: ChemMatCARS Site Visit, 2011 Awardees by NSF Division of Chemistry.... Carlos Murillo, Program Director, Division of Chemistry, Room 1055, National Science Foundation, 4201...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, J; Pelletier, C; Lee, C
Purpose: Organ doses for Hodgkin’s lymphoma patients treated with cobalt-60 radiation were estimated using an anthropomorphic model and Monte Carlo modeling. Methods: A cobalt-60 treatment unit modeled in the BEAMnrc Monte Carlo code was used to produce phase space data. The Monte Carlo simulation was verified with percent depth dose measurement in water at various field sizes. Radiation transport through the lung blocks was modeled by adjusting the weights of phase space data. We imported a precontoured adult female hybrid model and generated a treatment plan. The adjusted phase space data and the human model were imported to the XVMC Monte Carlo code for dose calculation. The organ mean doses were estimated and dose volume histograms were plotted. Results: The percent depth dose agreement between measurement and calculation in water phantom was within 2% for all field sizes. The mean organ doses of heart, left breast, right breast, and spleen for the selected case were 44.3, 24.1, 14.6 and 3.4 Gy, respectively, with the midline prescription dose of 40.0 Gy. Conclusion: Organ doses were estimated for the patient group whose three-dimensional images are not available. This development may open the door to more accurate dose reconstruction and estimates of uncertainties in secondary cancer risk for Hodgkin’s lymphoma patients. This work was partially supported by the intramural research program of the National Institutes of Health, National Cancer Institute, Division of Cancer Epidemiology and Genetics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickens, J.K.
1988-04-01
This document provides a discussion of the development of the FORTRAN Monte Carlo program SCINFUL (for scintillator full response), a program designed to provide a calculated full response anticipated for either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator. The program may also be used to compute angle-integrated spectra of charged particles (p, d, t, 3He, and α) following neutron interactions with 12C. Extensive comparisons with a variety of experimental data are given. There is generally overall good agreement (<10% differences) of results from SCINFUL calculations with measured detector responses, i.e., N(E_r) vs E_r, where E_r is the response pulse height. Calculations reproduce measured detector responses with an accuracy which, at least partly, depends upon how well the experimental configuration is known. For E_n < 16 MeV and for E_r > 15% of the maximum pulse height response, calculated spectra are within ±5% of experiment on average. For E_n up to 50 MeV similar good agreement is obtained with experiment for E_r > 30% of maximum response. For E_n up to 75 MeV the calculated shape of the response agrees with measurements, but the calculations underpredict the measured response by up to 30%. 65 refs., 64 figs., 3 tabs.
Monte Carlo calculations of k_Q, the beam quality conversion factor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, B. R.; Rogers, D. W. O.
2010-11-15
Purpose: To use EGSnrc Monte Carlo simulations to directly calculate beam quality conversion factors, k_Q, for 32 cylindrical ionization chambers over a range of beam qualities and to quantify the effect of systematic uncertainties on Monte Carlo calculations of k_Q. These factors are required to use the TG-51 or TRS-398 clinical dosimetry protocols for calibrating external radiotherapy beams. Methods: Ionization chambers are modeled either from blueprints or manufacturers' user's manuals. The dose-to-air in the chamber is calculated using the EGSnrc user-code egs_chamber using 11 different tabulated clinical photon spectra for the incident beams. The dose to a small volume of water is also calculated in the absence of the chamber at the midpoint of the chamber on its central axis. Using a simple equation, k_Q is calculated from these quantities under the assumption that W/e is constant with energy and compared to TG-51 protocol and measured values. Results: Polynomial fits to the Monte Carlo calculated k_Q factors as a function of beam quality expressed as %dd(10)_x and TPR_10^20 are given for each ionization chamber. Differences are explained between Monte Carlo calculated values and values from the TG-51 protocol or calculated using the computer program used for TG-51 calculations. Systematic uncertainties in calculated k_Q values are analyzed and amount to a maximum of one standard deviation uncertainty of 0.99% if one assumes that photon cross-section uncertainties are uncorrelated and 0.63% if they are assumed correlated. The largest components of the uncertainty are the constancy of W/e and the uncertainty in the cross-section for photons in water. Conclusions: It is now possible to calculate k_Q directly using Monte Carlo simulations. Monte Carlo calculations for most ionization chambers give results which are comparable to TG-51 values.
Discrepancies can be explained using individual Monte Carlo calculations of various correction factors which are more accurate than previously used values. For small ionization chambers with central electrodes composed of high-Z materials, the effect of the central electrode is much larger than that for the aluminum electrodes in Farmer chambers.
An Introduction to Computational Physics
NASA Astrophysics Data System (ADS)
Pang, Tao
2010-07-01
Preface to first edition; Preface; Acknowledgements; 1. Introduction; 2. Approximation of a function; 3. Numerical calculus; 4. Ordinary differential equations; 5. Numerical methods for matrices; 6. Spectral analysis; 7. Partial differential equations; 8. Molecular dynamics simulations; 9. Modeling continuous systems; 10. Monte Carlo simulations; 11. Genetic algorithm and programming; 12. Numerical renormalization; References; Index.
Parental GCA testing: how many crosses per parent?
G.R. Johnson
1998-01-01
The impact of increasing the number of crosses per parent (k) on the efficiency of roguing seed orchards (backwards selection, i.e., reselection of parents) was examined by using Monte Carlo simulation. Efficiencies were examined in light of advanced-generation Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) tree improvement programs where...
A Comparative Study of Exact versus Propensity Matching Techniques Using Monte Carlo Simulation
ERIC Educational Resources Information Center
Itang'ata, Mukaria J. J.
2013-01-01
Often researchers face situations where comparative studies between two or more programs are necessary to make causal inferences for informed policy decision-making. Experimental designs employing randomization provide the strongest evidence for causal inferences. However, many pragmatic and ethical challenges may preclude the use of randomized…
Program Models A Laser Beam Focused In An Aerosol Spray
NASA Technical Reports Server (NTRS)
Barton, J. P.
1996-01-01
Monte Carlo analysis is performed on packets of light. Program for Analysis of Laser Beam Focused Within Aerosol Spray (FLSPRY) developed for theoretical analysis of propagation of laser pulse optically focused within aerosol spray. Applied, for example, to analyze laser ignition arrangement in which focused laser pulse used to ignite liquid aerosol fuel spray. Scattering and absorption of laser light by individual aerosol droplets evaluated by use of electromagnetic Lorenz-Mie theory. Written in FORTRAN 77 for both UNIX-based computers and DEC VAX-series computers. VAX version of program (LEW-16051). UNIX version (LEW-16065).
NASA Technical Reports Server (NTRS)
Jones, W. V.
1973-01-01
Modifications to the basic computer program for performing the simulations are reported. The major changes include: (1) extension of the calculations to include the development of cascades initiated by heavy nuclei, (2) improved treatment of the nuclear disintegrations which occur during the interactions of hadrons in heavy absorbers, (3) incorporation of accurate multi-pion final-state cross sections for various interactions at accelerator energies, (4) restructuring of the program logic so that calculations can be made for sandwich-type detectors, and (5) logic modifications related to execution of the program.
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology, in particular the emergence of multi-core high-performance computers. Parallel computing has therefore become a key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
Martinovich, Viviana
2016-01-01
This text is part of a series of interviews that seek to explore diverse editing and publication experiences and the similar difficulties Latin American journals face, in order to begin to encounter contextualized solutions that articulate previously isolated efforts. In this interview, carried out in July 2015 in the Instituto de Salud Colectiva [Institute of Collective Health] of the Universidad Nacional de Lanús, Carlos Augusto Monteiro speaks to us about funding, work processes, technological innovations, and establishing teams and roles. He analyzes the importance of Latin American journals as a platform for spreading research relevant to national agendas, and the connection between journal performance, the quality of graduate training programs and research quality.
NASA Astrophysics Data System (ADS)
Feldt, Jonas; Miranda, Sebastião; Pratas, Frederico; Roma, Nuno; Tomás, Pedro; Mata, Ricardo A.
2017-12-01
In this work, we present an optimized perturbative quantum mechanics/molecular mechanics (QM/MM) method for use in Metropolis Monte Carlo simulations. The model adopted is particularly tailored for the simulation of molecular systems in solution but can be readily extended to other applications, such as catalysis in enzymatic environments. The electrostatic coupling between the QM and MM systems is simplified by applying perturbation theory to estimate the energy changes caused by a movement in the MM system. This approximation, together with the effective use of GPU acceleration, leads to a negligible added computational cost for the sampling of the environment. Benchmark calculations are carried out to evaluate the impact of the approximations applied and the overall computational performance.
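The key idea, recomputing only a cheap estimate of the energy change for each trial move rather than the full energy, can be sketched in a toy one-dimensional setting. Here a harmonic well stands in for the expensive QM/MM energy and a first-order Taylor expansion stands in for the paper's perturbation-theory estimate; all numbers are illustrative:

```python
import math
import random

def energy(x):
    """Toy stand-in for the expensive 'QM' energy: a harmonic well."""
    return 0.5 * x * x

def gradient(x):
    return x

def sample(n_steps, step_size, beta, rng):
    """Metropolis sampling in which the energy change of a trial move is
    estimated from a first-order (perturbative) expansion around the
    current configuration instead of a full energy recomputation."""
    x = 0.0
    samples = []
    for _ in range(n_steps):
        dx = rng.uniform(-step_size, step_size)
        # Perturbative estimate dE ~ grad * dx; the error is O(dx^2),
        # so small trial moves keep the sampled distribution accurate.
        dE_est = gradient(x) * dx
        if dE_est <= 0 or rng.random() < math.exp(-beta * dE_est):
            x += dx
        samples.append(x)
    return samples

rng = random.Random(7)
chain = sample(200000, step_size=0.2, beta=1.0, rng=rng)
mean = sum(chain) / len(chain)
var = sum((v - mean) ** 2 for v in chain) / len(chain)
```

For this well the exact variance at beta = 1 is 1; the chain should reproduce it to within the first-order bias plus sampling noise.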
NASA Astrophysics Data System (ADS)
Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Ledoux, X.; Laurent, B.; Thomas, J.-C.; Clerc, T.; Desmezières, V.; Dupuis, M.; Madeline, A.; Dessay, E.; Grinyer, G. F.; Grinyer, J.; Menard, N.; Porée, F.; Achouri, L.; Delaunay, F.; Parlog, M.
2018-07-01
Double differential neutron spectra (energy, angle) originating from a thick natCu target bombarded by a 12 MeV/nucleon 36S16+ beam were measured by the activation method and the time-of-flight technique at the Grand Accélérateur National d'Ions Lourds (GANIL). A neutron spectrum unfolding algorithm combining the SAND-II iterative method and Monte Carlo techniques was developed for the analysis of the activation results, which cover a wide range of neutron energies. It was implemented into a graphical user interface program called GanUnfold. The experimental neutron spectra are compared to Monte Carlo simulations performed using the PHITS and FLUKA codes.
Portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele
2018-03-01
Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.
Carlos Castillo-Chavez: a century ahead.
Schatz, James
2013-01-01
When the opportunity to contribute a short essay about Dr. Carlos Castillo-Chavez presented itself in the context of this wonderful birthday celebration, my immediate reaction was ¡por supuesto que sí! (of course!). Sixteen years ago, I travelled to Cornell University with my colleague at the National Security Agency (NSA), Barbara Deuink, to meet Carlos and hear about his vision to expand the talent pool of mathematicians in our country. Our motivation was very simple. First of all, the Agency relies heavily on mathematicians to carry out its mission. If the U.S. mathematics community is not healthy, NSA is not healthy. Keeping our country safe requires a team of the sharpest minds in the nation to tackle amazing intellectual challenges on a daily basis. Second, the Agency cares deeply about diversity. Within the mathematical sciences, students with advanced degrees from the Chicano, Latino, Native American, and African-American communities are underrepresented. It was clear that addressing this issue would require visionary leadership and a long-term commitment. Carlos had the vision for a program that would provide promising undergraduates from minority communities with an opportunity to gain confidence and expertise through meaningful research experiences while sharing in the excitement of mathematical and scientific discovery. His commitment to the venture was unquestionable, and that commitment has not wavered since the inception of the Mathematics and Theoretical Biology Institute (MTBI) in 1996.
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs in order to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
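The rescaling idea itself (sometimes called "white Monte Carlo") can be sketched without a GPU: run one baseline simulation recording the path length of every remitted photon, then, for each new absorption coefficient, reweight those stored paths by Beer-Lambert attenuation instead of re-simulating. The 1-D toy walk below is an invented stand-in for a real 3-D tissue transport code:

```python
import math
import random

def baseline_run(n_photons, rng):
    """One absorption-free random walk per photon; record the total path
    length of each photon that exits back through the surface (is
    'remitted'). This expensive step is done only once."""
    paths = []
    for _ in range(n_photons):
        depth, length = 0.0, 0.0
        while True:
            step = rng.expovariate(1.0)             # scattering mean free path = 1
            length += step
            depth += step * rng.uniform(-1.0, 1.0)  # crude 1-D direction cosine
            if depth <= 0.0:                        # remitted at the surface
                paths.append(length)
                break
            if length > 50.0:                       # photon effectively lost
                break
    return paths

def rescaled_reflectance(paths, mu_a, n_photons):
    """Cheap step, repeated per optical-property set: reweight each stored
    photon by Beer-Lambert attenuation over its recorded path length."""
    return sum(math.exp(-mu_a * L) for L in paths) / n_photons

rng = random.Random(3)
N = 20000
paths = baseline_run(N, rng)
R_low = rescaled_reflectance(paths, mu_a=0.01, n_photons=N)
R_high = rescaled_reflectance(paths, mu_a=1.0, n_photons=N)
```

The rescaling loop is embarrassingly parallel over stored photons, which is what makes it a natural fit for the GPU in the paper.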
[Prevalence of breastfeeding in the city of São Carlos, São Paulo]
Montrone, V G; Arantes, C I
2000-01-01
OBJECTIVE: To verify the prevalence of breastfeeding in the city of São Carlos. METHOD: For the collection and treatment of data regarding the prevalence of breastfeeding we used the LACMAT 3.3 program. During the National Vaccination Day (August 15, 1998), 3,326 persons responsible for children 2 years old or younger were interviewed. RESULTS: It was verified that 52.4% of the children under a month old were exclusively breastfed. Out of 532 children under 4 months old, 73.3% were being breastfed: 37.8% exclusively breastfed and 17.3% under predominant breastfeeding. It was observed that 31.7% of the children under 4 months received some other form of nourishment, such as fruit and mush, and by the fifth month this percentage rose to 62.3%. CONCLUSIONS: The results of this study demonstrate that the situation of breastfeeding in São Carlos is far from what the WHO recommends, thus confirming the need to implement actions of promotion, protection, and support of breastfeeding in the public health services of the municipality.
Space shuttle solid rocket booster recovery system definition, volume 1
NASA Technical Reports Server (NTRS)
1973-01-01
The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
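The statistical technique described, randomly sampling the distribution of a critical water-impact parameter and counting exceedances of a component's strength capability, can be sketched as follows; the normal velocity distribution and the numerical values are hypothetical, not the study's:

```python
import math
import random

def failure_probability(n_samples, v_mean, v_sigma, v_capability, seed=0):
    """Monte Carlo estimate of P(impact velocity > component capability),
    drawing impact velocities from an assumed normal distribution."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_samples)
                   if rng.gauss(v_mean, v_sigma) > v_capability)
    return failures / n_samples

# Hypothetical numbers: 25 m/s mean impact, 5 m/s spread, 30 m/s capability.
p_fail = failure_probability(100000, v_mean=25.0, v_sigma=5.0, v_capability=30.0)

# Analytic check for this one-sigma exceedance: P = 1 - Phi(1).
p_exact = 0.5 * (1.0 - math.erf(1.0 / math.sqrt(2.0)))
```

In the actual study the same counting would be repeated per booster component, with the capability itself possibly drawn from its own distribution.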
Implications of a quadratic stream definition in radiative transfer theory.
NASA Technical Reports Server (NTRS)
Whitney, C.
1972-01-01
An explicit definition of the radiation-stream concept is stated and applied to approximate the integro-differential equation of radiative transfer with a set of twelve coupled differential equations. Computational efficiency is enhanced by distributing the corresponding streams in three-dimensional space in a totally symmetric way. Polarization is then incorporated in this model. A computer program based on the model is briefly compared with a Monte Carlo program for simulation of horizon scans of the earth's atmosphere. It is found to be considerably faster.
Ozone measurement systems improvements studies
NASA Technical Reports Server (NTRS)
Thomas, R. W.; Guard, K.; Holland, A. C.; Spurling, J. F.
1974-01-01
Results are summarized of an initial study of techniques for measuring atmospheric ozone, carried out as the first phase of a program to improve ozone measurement techniques. The study concentrated on two measurement systems, the electrochemical cell (ECC) ozonesonde and the Dobson ozone spectrophotometer, and consisted of two tasks. The first task consisted of error modeling and system error analysis of the two measurement systems. Under the second task, a Monte Carlo model of the Dobson ozone measurement technique was developed and programmed for computer operation.
SOLPOL: A Solar Polarimeter for Hard X-Rays and Gamma-Rays
NASA Technical Reports Server (NTRS)
McConnell, Michael L.
1999-01-01
The goal of this project was to continue the development of a hard X-ray polarimeter for studying solar flares. In earlier work (funded by a previous SR&T grant), we had already achieved several goals, including the following: 1) developed a means of producing a polarized radiation source in the lab that could be used for prototype development; 2) demonstrated the basic Compton scatter polarimeter concept using a simple laboratory setup; 3) used the laboratory results to verify our Monte Carlo simulations; and 4) investigated various detector technologies that could be incorporated into the polarimeter design. For the current one-year program, we wanted to fabricate and test a laboratory science model based on our SOLPOL (Solar Polarimeter) design. The long-term goal of this effort is to develop and test a prototype design that could be used to study flare emissions from either a balloon- or space-borne platform. The current program has achieved its goal of fabricating and testing a science model of the SOLPOL design, although additional testing of the design (and detailed comparison with Monte Carlo simulations) is still desired. This one-year program was extended by six months (no-cost extension) to cover the summer of 1999, when undergraduate student support was available to complete some of the laboratory testing.
An Introduction to Computational Physics - 2nd Edition
NASA Astrophysics Data System (ADS)
Pang, Tao
2006-01-01
Preface to first edition; Preface; Acknowledgements; 1. Introduction; 2. Approximation of a function; 3. Numerical calculus; 4. Ordinary differential equations; 5. Numerical methods for matrices; 6. Spectral analysis; 7. Partial differential equations; 8. Molecular dynamics simulations; 9. Modeling continuous systems; 10. Monte Carlo simulations; 11. Genetic algorithm and programming; 12. Numerical renormalization; References; Index.
Assessing Disease Class-Specific Diagnostic Ability: A Practical Adaptive Test Approach.
ERIC Educational Resources Information Center
Papa, Frank J.; Schumacker, Randall E.
Measures of the robustness of disease class-specific diagnostic concepts could play a central role in training programs designed to assure the development of diagnostic competence. In the pilot study, the authors used disease/sign-symptom conditional probability estimates, Monte Carlo procedures, and artificial intelligence (AI) tools to create…
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
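The technique is straightforward: for U uniform on [a, b], E[f(U)] = (1/(b-a))∫f, so a sample mean of f at uniform random points estimates the integral. The article uses Visual Basic; a minimal Python sketch:

```python
import random

def mc_integral(f, a, b, n, seed=0):
    """Estimate the definite integral of f on [a, b] as (b - a) * E[f(U)],
    with U uniform on [a, b], using a sample mean over n draws."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

estimate = mc_integral(lambda x: x * x, 0.0, 1.0, 200000)  # exact value is 1/3
```

The estimator's standard error shrinks as 1/sqrt(n), independent of the dimension of the integral, which is the method's main appeal.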
NASA Astrophysics Data System (ADS)
Carpenter, Matthew H.; Jernigan, J. G.
2007-05-01
We present examples of an analysis progression consisting of a synthesis of the Photon Clean Method (Carpenter, Jernigan, Brown, Beiersdorfer 2007) and bootstrap methods to quantify errors and variations in many-parameter models. The Photon Clean Method (PCM) works well for model spaces with large numbers of parameters proportional to the number of photons, therefore a Monte Carlo paradigm is a natural numerical approach. Consequently, PCM, an "inverse Monte-Carlo" method, requires a new approach for quantifying errors as compared to common analysis methods for fitting models of low dimensionality. This presentation will explore the methodology and presentation of analysis results derived from a variety of public data sets, including observations with XMM-Newton, Chandra, and other NASA missions. Special attention is given to the visualization of both data and models including dynamic interactive presentations. This work was performed under the auspices of the Department of Energy under contract No. W-7405-Eng-48. We thank Peter Beiersdorfer and Greg Brown for their support of this technical portion of a larger program related to science with the LLNL EBIT program.
MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical to produce periodic power pulses. This periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the point kinetics approximation gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the transfer function TRCL/TR of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during a neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.
A measurement of global event shape distributions in the hadronic decays of the Z 0
NASA Astrophysics Data System (ADS)
Akrawy, M. Z.; Alexander, G.; Allison, J.; Allport, P. P.; Anderson, K. J.; Armitage, J. C.; Arnison, G. T. J.; Ashton, P.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barker, G. J.; Barlow, R. J.; Batley, J. R.; Becker, J.; Behnke, T.; Bell, K. W.; Bella, G.; Bethke, S.; Biebel, O.; Binder, U.; Bloodworth, L. J.; Bock, P.; Breuker, H.; Brown, R. M.; Brun, R.; Buijs, A.; Burckhart, H. J.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chrin, J. T. M.; Cohen, I.; Collins, W. J.; Conboy, J. E.; Couch, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; Debu, P.; Deninno, M. M.; Dieckmann, A.; Dittmar, M.; Dixit, M. S.; Duchovni, E.; Duerdoth, I. P.; Dumas, D.; El Mamouni, H.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Fabbri, F.; Farthouat, P.; Fischer, H. M.; Fong, D. G.; French, M. T.; Fukunaga, C.; Gaidot, A.; Ganel, O.; Gary, J. W.; Gascon, J.; Geddes, N. I.; Gee, C. N. P.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Gibson, V.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Goodrick, M. J.; Gorn, W.; Granite, D.; Gross, E.; Grosse-Wiesmann, P.; Grunhaus, J.; Hagedorn, H.; Hagemann, J.; Hansroul, M.; Hargrove, C. K.; Hart, J.; Hattersley, P. M.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Hemingway, R. J.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Ho, C.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Holl, B.; Homer, R. J.; Hou, S. R.; Howarth, C. P.; Hughes-Jones, R. E.; Igo-Kemenes, P.; Ihssen, H.; Imrie, D. C.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jobes, M.; Jones, R. W. L.; Jovanovic, P.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Kellogg, R. G.; Kennedy, B. W.; Kleinwort, C.; Klem, D. E.; Knop, G.; Kobayashi, T.; Kokott, T. P.; Köpke, L.; Kowalewski, R.; Kreutzmann, H.; von Krogh, J.; Kroll, J.; Kuwano, M.; Kyberd, P.; Lafferty, G. D.; Lamarche, F.; Larson, W. J.; Lasota, M. M. B.; Layter, J. G.; Le Du, P.; Leblanc, P.; Lee, A. 
M.; Lellouch, D.; Lennert, P.; Lessard, L.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Ludwig, J.; Lupu, N.; Ma, J.; MacBeth, A. A.; Mannelli, M.; Marcellini, S.; Maringer, G.; Martin, A. J.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McMahon, T. J.; McPherson, A. C.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michelini, A.; Middleton, R. P.; Mikenberg, G.; Miller, D. J.; Milstene, C.; Minowa, M.; Mohr, W.; Montanari, A.; Mori, T.; Moss, M. W.; Murphy, P. G.; Murray, W. J.; Nellen, B.; Nguyen, H. H.; Nozaki, M.; O'Dowd, A. J. P.; O'Neale, S. W.; O'Neill, B. P.; Oakham, F. G.; Odorici, F.; Ogg, M.; Oh, H.; Oreglia, M. J.; Orito, S.; Pansart, J. P.; Patrick, G. N.; Pawley, S. J.; Pfister, P.; Pilcher, J. E.; Pinfold, J. L.; Plane, D. E.; Poli, B.; Pouladdej, A.; Pritchard, P. W.; Quast, G.; Raab, J.; Redmond, M. W.; Rees, D. L.; Regimbald, M.; Riles, K.; Roach, C. M.; Robins, S. A.; Rollnik, A.; Roney, J. M.; Rossberg, S.; Rossi, A. M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Sanghera, S.; Sansum, R. A.; Sasaki, M.; Saunders, B. J.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; von der Schmitt, H.; Schreiber, S.; Schwarz, J.; Shapira, A.; Shen, B. C.; Sherwood, P.; Simon, A.; Siroli, G. P.; Skuja, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Spreadbury, E. J.; Springer, R. W.; Sproston, M.; Stephens, K.; Stier, H. E.; Ströhmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Tsukamoto, T.; Turner, M. F.; Tysarczyk-Niemeyer, G.; van den Plas, D.; Vandalen, G. J.; Vasseur, G.; Virtue, C. J.; Wagner, A.; Wahl, C.; Ward, C. P.; Ward, D. R.; Waterhouse, J.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weisz, S.; Wermes, N.; Weymann, M.; Wilson, G. W.; Wilson, J. A.; Wingerter, I.; Winterer, V.-H.; Wood, N. C.; Wotton, S.; Wuensch, B.; Wyatt, T. R.; Yaari, R.; Yang, Y.; Yekutieli, G.; Yoshida, T.; Zeuner, W.; Zorn, G. T.
1990-12-01
We present measurements of global event shape distributions in the hadronic decays of the Z 0. The data sample, corresponding to an integrated luminosity of about 1.3 pb-1, was collected with the OPAL detector at LEP. Most of the experimental distributions we present are unfolded for the finite acceptance and resolution of the OPAL detector. Through comparison with our unfolded data, we tune the parameter values of several Monte Carlo computer programs which simulate perturbative QCD and the hadronization of partons. Jetset version 7.2, Herwig version 3.4 and Ariadne version 3.1 all provide good descriptions of the experimental distributions. With their parameter values adjusted at the Z 0 energy, they in addition describe lower-energy data. A complete second-order matrix element Monte Carlo program with a modified perturbation scale is also compared to our 91 GeV data and its parameter values are adjusted. We obtain an unfolded value for the mean charged multiplicity of 21.28±0.04±0.84, where the first error is statistical and the second is systematic.
NASA Astrophysics Data System (ADS)
Jawad, Enas A.
2018-05-01
In this paper, a Monte Carlo simulation program has been used to calculate the electron energy distribution function (EEDF) and electron transport parameters for gas mixtures of trifluoroiodomethane (CF3I), an environmentally friendly gas, with the noble gases argon, helium, krypton, neon and xenon. The electron transport parameters are assessed in the range of E/N (where E is the electric field and N is the gas number density of background gas molecules) between 100 and 2000 Td (1 Townsend = 10^-17 V cm^2) at room temperature. These parameters are the electron mean energy (ε), the density-normalized longitudinal diffusion coefficient (NDL) and the density-normalized mobility (μN). The impact of CF3I in the noble-gas mixtures is strongly apparent in the values of the electron mean energy, the density-normalized longitudinal diffusion coefficient and the density-normalized mobility. The results of the calculation agree well with the experimental results.
A Monte Carlo model for the gardening of the lunar regolith
NASA Technical Reports Server (NTRS)
Arnold, J. R.
1975-01-01
The processes of movement and turnover of the lunar regolith are described by a Monte Carlo model. The movement of material by the direct cratering process is the dominant mode, but slumping is also included for angles exceeding the static angle of repose. Using a group of interrelated computer programs, a large number of properties are calculated, including topography, formation of layers, depth of the disturbed layer, nuclear-track distributions, and cosmogenic nuclides. In the most complex program, the history of a 36-point square array is followed for times up to 400 million years. The histories generated are complex and exhibit great variety. Because a crater covers much less area than its ejecta blanket, the height change at a test point tends to exhibit periods of slow accumulation followed by sudden excavation. In general, the agreement with experiment and observation seems good, but two areas of disagreement stand out. First, the calculated surface is rougher than that observed. Second, the observed bombardment ages, of the order of 400 million years, are shorter than expected (by perhaps a factor of 5).
QuTiP: An open-source Python framework for the dynamics of open quantum systems
NASA Astrophysics Data System (ADS)
Johansson, J. R.; Nation, P. D.; Nori, Franco
2012-08-01
We present an object-oriented open-source framework for solving the dynamics of open quantum systems written in Python. Arbitrary Hamiltonians, including time-dependent systems, may be built up from operators and states defined by a quantum object class, and then passed on to a choice of master equation or Monte Carlo solvers. We give an overview of the basic structure for the framework before detailing the numerical simulation of open system dynamics. Several examples are given to illustrate the build up to a complete calculation. Finally, we measure the performance of our library against that of current implementations. The framework described here is particularly well suited to the fields of quantum optics, superconducting circuit devices, nanomechanics, and trapped ions, while also being ideal for use in classroom instruction. Catalogue identifier: AEMB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 16 482 No. of bytes in distributed program, including test data, etc.: 213 438 Distribution format: tar.gz Programming language: Python Computer: i386, x86-64 Operating system: Linux, Mac OSX, Windows RAM: 2+ Gigabytes Classification: 7 External routines: NumPy (http://numpy.scipy.org/), SciPy (http://www.scipy.org/), Matplotlib (http://matplotlib.sourceforge.net/) Nature of problem: Dynamics of open quantum systems. Solution method: Numerical solutions to Lindblad master equation or Monte Carlo wave function method. Restrictions: Problems must meet the criteria for using the master equation in Lindblad form. Running time: A few seconds up to several tens of minutes, depending on size of underlying Hilbert space.
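QuTiP's Monte Carlo solver implements the quantum-trajectory (Monte Carlo wave function) method for arbitrary systems; for the simplest case, a spontaneously decaying two-level atom, the method reduces to something that can be sketched without the library (this is a stdlib illustration of the underlying algorithm, not QuTiP code). In each trajectory the squared norm of the no-jump state decays as exp(-γt), and a quantum jump to the ground state fires when that norm crosses a uniform random threshold; averaging over trajectories recovers the Lindblad master equation result:

```python
import math
import random

def excited_population(t, gamma, n_traj, seed=0):
    """Monte Carlo wave-function estimate of the excited-state population
    of a decaying two-level atom. The non-Hermitian effective Hamiltonian
    makes the no-jump norm decay as exp(-gamma * t); a jump occurs when
    the squared norm drops below a uniform random threshold r."""
    rng = random.Random(seed)
    still_excited = 0
    for _ in range(n_traj):
        r = rng.random()                 # jump threshold for this trajectory
        t_jump = -math.log(r) / gamma    # time at which exp(-gamma*t) == r
        if t_jump > t:
            still_excited += 1
    return still_excited / n_traj

p_mc = excited_population(t=1.0, gamma=1.0, n_traj=20000)
p_exact = math.exp(-1.0)                 # Lindblad master equation result
```

The trajectory average converges to the master-equation solution as the number of trajectories grows, which is why the two solver families in the framework can be cross-checked against each other.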
NASA Astrophysics Data System (ADS)
Boisson, F.; Wimberley, C. J.; Lehnert, W.; Zahra, D.; Pham, T.; Perkins, G.; Hamze, H.; Gregoire, M.-C.; Reilhac, A.
2013-10-01
Monte Carlo-based simulation of positron emission tomography (PET) data plays a key role in the design and optimization of data correction and processing methods. Our first aim was to adapt and configure the PET-SORTEO Monte Carlo simulation program for the geometry of the widely distributed Inveon PET preclinical scanner manufactured by Siemens Preclinical Solutions. The validation was carried out against actual measurements performed on the Inveon PET scanner at the Australian Nuclear Science and Technology Organisation and at the Brain & Mind Research Institute, strictly following the NEMA NU 4-2008 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction and count rates, image quality and Derenzo phantom studies. Results showed that PET-SORTEO reliably reproduces the performance of this Inveon preclinical system. In addition, imaging studies showed that the PET-SORTEO simulation program provides raw data for the Inveon scanner that can be fully corrected and reconstructed using the same programs as for the actual data. All correction techniques (attenuation, scatter, randoms, dead-time and normalization) can be applied to the simulated data, leading to fully quantitative reconstructed images. In the second part of the study, we demonstrated the program's ability to generate fast and realistic biological studies. PET-SORTEO is a workable and reliable tool that can be used, in a classical way, to validate and/or optimize a single PET data processing step such as a reconstruction method. However, we demonstrated that, by combining realistic simulated biological studies ([11C]Raclopride here) involving different condition groups, simulation also allows one to assess and optimize the data correction, reconstruction and data processing flow as a whole, specifically for each biological study, which is our ultimate intent.
Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2013-04-01
We present preliminary results of a population statistics study of normal pulsars (NPs) from the Galactic disk using Markov Chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement; its advantage is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log of the maximum likelihood obtained from comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics; its advantage is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present assuming initial spatial, kick-velocity, magnetic-field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Physics Program.
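The core loop of such a likelihood-based MCMC, a random-walk Metropolis chain over model parameters that accepts moves by the change in log-likelihood and maps out a confidence region from the retained samples, can be sketched on a deliberately simple stand-in problem (a single Gaussian location parameter rather than a pulsar population model; the data and numbers are invented):

```python
import math
import random

rng = random.Random(1)
# Hypothetical "observed" characteristic: 100 draws from N(mu_true, 1).
mu_true = 2.0
data = [rng.gauss(mu_true, 1.0) for _ in range(100)]

def log_likelihood(mu):
    """Gaussian log-likelihood of the data, up to an additive constant."""
    return -0.5 * sum((d - mu) ** 2 for d in data)

def metropolis_chain(n_steps, step_size):
    """Random-walk Metropolis over the model parameter: accept a trial move
    with probability min(1, exp(delta log-likelihood)). The retained chain
    samples the posterior, so its quantiles give a credible region."""
    mu, logL = 0.0, log_likelihood(0.0)
    chain = []
    for _ in range(n_steps):
        trial = mu + rng.uniform(-step_size, step_size)
        logL_trial = log_likelihood(trial)
        if logL_trial >= logL or rng.random() < math.exp(logL_trial - logL):
            mu, logL = trial, logL_trial
        chain.append(mu)
    return chain

chain = metropolis_chain(20000, step_size=0.3)
burned = chain[2000:]                      # discard burn-in from the cold start
mu_hat = sum(burned) / len(burned)
```

In the population-synthesis setting the scalar log-likelihood would instead be accumulated over the binned two-dimensional distributions of pulsar characteristics, but the accept/reject mechanics are identical.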
DPEMC: A Monte Carlo for double diffraction
NASA Astrophysics Data System (ADS)
Boonekamp, M.; Kúcs, T.
2005-05-01
We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw, to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations. Program summaryTitle of the program:DPEMC, version 2.4 Catalogue identifier: ADVF Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVF Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems Operating system: UNIX; Linux Programming language used: FORTRAN 77 High speed storage required:<25 MB No. of lines in distributed program, including test data, etc.: 71 399 No. of bytes in distributed program, including test data, etc.: 639 950 Distribution format: tar.gz Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. C 20 (2001) 599; Eur. 
Phys. J. C 23 (2002) 311]. This program implements some of the more significant ones, enabling the simulation of central particle production through color singlet exchange between interacting protons or antiprotons. Method of solution: The Monte Carlo method is used to simulate all elementary 2→2 and 2→1 processes available in HERWIG. The color singlet exchanges implemented in DPEMC are implemented as functions reweighting the photon flux already present in HERWIG. Restriction on the complexity of the problem: The program relying extensively on HERWIG, the limitations are the same as in [G. Marchesini, B.R. Webber, G. Abbiendi, I.G. Knowles, M.H. Seymour, L. Stanco, Comput. Phys. Comm. 67 (1992) 465; G. Corcella, I.G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson, M. Seymour, B. Webber, JHEP 0101 (2001) 010]. Typical running time: Approximate times on a 800 MHz Pentium III: 5-20 min per 10 000 unweighted events, depending on the process under consideration.
Elegent—An elastic event generator
NASA Astrophysics Data System (ADS)
Kašpar, J.
2014-03-01
Although elastic scattering of nucleons may look like a simple process, it presents a long-lasting challenge for theory. Due to the missing hard energy scale, perturbative QCD cannot be applied; instead, many phenomenological/theoretical models have emerged. In this paper we present a unified implementation of some of the most prominent models in a C++ library, moreover extended to account for effects of the electromagnetic interaction. The library is complemented with a number of utilities, for instance programs to sample many distributions of interest in four-momentum transfer squared, t, impact parameter, b, and collision energy √{s}. These distributions at ISR, Spp¯S, RHIC, Tevatron and LHC energies are available for download from the project web site, both in the form of ROOT files and PDF figures providing comparisons among the models. The package also includes a tool for Monte Carlo generation of elastic scattering events, which can easily be embedded in any other program framework. Catalogue identifier: AERT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERT_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 10551 No. of bytes in distributed program, including test data, etc.: 126316 Distribution format: tar.gz Programming language: C++. Computer: Any in principle, tested on x86-64 architecture. Operating system: Any in principle, tested on GNU/Linux. RAM: Strongly depends on the task, but typically below 20 MB Classification: 11.6. External routines: ROOT, HepMC Nature of problem: Monte Carlo simulation of elastic nucleon-nucleon collisions Solution method: Implementation of some of the most prominent phenomenological/theoretical models, providing a cumulative distribution function that is used for random event generation. 
Running time: Strongly depends on the task, but typically below 1 h.
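The event-generation strategy described above (building a cumulative distribution function and inverting it for random sampling) can be sketched minimally in Python. The exponential forward-slope model and the numbers below are illustrative assumptions, not Elegent's actual model set:

```python
import math
import random

def sample_t(B, t_max=2.5, rng=random):
    """Sample |t| from dsigma/dt ~ exp(-B*|t|) on [0, t_max] by inverting the CDF."""
    u = rng.random()
    # CDF(t) = (1 - exp(-B*t)) / (1 - exp(-B*t_max)); solve CDF(t) = u for t
    return -math.log(1.0 - u * (1.0 - math.exp(-B * t_max))) / B

rng = random.Random(42)
B = 20.0  # GeV^-2, an illustrative forward-slope value
samples = [sample_t(B, rng=rng) for _ in range(100000)]
mean_t = sum(samples) / len(samples)
# For B*t_max >> 1 the mean of |t| approaches 1/B
```

For models without a closed-form CDF, the same inversion can be done numerically on a tabulated cumulative distribution, which is the general approach the summary describes.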
NASA Astrophysics Data System (ADS)
Mimasu, Ken; Sanz, Verónica; Williams, Ciaran
2016-08-01
We present predictions for the associated production of a Higgs boson at NLO+PS accuracy, including the effect of anomalous interactions between the Higgs and gauge bosons. We present our results in different frameworks, one in which the interaction vertex between the Higgs boson and Standard Model W and Z bosons is parameterized in terms of general Lorentz structures, and one in which Electroweak symmetry breaking is manifestly linear and the resulting operators arise through a six-dimensional effective field theory framework. We present analytic calculations of the Standard Model and Beyond the Standard Model contributions, and discuss the phenomenological impact of the higher order pieces. Our results are implemented in the NLO Monte Carlo program MCFM, and interfaced to shower Monte Carlos through the Powheg box framework.
Comment on ‘egs_brachy: a versatile and fast Monte Carlo code for brachytherapy’
NASA Astrophysics Data System (ADS)
Yegin, Gultekin
2018-02-01
In a recent paper, Chamberland et al (2016 Phys. Med. Biol. 61 8214) develop a new Monte Carlo code called egs_brachy for brachytherapy treatments. It is based on EGSnrc and written in the C++ programming language. In order to benchmark the egs_brachy code, the authors use it in various test case scenarios in which complex geometry conditions exist. Another EGSnrc-based brachytherapy dose calculation engine, BrachyDose, is used for dose comparisons. The authors fail to prove that egs_brachy can produce reasonable dose values for brachytherapy sources in a given medium. The dose comparisons in the paper are erroneous and misleading. egs_brachy should not be used in any further research studies unless and until all the potential bugs are fixed in the code.
Air shower simulation for background estimation in muon tomography of volcanoes
NASA Astrophysics Data System (ADS)
Béné, S.; Boivin, P.; Busato, E.; Cârloganu, C.; Combaret, C.; Dupieux, P.; Fehr, F.; Gay, P.; Labazuy, P.; Laktineh, I.; Lénat, J.-F.; Miallier, D.; Mirabito, L.; Niess, V.; Portal, A.; Vulpescu, B.
2013-01-01
One of the main sources of background for the radiography of volcanoes using atmospheric muons comes from the accidental coincidences produced in the muon telescopes by charged particles belonging to the air shower generated by the primary cosmic ray. In order to quantify this background effect, Monte Carlo simulations of the showers and of the detector are developed by the TOMUVOL collaboration. As a first step, the atmospheric showers were simulated and investigated using two Monte Carlo packages, CORSIKA and GEANT4. We compared the results provided by the two programs for the muonic component of vertical proton-induced showers at three energies: 1, 10 and 100 TeV. We found that the spatial distribution and energy spectrum of the muons were in good agreement for the two codes.
Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M
2007-01-01
We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus.
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade), used to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlos are particularly suited is the study of secondary radiation produced as albedoes in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy ion interactions below 3 GeV/A. 
The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collisions Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC14 (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of the status of this project and a roadmap to its successful completion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamberto, M; Chen, H; Huang, K
2015-06-15
Purpose: To characterize the Cyberknife (CK) robotic system’s dosimetric accuracy in delivering MultiPlan’s Monte Carlo dose calculations, using EBT3 radiochromic film inserted in a thorax phantom. Methods: The CIRS XSight Lung Tracking (XLT) Phantom (model 10823) was used in this study, with custom-cut EBT3 film inserted in the horizontal (coronal) plane inside the lung tissue equivalent phantom. CK MultiPlan v3.5.3 with the Monte Carlo dose calculation algorithm (1.5 mm grid size, 2% statistical uncertainty) was used to calculate a clinical plan for a 25-mm lung tumor lesion, as contoured by the physician, which was then imported onto the XLT phantom CT. Using the same film batch, the net OD to dose calibration curve was obtained using CK with the 60 mm fixed cone by delivering 0–800 cGy. The test films (n=3) were irradiated using 325 cGy to the prescription point. Films were scanned 48 hours after irradiation using an Epson v700 scanner (48-bit color scan, extracted red channel only, 96 dpi). Percent absolute dose and relative isodose distribution differences relative to the planned dose were quantified using an in-house QA software program. MultiPlan Monte Carlo dose calculation was validated using radiochromic film (EBT3) dosimetry and gamma index criteria of 3%/3mm and 2%/2mm for absolute dose and relative isodose distribution measurement comparisons. Results: EBT3 film measurements of the patient plans calculated with Monte Carlo in MultiPlan resulted in an absolute dose passing rate of 99.6±0.4% for the gamma index of 3%/3mm, 10% dose threshold, and 95.6±4.4% for the 2%/2mm, 10% threshold criteria. The measured central axis absolute dose was within 1.2% (329.0±2.5 cGy) of the Monte Carlo planned dose (325.0±6.5 cGy) for that same point. Conclusion: MultiPlan’s Monte Carlo dose calculation was validated using EBT3 film absolute dosimetry for delivery in a heterogeneous thorax phantom.
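The gamma-index criteria quoted above (3%/3mm, 2%/2mm) combine a dose-difference tolerance and a distance-to-agreement tolerance per point. A minimal 1D sketch, with hypothetical dose profiles rather than the study's film data:

```python
import math

def gamma_1d(ref, meas, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Per reference point, gamma = min over measured points of
    sqrt((dose diff / dose tol)^2 + (distance / DTA)^2); gamma <= 1 passes.
    Dose differences are normalized to the reference maximum (global gamma)."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dd = (dm - dr) / (dose_tol * d_max)
            dx = (j - i) * spacing_mm / dta_mm
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

ref = [100.0, 98.0, 90.0, 70.0, 40.0, 10.0]   # planned dose profile (arbitrary units)
meas = [101.0, 97.5, 91.0, 69.0, 41.0, 9.0]   # "measured" profile
g = gamma_1d(ref, meas, spacing_mm=1.0)
pass_rate = sum(1 for x in g if x <= 1.0) / len(g)
```

The passing rates reported in the abstract are the fraction of points with gamma at or below 1 under each tolerance pair, typically after applying a low-dose threshold.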
SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies
ERIC Educational Resources Information Center
Yurdugul, Halil
2009-01-01
This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of their confidence intervals. SIMREL runs in two modes. In the first, when SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
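Coefficient alpha and a simulated (bootstrap-style) confidence interval can be sketched as follows; the data model, sample sizes, and resampling scheme are illustrative assumptions, not SIMREL's implementation:

```python
import random
import statistics

def cronbach_alpha(scores):
    """scores: one row per respondent, each row a list of k item scores."""
    k = len(scores[0])
    items = list(zip(*scores))                       # k columns of item scores
    item_var = sum(statistics.pvariance(col) for col in items)
    total_var = statistics.pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = random.Random(0)
# Hypothetical data: 200 respondents, 5 items driven by one latent trait,
# giving a population alpha of 5*0.5/(1 + 4*0.5) ~ 0.83
data = []
for _ in range(200):
    trait = rng.gauss(0, 1)
    data.append([trait + rng.gauss(0, 1) for _ in range(5)])
alpha = cronbach_alpha(data)

# Percentile bootstrap CI: resample respondents, recompute alpha each time
boots = []
for _ in range(500):
    resample = [data[rng.randrange(len(data))] for _ in range(len(data))]
    boots.append(cronbach_alpha(resample))
boots.sort()
ci = (boots[int(0.025 * 500)], boots[int(0.975 * 500)])
```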
2006-05-31
dynamics (MD) and kinetic Monte Carlo (KMC) procedures. In 2D surface modeling our calculations project speedups of 9 orders of magnitude at 300 degrees...programming is used to perform customized statistical mechanics by bridging the different time scales of MD and KMC quickly and well. Speedups in
An Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 Orbiter and Lander
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Queen, Eric M.; Powell, Richard W.; Braun, Robert D.; Cheatwood, F. McNeil; Aguirre, John T.; Sachi, Laura A.; Lyons, Daniel T.
1998-01-01
An Atmospheric Flight Team was formed by the Mars Surveyor Program '01 mission office to develop aerocapture and precision landing testbed simulations and candidate guidance algorithms. Three- and six-degree-of-freedom Mars atmospheric flight simulations have been developed for testing, evaluation, and analysis of candidate guidance algorithms for the Mars Surveyor Program 2001 Orbiter and Lander. These simulations are built around the Program to Optimize Simulated Trajectories. Subroutines were supplied by Atmospheric Flight Team members for modeling the Mars atmosphere, spacecraft control system, aeroshell aerodynamic characteristics, and other Mars 2001 mission specific models. This paper describes these models and their perturbations applied during Monte Carlo analyses to develop, test, and characterize candidate guidance algorithms.
Solid-propellant rocket motor ballistic performance variation analyses
NASA Technical Reports Server (NTRS)
Sforzini, R. H.; Foster, W. A., Jr.
1975-01-01
Results are presented of research aimed at improving the assessment of off-nominal internal ballistic performance, including tailoff and thrust imbalance, of two large solid-rocket motors (SRMs) firing in parallel. Previous analyses using the Monte Carlo technique were refined to permit evaluation of the effects of radial and circumferential propellant temperature gradients. Sample evaluations of the effect of the temperature gradients are presented. A separate theoretical investigation of the effect of strain rate on the burning rate of propellant indicates that the thermoelastic coupling may cause substantial variations in burning rate during highly transient operating conditions. The Monte Carlo approach was also modified to permit evaluation of the effects on performance of variation in the characteristics between lots of propellants and other materials. This permits the variabilities for the total SRM population to be determined. A sample case shows, however, that the effect of these between-lot variations on thrust imbalances within pairs of SRMs is minor in comparison with the effect of the within-lot variations. The revised Monte Carlo and design analysis computer programs are presented, along with instructions, including format requirements for preparation of input data, and illustrative examples.
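The within-lot versus between-lot distinction can be illustrated with a toy Monte Carlo: when a pair of SRMs is cast from the same propellant lot, the lot-level burn-rate offset cancels in the imbalance, leaving only within-lot scatter. The sigmas below are invented for illustration only:

```python
import random

rng = random.Random(1)
SIGMA_LOT = 0.010    # between-lot burn-rate scatter (fraction of nominal; invented)
SIGMA_UNIT = 0.005   # within-lot scatter (invented)

def pair_imbalance(shared_lot):
    """Relative burn-rate mismatch between two SRMs fired in parallel."""
    if shared_lot:
        lot = rng.gauss(0, SIGMA_LOT)      # common lot offset affects both motors
        a = lot + rng.gauss(0, SIGMA_UNIT)
        b = lot + rng.gauss(0, SIGMA_UNIT)
    else:
        a = rng.gauss(0, SIGMA_LOT) + rng.gauss(0, SIGMA_UNIT)
        b = rng.gauss(0, SIGMA_LOT) + rng.gauss(0, SIGMA_UNIT)
    return a - b

n = 20000
rms_shared = (sum(pair_imbalance(True) ** 2 for _ in range(n)) / n) ** 0.5
rms_mixed = (sum(pair_imbalance(False) ** 2 for _ in range(n)) / n) ** 0.5
# Shared lot: lot offset cancels, rms ~ sqrt(2)*SIGMA_UNIT ~ 0.0071
# Different lots: rms ~ sqrt(2*(SIGMA_LOT**2 + SIGMA_UNIT**2)) ~ 0.0158
```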
A Multi-Objective Optimization Technique to Model the Pareto Front of Organic Dielectric Polymers
NASA Astrophysics Data System (ADS)
Gubernatis, J. E.; Mannodi-Kanakkithodi, A.; Ramprasad, R.; Pilania, G.; Lookman, T.
Multi-objective optimization is an area of decision making that is concerned with mathematical optimization problems involving more than one objective simultaneously. Here we describe two new Monte Carlo methods for this type of optimization in the context of their application to the problem of designing polymers with more desirable dielectric and optical properties. We present results of applying these Monte Carlo methods to a two-objective problem (maximizing the total static band dielectric constant and energy gap) and a three-objective problem (maximizing the ionic and electronic contributions to the static band dielectric constant and energy gap) of a 6-block organic polymer. Our objective functions were constructed from high-throughput DFT calculations of 4-block polymers, following the method of Sharma et al., Nature Communications 5, 4845 (2014) and Mannodi-Kanakkithodi et al., Scientific Reports, submitted. Our high-throughput and Monte Carlo methods of analysis extend to general N-block organic polymers. This work was supported in part by the LDRD DR program of the Los Alamos National Laboratory and in part by a Multidisciplinary University Research Initiative (MURI) Grant from the Office of Naval Research.
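Extracting the Pareto front of non-dominated candidates is the core bookkeeping step in such multi-objective searches. A minimal sketch for two maximization objectives, with invented (dielectric constant, band gap) pairs:

```python
def pareto_front(points):
    """Return the non-dominated points, maximizing every objective.
    q dominates p when q is >= p in all objectives and differs from p."""
    front = []
    for p in points:
        dominated = any(
            all(qi >= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Invented (dielectric constant, band gap) pairs for candidate polymers
cands = [(3.0, 5.0), (4.0, 4.0), (5.0, 2.0), (3.5, 3.5), (2.0, 1.0)]
front = pareto_front(cands)
# (3.5, 3.5) is dominated by (4.0, 4.0); (2.0, 1.0) by every other point
```

A Monte Carlo Pareto search repeatedly proposes new candidates and keeps this non-dominated set updated, so the front improves monotonically as sampling proceeds.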
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickens, J.K.
1988-03-01
This document provides a complete listing of the FORTRAN program SCINFUL, a program designed to provide a calculated full response anticipated for either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator. The incident design neutron energy range is 0.1 to 80 MeV. Preparation of input to the program is discussed, as are important features of the output. Also included is a FORTRAN listing of a subsidiary program applicable to the output of SCINFUL. This user-interactive program, named SCINSPEC, reformats the output of SCINFUL into a standard spectrum form involving either equal light-unit or equal proton-energy intervals. Examples of input to this program and corresponding output are given.
Aoun, Bachir
2016-05-05
A new Reverse Monte Carlo (RMC) package "fullrmc" for atomic or rigid body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide a fully modular, fast and flexible software package, thoroughly documented, supporting complex molecules, written in a modern programming language (Python, Cython, C and C++ when performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure is different from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smarter and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, with almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group. © 2016 Wiley Periodicals, Inc.
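The "traditional RMC" baseline that fullrmc generalizes can be sketched as a Metropolis-like loop on a chi-squared against experimental data. The toy observable below (mean pair separation of points in a 1D box) and all parameters are illustrative stand-ins, not fullrmc's algorithm or data model:

```python
import math
import random

def chi2(positions, target_mean_sep, sigma=0.01):
    """Toy 'experimental datum': mean pair separation of the configuration."""
    seps = [abs(a - b) for i, a in enumerate(positions) for b in positions[i + 1:]]
    mean_sep = sum(seps) / len(seps)
    return ((mean_sep - target_mean_sep) / sigma) ** 2

rng = random.Random(7)
pos = [rng.random() for _ in range(20)]    # 20 "atoms" in a 1D unit box
target = 0.25
cost = chi2(pos, target)
for _ in range(5000):
    i = rng.randrange(len(pos))
    old = pos[i]
    pos[i] = min(1.0, max(0.0, old + rng.gauss(0, 0.05)))  # random single-atom move
    new_cost = chi2(pos, target)
    # Classic RMC acceptance: keep improvements, otherwise accept with
    # probability exp(-delta_chi2 / 2)
    if new_cost > cost and rng.random() >= math.exp(-(new_cost - cost) / 2):
        pos[i] = old        # reject the move, restore the position
    else:
        cost = new_cost
```

fullrmc's innovation, per the abstract, is to replace the blind single-atom move in this loop with customizable group moves guided by reinforcement learning.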
MO-FG-BRA-01: 4D Monte Carlo Simulations for Verification of Dose Delivered to a Moving Anatomy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gholampourkashi, S; Cygler, J E.; The Ottawa Hospital Cancer Centre, Ottawa, ON
Purpose: To validate 4D Monte Carlo (MC) simulations of dose delivery by an Elekta Agility linear accelerator to a moving phantom. Methods: Monte Carlo simulations were performed using the 4DdefDOSXYZnrc/EGSnrc user code, which samples a new geometry for each incident particle and calculates the dose in a continuously moving anatomy. A Quasar respiratory motion phantom with a lung insert containing a 3 cm diameter tumor was used for dose measurements on an Elekta Agility linac with the phantom in stationary and moving states. Dose to the center of the tumor was measured using calibrated EBT3 film and the RADPOS 4D dosimetry system. A VMAT plan covering the tumor was created on the static CT scan of the phantom using Monaco V.5.10.02. A validated BEAMnrc model of our Elekta Agility linac was used for Monte Carlo simulations on stationary and moving anatomies. To compare the planned and delivered doses, linac log files recorded during measurements were used for the simulations. For 4D simulations, deformation vectors that modeled the rigid translation of the lung insert were generated as input to the 4DdefDOSXYZnrc code, as well as the phantom motion trace recorded with RADPOS during the measurements. Results: Monte Carlo simulations and film measurements were found to agree within 2mm/2% for 97.7% of points in the film in the static phantom and 95.5% in the moving phantom. Dose values based on film and RADPOS measurements are within 2% of each other and within 2σ of experimental uncertainties with respect to simulations. Conclusion: Our 4D Monte Carlo simulation using the defDOSXYZnrc code accurately calculates dose delivered to a moving anatomy. Future work will focus on further investigation of VMAT delivery on a moving phantom to improve the agreement between simulation and measurements, as well as establishing the accuracy of our method in a deforming anatomy. 
This work was supported by the Ontario Consortium of Adaptive Interventions in Radiation Oncology (OCAIRO), funded by the Ontario Research Fund Research Excellence program.
NASA Astrophysics Data System (ADS)
Fensin, Michael Lorne
Monte Carlo-linked depletion methods have gained recent interest due to their ability to more accurately model complex 3-dimensional geometries and to better track the evolution of the temporal nuclide inventory by simulating the actual physical process utilizing continuous-energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained, Monte Carlo-linked depletion capability in a well established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permit in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected, and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results of the OECD/NEA Phase IB benchmark, the H. B. Robinson benchmark and the OECD/NEA Phase IVB benchmark are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. 
This capability sets up a significant foundation, in a well established and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic based methods.
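The temporal nuclide inventory problem at the heart of such depletion codes reduces, per transmutation chain, to the Bateman equations. A two-nuclide sketch comparing the analytic solution with a simple explicit-Euler integration; the decay constants and initial inventory are invented for illustration:

```python
import math

# Two-nuclide chain N1 -> N2 -> (removed), with invented constants (1/day)
lam1, lam2 = 0.05, 0.01
N1_0 = 1.0e24

def bateman(t):
    """Analytic Bateman solution for the two-nuclide chain."""
    N1 = N1_0 * math.exp(-lam1 * t)
    N2 = N1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return N1, N2

# Simple explicit-Euler integration of the same rate equations
n1, n2 = N1_0, 0.0
dt = 0.01
for _ in range(int(100 / dt)):            # integrate to t = 100 days
    dn1 = -lam1 * n1
    dn2 = lam1 * n1 - lam2 * n2
    n1 += dn1 * dt
    n2 += dn2 * dt

N1_exact, N2_exact = bateman(100.0)
```

In a Monte Carlo-linked code the effective "decay constants" also include transmutation rates built from flux-weighted cross sections supplied by the transport step, which is the coupling the abstract describes.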
RENEW v3.2 user's manual, maintenance estimation simulation for Space Station Freedom Program
NASA Technical Reports Server (NTRS)
Bream, Bruce L.
1993-01-01
RENEW is a maintenance event estimation simulation program developed in support of the Space Station Freedom Program (SSFP). This simulation uses reliability and maintainability (R&M) and logistics data to estimate both average and time-dependent maintenance demands. The simulation uses Monte Carlo techniques to generate failure and repair times as a function of the R&M and logistics parameters. The estimates are generated for a single type of orbital replacement unit (ORU). The simulation has been in use by the SSFP Work Package 4 prime contractor, Rocketdyne, since January 1991. The RENEW simulation gives closer estimates of performance since it uses a time-dependent approach and depicts more factors affecting ORU failure and repair than steady-state average calculations. RENEW gives both average and time-dependent demand values. Graphs of failures over the mission period and yearly failure occurrences are generated. The average demand rate for the ORU over the mission period is also calculated. While RENEW displays the results in graphs, the results are also available in a data file for further use by spreadsheets or other programs. The process of using RENEW starts with keyboard entry of the R&M and operational data. Once entered, the data may be saved in a data file for later retrieval. The parameters may be viewed and changed after entry using RENEW. The simulation program runs the number of Monte Carlo simulations requested by the operator. Plots and tables of the results can be viewed on the screen or sent to a printer. The results of the simulation are saved along with the input data. Help screens are provided with each menu and data entry screen.
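The kind of Monte Carlo demand estimation described above can be sketched as a renewal process: sample exponential times-to-failure from the MTBF, replace on failure, and average demand counts over many mission runs. The MTBF, mission length, and unit count below are illustrative, not SSFP data:

```python
import random

def simulate_demands(mtbf_hours, mission_hours, n_units, rng):
    """One Monte Carlo mission: each ORU fails at exponentially distributed
    intervals and is replaced immediately (a renewal process); count demands."""
    demands = 0
    for _ in range(n_units):
        t = rng.expovariate(1.0 / mtbf_hours)
        while t < mission_hours:
            demands += 1
            t += rng.expovariate(1.0 / mtbf_hours)
    return demands

rng = random.Random(3)
runs = [simulate_demands(mtbf_hours=50000, mission_hours=87600, n_units=4, rng=rng)
        for _ in range(2000)]
avg = sum(runs) / len(runs)
# Expected demand count = n_units * mission / MTBF = 4 * 87600 / 50000 ~ 7.0
```

Binning the sampled failure times by mission year would yield the time-dependent demand profile that distinguishes this approach from steady-state averages.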
NASA Astrophysics Data System (ADS)
Sandner, Raimar; Vukics, András
2014-09-01
The v2 Milestone 10 release of C++QED is primarily a feature release, which also corrects some problems of the previous release, especially as regards the build system. The adoption of C++11 features has led to many simplifications in the codebase. A full doxygen-based API manual [1] is now provided together with updated user guides. A largely automated, versatile new testsuite directed both towards computational and physics features allows for quickly spotting arising errors. The states of trajectories are now savable and recoverable with full binary precision, allowing for trajectory continuation regardless of evolution method (single/ensemble Monte Carlo wave-function or Master equation trajectory). As the main new feature, the framework now presents Python bindings to the highest-level programming interface, so that actual simulations for given composite quantum systems can now be performed from Python. Catalogue identifier: AELU_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 492422 No. of bytes in distributed program, including test data, etc.: 8070987 Distribution format: tar.gz Programming language: C++/Python. Computer: i386-i686, x86 64. Operating system: In principle cross-platform, as yet tested only on UNIX-like systems (including Mac OS X). RAM: The framework itself takes about 60MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1MB. The memory storing the actual data scales with the system dimension for state-vector manipulations, and the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system. Classification: 4.3, 4.13, 6.2. 
External routines: Boost C++ libraries, GNU Scientific Library, Blitz++, FLENS, NumPy, SciPy Catalogue identifier of previous version: AELU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1381 Does the new version supersede the previous version?: Yes Nature of problem: Definition of (open) composite quantum systems out of elementary building blocks [2,3]. Manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [4] and Monte Carlo wave-function simulation [5]. Solution method: Master equation, Monte Carlo wave-function method Reasons for new version: The new version is mainly a feature release, but it does correct some problems of the previous version, especially as regards the build system. Summary of revisions: We give an example for a typical Python script implementing the ring-cavity system presented in Sec. 3.3 of Ref. [2]: Restrictions: Total dimensionality of the system. Master equation-few thousands. Monte Carlo wave-function trajectory-several millions. Unusual features: Because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs). Additional comments: The framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain. We use several C++11 features which limits the range of supported compilers (g++ 4.7, clang++ 3.1) Documentation, http://cppqed.sourceforge.net/ Running time: Depending on the magnitude of the problem, can vary from a few seconds to weeks. References: [1] Entry point: http://cppqed.sf.net [2] A. Vukics, C++QEDv2: The multi-array concept and compile-time algorithms in the definition of composite quantum systems, Comp. Phys. Comm. 183(2012)1381. [3] A. Vukics, H. Ritsch, C++QED: an object-oriented framework for wave-function simulations of cavity QED systems, Eur. Phys. J. 
D 44 (2007) 585. [4] H. J. Carmichael, An Open Systems Approach to Quantum Optics, Springer, 1993. [5] J. Dalibard, Y. Castin, K. Molmer, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett. 68 (1992) 580.
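The Monte Carlo wave-function (quantum-jump) method referenced above [5] can be sketched for the simplest open system, a decaying two-level atom: evolve the state norm under the effective non-Hermitian Hamiltonian and fire a jump when it crosses a random threshold. This toy is a stand-alone illustration of the scheme, not C++QED code:

```python
import math
import random

def mcwf_emission_time(gamma, dt, rng):
    """One Monte Carlo wave-function trajectory for a decaying two-level atom:
    the squared excited-state amplitude shrinks under the non-Hermitian
    effective Hamiltonian; a quantum jump (photon emission) fires when it
    crosses a random threshold (Dalibard-Castin-Molmer scheme)."""
    decay = math.exp(-gamma * dt)            # norm loss per time step
    threshold = max(rng.random(), 1e-12)     # guard against a zero threshold
    p_excited, t = 1.0, 0.0
    while p_excited > threshold:
        p_excited *= decay
        t += dt
    return t                                 # jump time for this trajectory

rng = random.Random(5)
gamma = 1.0
times = [mcwf_emission_time(gamma, 1e-3, rng) for _ in range(5000)]
mean_t = sum(times) / len(times)
# Jump times are exponentially distributed with mean 1/gamma
```

Averaging observables over many such stochastic trajectories reproduces the Master-equation evolution while storing only a state vector, which is why the framework's trajectory limit (millions of dimensions) far exceeds its Master-equation limit (thousands).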
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft, based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially one without the shielding protection of the Earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need detailed projections of radiation exposures at specified locations within the spacecraft, either for specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
NRC/AMRMC Resident Research Associateship Program
2015-04-01
Sanchez, Carlos, 9/7/2011-9/6/2014. (1) Developed in vitro biofilm assays to test susceptibility to various antimicrobials (and antiseptics) as well as to... utility of combinations of biofilm dispersal agents and antimicrobials as an alternate therapy for targeting bacterial biofilms. (4) Through... collaborative efforts contributed to the development of biomaterials capable of delivering biofilm dispersal agents (alone or in combination with antimicrobials
Demand Forecasting: DLA’S Aviation Supply Chain High Value Products
2015-04-09
program at USS CONSTELLATION (CV 64), San Diego, CA. LCDR Carlos Lopez. Education: MBA in Supply Chain Management, Naval Postgraduate School; BS in... [List-of-figures excerpt: Figure 80, NIIN 01-463-4340 Seasonal Exponential Smoothing Forecast; NIIN 01-507-5310 Seasonal Exponential Smoothing; Figure 102, NIIN 01-507-5310 12-Month Forecast Simulation.]
The Fixed Target Experiment for Studies of Baryonic Matter at the Nuclotron (BM@N)
NASA Astrophysics Data System (ADS)
Kapishin, M. N.
2017-12-01
BM@N (Baryonic Matter at Nuclotron) is the first experiment to be realized at the NICA-Nuclotron accelerator complex. The aim of the BM@N experiment is to study relativistic heavy ion beam interactions with fixed targets. The BM@N setup, results of Monte Carlo simulations, and the BM@N experimental program are presented.
DOT National Transportation Integrated Search
1978-05-01
The User Delay Cost Model (UDCM) is a Monte Carlo simulation of certain classes of movement of air traffic in the Boston Terminal Control Area (TCA). It incorporates a weather module, an aircraft generation module, a facilities module, and an air con...
Dosimetric investigation of proton therapy on CT-based patient data using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Chongsan, T.; Liamsuwan, T.; Tangboonduangjit, P.
2016-03-01
The aim of radiotherapy is to deliver a high radiation dose to the tumor with a low radiation dose to healthy tissues. Protons have Bragg peaks that give a high radiation dose to the tumor but a low exit dose or dose tail. Therefore, proton therapy is promising for treating deep-seated tumors and tumors located close to organs at risk. Moreover, the physical characteristic of protons is suitable for treating cancer in pediatric patients. This work developed a computational platform for calculating proton dose distribution using the Monte Carlo (MC) technique and the patient's anatomical data. The studied case is a pediatric patient with a primary brain tumor. PHITS will be used for the MC simulation. Therefore, patient-specific CT-DICOM files were converted to the PHITS input. A MATLAB optimization program was developed to create a beam delivery control file for this study. The optimization program requires the proton beam data. All these data were calculated in this work using analytical formulas, and the calculation accuracy was tested before the beam delivery control file is used for MC simulation. This study will be useful for researchers aiming to investigate proton dose distribution in patients but who do not have access to proton therapy machines.
NASA Astrophysics Data System (ADS)
De Lucas, Javier
2015-03-01
A simple geometrical model for calculating the effective emissivity in blackbody cylindrical cavities has been developed. The back ray tracing technique and the Monte Carlo method have been employed, making use of a suitable set of coordinates and auxiliary planes. In these planes, the trajectories of individual photons in the successive reflections between the cavity points are followed in detail. The theoretical model is implemented by using simple numerical tools, programmed in Microsoft Visual Basic for Applications and Excel. The algorithm is applied to isothermal and non-isothermal diffuse cylindrical cavities with a lid; however, the basic geometrical structure can be generalized to a cylindro-conical shape and specular reflection. Additionally, the numerical algorithm and the program source code can be used, with minor changes, for determining the distribution of the cavity points where photon absorption takes place. This distribution could be applied to the study of the influence of thermal gradients on the effective emissivity profiles, for example. Validation is performed by analyzing the convergence of the Monte Carlo method as a function of the number of trials and by comparison with published results of different authors.
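The back ray tracing idea can be caricatured in a few lines: a photon traced from the detector is absorbed at each wall hit with the wall emissivity, and otherwise may leave through the aperture. The sketch below (Python rather than the VBA implementation the abstract describes) replaces the real cylindrical view-factor geometry with a single scalar escape probability per reflection, which is an illustrative assumption, not the paper's model.

```python
import random

def effective_emissivity_mc(eps_wall, p_escape, n_photons=200_000, seed=1):
    """Toy Monte Carlo estimate of a cavity's effective emissivity.

    A back-traced photon bounces inside the cavity; at each wall hit it is
    absorbed with probability eps_wall (by reciprocity, the absorbed
    fraction equals the effective emissivity), otherwise it is reflected
    and escapes through the aperture with probability p_escape.  Real
    cavity codes replace the scalar p_escape with the view factor of each
    reflection point.
    """
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_photons):
        while True:
            if rng.random() < eps_wall:   # absorbed at this reflection
                absorbed += 1
                break
            if rng.random() < p_escape:   # reflected out of the aperture
                break
    return absorbed / n_photons
```

For this toy geometry the geometric series sums analytically to eps / (eps + p_escape - eps * p_escape), which gives a convergence check of the kind the abstract describes.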
Investigation of laser Doppler techniques using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Ruetten, Walter; Gellekum, Thomas; Jessen, Katrin
1995-01-01
Laser Doppler techniques are increasingly used in research and clinical applications to study perfusion phenomena in the skin, yet the influences of changing scattering parameters and geometry on the measure of perfusion are not well explored. To investigate these influences, a simulation program based on the Monte Carlo method was developed, which is capable of determining the Doppler spectra caused by moving red blood cells. The simulation model allows for the definition of arbitrary networks of blood vessels with individual velocities. The volume is represented by a voxel tree with adaptive spatial resolution, which contains references to the optical properties and is used to store the location-dependent photon fluence determined during the simulation. Two evaluation methods for Doppler spectra from biological tissue described in the literature were investigated with the simulation program. The results obtained suggest that both methods give a measure of perfusion nearly proportional to the velocity of the red blood cells. However, simulations done with different geometries of the blood vessels seem to indicate a nonlinear behavior concerning the concentration of red blood cells in the measurement volume. Nevertheless, these simulation results may help in the interpretation of measurements obtained from devices using the investigated evaluation methods.
EGRET High Energy Capability and Multiwavelength Flare Studies and Solar Flare Proton Spectra
NASA Technical Reports Server (NTRS)
Chupp, Edward L.
1997-01-01
UNH was assigned the responsibility to use their accelerator neutron measurements to verify the TASC response function and to modify the TASC fitting program to include a high energy neutron contribution. Direct accelerator-based measurements by UNH of the energy-dependent efficiencies for detecting neutrons with energies from 36 to 720 MeV in NaI were compared with Monte Carlo TASC calculations. The calculated TASC efficiencies are somewhat lower (by about 20%) than the accelerator results in the energy range 70-300 MeV. The measured energy-loss spectrum for 207 MeV neutron interactions in NaI was compared with the Monte Carlo response for 200 MeV neutrons in the TASC, indicating good agreement. Based on this agreement, the simulation was considered to be sufficiently accurate to generate a neutron response library to be used by UNH in modifying the TASC fitting program to include a neutron component in the flare spectrum modeling. TASC energy-loss data on the 1991 June 11 flare were transferred to UNH. Also included as an appendix: Gamma-rays and neutrons as a probe of flare proton spectra: the solar flare of 11 June 1991.
Streamlining resummed QCD calculations using Monte Carlo integration
Farhi, David; Feige, Ilya; Freytsis, Marat; ...
2016-08-18
Some of the most arduous and error-prone aspects of precision resummed calculations are related to the partonic hard process, having nothing to do with the resummation. In particular, interfacing to parton-distribution functions, combining various channels, and performing the phase space integration can be limiting factors in completing calculations. Conveniently, however, most of these tasks are already automated in many Monte Carlo programs, such as MadGraph [1], Alpgen [2] or Sherpa [3]. In this paper, we show how such programs can be used to produce distributions of partonic kinematics with associated color structures representing the hard factor in a resummed distribution. These distributions can then be used to weight convolutions of jet, soft and beam functions, producing a complete resummed calculation. In fact, only around 1000 unweighted events are necessary to produce precise distributions. A number of examples and checks are provided, including e+e- two- and four-jet event shapes, n-jettiness and jet-mass related observables at hadron colliders at next-to-leading-log (NLL) matched to leading order (LO). Furthermore, the attached code can be used to modify MadGraph to export the relevant LO hard functions and color structures for arbitrary processes.
Recursive model for the fragmentation of polarized quarks
NASA Astrophysics Data System (ADS)
Kerbizi, A.; Artru, X.; Belghobsi, Z.; Bradamante, F.; Martin, A.
2018-04-01
We present a model for Monte Carlo simulation of the fragmentation of a polarized quark. The model is based on string dynamics and the 3P0 mechanism of quark pair creation at string breaking. The fragmentation is treated as a recursive process, where the splitting function of the subprocess q → h + q' depends on the spin density matrix of the quark q. The 3P0 mechanism is parametrized by a complex mass parameter μ, the imaginary part of which is responsible for single spin asymmetries. The model has been implemented in a Monte Carlo program to simulate jets made of pseudoscalar mesons. Results for single hadron and hadron pair transverse-spin asymmetries are found to be in agreement with experimental data from SIDIS and e+e- annihilation. The model predictions on the jet-handedness are also discussed.
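The recursion itself is easy to sketch. The toy chain below shows only the structure of the repeated splitting q → h + q'; the flat splitting function is a placeholder assumption, and the string dynamics, spin density matrix and 3P0 mechanism of the actual model are omitted entirely.

```python
import random

def fragment(E, z_min=0.1, seed=None):
    """Sketch of a recursive (unpolarized) fragmentation chain.

    A quark of energy E (GeV) repeatedly splits q -> h + q': the hadron
    takes a fraction z of the quark's energy, drawn here from a flat
    stand-in splitting function, and the chain stops when the leftover
    quark energy falls below a 1 GeV cutoff.
    """
    rng = random.Random(seed)
    hadrons = []
    while E > 1.0:                    # cutoff ends the recursion
        z = rng.uniform(z_min, 1.0)   # placeholder splitting function
        hadrons.append(z * E)         # hadron carries off z*E
        E *= (1.0 - z)                # remaining quark energy
    return hadrons
```

In a spin-dependent model the draw of z (and of the hadron species) would be conditioned on the quark's spin density matrix, which is then updated at every step.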
Finding Effective Models in Transition Metals using Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Williams, Kiel; Wagner, Lucas K.
There is a gap between high-accuracy ab-initio calculations, like those produced from Quantum Monte Carlo (QMC), and effective lattice models such as the Hubbard model. We have developed a method that combines data produced from QMC with fitting techniques taken from data science, allowing us to determine which degrees of freedom are required to connect ab-initio and model calculations. We test this approach for transition metal atoms, where spectroscopic reference data exists. We report on the accuracy of several derived effective models that include different degrees of freedom, and comment on the quality of the parameter values we obtain from our fitting procedure. We gratefully acknowledge funding from the National Science Foundation Graduate Research Fellowship Program under Grant Number DGE-1144245 (K.T.W.) and from SciDAC Grant DE-FG02-12ER46875 (L.K.W.).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel
2016-10-01
A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free carriers. The second step is stochastic and models electron–electron collisions using Monte Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
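The deterministic/stochastic split can be illustrated schematically. The sketch below is not the paper's C++11/FORTRAN implementation; the single-rate ionization term and the constant collision probability are placeholder assumptions, and the point is only how one time step alternates a rate-equation update with Monte Carlo collision trials.

```python
import random

def split_step(n_e, dt, gen_rate, coll_prob, rng):
    """One split step of a combined deterministic/stochastic scheme (toy).

    Step 1 (deterministic): advance a single-rate equation for the
    free-electron density, dn/dt = gen_rate, a stand-in for the extended
    multi-rate equation of the paper.
    Step 2 (stochastic): run Monte Carlo electron-electron collision
    trials over the ensemble; a collision here only counts a
    redistribution event and does not change the density.
    Returns the new density and the number of collision events.
    """
    n_e = n_e + gen_rate * dt                       # deterministic step
    collisions = sum(1 for _ in range(int(n_e))
                     if rng.random() < coll_prob * dt)  # stochastic step
    return n_e, collisions
```

A full simulation would loop this step over the pulse duration, with the stochastic stage actually reshuffling the electron energy distribution.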
NASA Astrophysics Data System (ADS)
Vukics, András
2012-06-01
C++QED is a versatile framework for simulating open quantum dynamics. It allows one to build arbitrarily complex quantum systems from elementary free subsystems and interactions, and to simulate their time evolution with the available time-evolution drivers. Through this framework, we introduce a design which should be generic for high-level representations of composite quantum systems. It relies heavily on the object-oriented and generic programming paradigms on the one hand, and on the other hand on compile-time algorithms, in particular C++ template-metaprogramming techniques. The core of the design is the data structure which represents the state vectors of composite quantum systems. This data structure models the multi-array concept. The use of template metaprogramming is not only crucial to the design, but with its use all computations pertaining to the layout of the simulated system can be shifted to compile time, hence reducing runtime. Program summary Program title: C++QED Catalogue identifier: AELU_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: http://cpc.cs.qub.ac.uk/licence/aelu_v1_0.html. The C++QED package contains other software packages, Blitz, Boost and FLENS, all of which may be distributed freely but have individual license requirements. Please see individual packages for license conditions. No. of lines in distributed program, including test data, etc.: 597 974 No. of bytes in distributed program, including test data, etc.: 4 874 839 Distribution format: tar.gz Programming language: C++ Computer: i386-i686, x86_64 Operating system: In principle cross-platform, as yet tested only on UNIX-like systems (including Mac OS X). RAM: The framework itself takes about 60 MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1 MB. 
The memory storing the actual data scales with the system dimension for state-vector manipulations, and with the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system. Classification: 4.3, 4.13, 6.2, 20 External routines: Boost C++ libraries (http://www.boost.org/), GNU Scientific Library (http://www.gnu.org/software/gsl/), Blitz++ (http://www.oonumerics.org/blitz/), FLENS (Flexible Library for Efficient Numerical Solutions, http://flens.sourceforge.net/). Nature of problem: Definition of (open) composite quantum systems out of elementary building blocks [1]. Manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [2] and Monte Carlo wave-function simulation [3]. Solution method: Master equation, Monte Carlo wave-function method. Restrictions: Total dimensionality of the system. Master equation: few thousands. Monte Carlo wave-function trajectory: several millions. Unusual features: Because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs). Additional comments: The framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain. Supplementary information: http://cppqed.sourceforge.net/. Running time: Depending on the magnitude of the problem, can vary from a few seconds to weeks.
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
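The core of the approach, turning a set of observed flight error sets into a covariance matrix that can then be propagated, can be sketched in a few lines. This is an illustrative stand-in, not the STEP code; the sample data and the ordering of the trajectory parameters are assumptions.

```python
def error_covariance(error_sets):
    """Form a trajectory-error covariance matrix from observed flight
    error sets: each row holds one flight's burnout errors in the
    trajectory parameters (e.g. altitude, velocity, flight-path angle).
    Pure-Python stand-in for the Monte Carlo / covariance machinery.
    """
    n = len(error_sets)            # number of flights (~50 for Scout)
    k = len(error_sets[0])         # number of trajectory parameters
    means = [sum(row[j] for row in error_sets) / n for j in range(k)]
    cov = [[sum((row[i] - means[i]) * (row[j] - means[j])
                for row in error_sets) / (n - 1)
            for j in range(k)] for i in range(k)]
    return cov
```

Monte Carlo trials then draw correlated error vectors consistent with this matrix, and the matrix itself can be propagated through the trajectory dynamics to orbit-parameter statistics.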
NASA Astrophysics Data System (ADS)
Gupta, Sourendu; Majumdar, Pushan
2018-07-01
We present the results of an effort to accelerate a Rational Hybrid Monte Carlo (RHMC) program for lattice quantum chromodynamics (QCD) simulation for 2 flavors of staggered fermions on multiple Kepler K20X GPUs distributed over different nodes of a Cray XC30. We do not use CUDA but adopt a higher-level, directive-based programming approach using the OpenACC platform. The lattice QCD algorithm is known to be bandwidth bound; our timing results illustrate this clearly, and we discuss how this limits the parallelization gains. We achieve more than a factor-of-three speed-up compared to the CPU-only MPI program.
NOTE: MCDE: a new Monte Carlo dose engine for IMRT
NASA Astrophysics Data System (ADS)
Reynaert, N.; DeSmedt, B.; Coghe, M.; Paelinck, L.; Van Duyse, B.; DeGersem, W.; DeWagter, C.; DeNeve, W.; Thierens, H.
2004-07-01
A new accurate Monte Carlo code for IMRT dose computations, MCDE (Monte Carlo dose engine), is introduced. MCDE is based on BEAMnrc/DOSXYZnrc and consequently on the accurate EGSnrc electron transport. DOSXYZnrc is reprogrammed as a component module for BEAMnrc. In this way both codes are interconnected elegantly, while maintaining the BEAM structure, and only minimal changes to BEAMnrc.mortran are necessary. The treatment head of the Elekta SLiplus linear accelerator is modelled in detail. CT grids consisting of up to 200 slices of 512 × 512 voxels can be introduced and up to 100 beams can be handled simultaneously. The beams and CT data are imported from the treatment planning system GRATIS via a DICOM interface. To enable the handling of up to 50 × 10⁶ voxels, the system was programmed in Fortran95 to enable dynamic memory management. All region-dependent arrays (dose, statistics, transport arrays) were redefined. A scoring grid was introduced and superimposed on the geometry grid, in order to limit the number of scoring voxels. The whole system uses approximately 200 MB of RAM and runs on a PC cluster consisting of 38 1.0 GHz processors. A set of in-house scripts handles the parallelization and the centralization of the Monte Carlo calculations on a server. As an illustration of MCDE, a clinical example is discussed and compared with collapsed cone convolution calculations. At present, the system is still rather slow and is intended to be a tool for reliable verification of IMRT treatment planning in the presence of tissue inhomogeneities such as air cavities.
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Brualla, L.
2018-04-01
Monte Carlo simulation of radiation transport is computationally demanding when reasonably low statistical uncertainties of the estimated quantities are required. Therefore, it can benefit to a large extent from high-performance computing. This work is aimed at assessing the performance of the first generation of the many-integrated-core architecture (MIC) Xeon Phi coprocessor with respect to that of a CPU consisting of a double 12-core Xeon processor in Monte Carlo simulation of coupled electron-photon showers. The comparison was made twofold: first, through a suite of basic tests, including parallel versions of the random number generators Mersenne Twister and a modified implementation of RANECU, intended to establish a baseline comparison between both devices; secondly, through the pDPM code developed in this work. pDPM is a parallel version of the Dose Planning Method (DPM) program for fast Monte Carlo simulation of radiation transport in voxelized geometries. A variety of techniques aimed at obtaining a large scalability on the Xeon Phi were implemented in pDPM. Maximum scalabilities of 84.2× and 107.5× were obtained on the Xeon Phi for simulations of electron and photon beams, respectively. Nevertheless, in none of the tests involving radiation transport did the Xeon Phi perform better than the CPU. The disadvantage of the Xeon Phi with respect to the CPU owes to the low performance of a single core of the former. A single core of the Xeon Phi was more than 10 times less efficient than a single core of the CPU for all radiation transport simulations.
Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.
Stierstorfer, Karl
2018-01-01
To find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low-flux limit. Formulas for the spatial cross-talk, the noise power spectrum and the DQE of a photon-counting detector working at a given threshold are derived. The parameters are probabilities for types of events, such as a single count in the central pixel, double counts in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model by extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material. A simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation randomizing the location of impact, which finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. The frequency-dependent DQE of a photon-counting detector in the low-flux limit can be described by an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
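A one-dimensional caricature of the charge-cloud stage shows how randomizing the impact location yields the event-type probabilities. The pixel pitch, cloud width and threshold below are illustrative assumptions, not fitted MOCASSIM parameters, and all out-of-pixel charge is lumped into a single neighbor.

```python
import math
import random

def count_probabilities(pixel=1.0, sigma=0.2, threshold=0.3,
                        n=100_000, seed=7):
    """Monte Carlo estimate of per-event count probabilities for a
    photon-counting pixel (1-D toy version of a charge-cloud model).

    A photon lands at a random position x in the central pixel; its
    Gaussian charge cloud of width sigma deposits in the central pixel
    the fraction of charge between the pixel edges, with the remainder
    lumped into the neighbor.  A pixel counts when its charge fraction
    exceeds the threshold.
    Returns (p_single_center, p_double, p_neighbor_only).
    """
    rng = random.Random(seed)
    phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # normal CDF
    single = double = neigh = 0
    for _ in range(n):
        x = rng.uniform(0.0, pixel)
        f_center = phi((pixel - x) / sigma) - phi(-x / sigma)  # charge kept
        c = f_center > threshold
        nb = (1.0 - f_center) > threshold
        if c and nb:
            double += 1
        elif c:
            single += 1
        elif nb:
            neigh += 1
    return single / n, double / n, neigh / n
```

These probabilities are exactly the kind of input the derived DQE formulas consume; a real model would use 2-D clouds, energy-dependent thresholds and fitted widths.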
Simulating supersymmetry at the SSC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, R.M.; Haber, H.E.
1984-08-01
Careful study of supersymmetric signatures at the SSC is required in order to distinguish them from Standard Model physics backgrounds. To this end, we have created an efficient, accurate computer program which simulates the production and decay of supersymmetric particles (or other new particles). We have incorporated the full matrix elements, keeping track of the polarizations of all intermediate states. (At this time, hadronization of final-state partons is ignored.) Using Monte Carlo techniques, this program can generate any desired final-state distribution or individual events for Lego plots. Examples of the results of our study of supersymmetry at the SSC are provided.
The LBM program at the EPFL/LOTUS Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
File, J.; Jassby, D.L.; Tsang, F.Y.
1986-11-01
An experimental program of neutron transport studies of the Lithium Blanket Module (LBM) is being carried out with the LOTUS point-neutron source facility at Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. Preliminary experiments use passive neutron dosimetry within the fuel rods in the LBM central zone, as well as both thermal extraction and dissolution methods to assay tritium bred in Li₂O diagnostic wafers and LBM pellets. These measurements are being compared and reconciled with each other and with the predictions of two-dimensional discrete-ordinates and continuous-energy Monte Carlo analyses of the LOTUS/LBM system.
Ultraviolet Communication for Medical Applications
2014-05-01
parent company Imaging Systems Technology (IST) demonstrated feasibility of several key concepts, which are being developed into a working prototype in the... program using multiple high-end GPUs (NVIDIA Tesla K20). Finally, the Monte Carlo simulation task will be resumed after the Milestone 2 demonstration... is acceptable for automated printing and handling. Next, the option of having our shells electroded by an external company was investigated, and DEI
GPU accelerated Monte-Carlo simulation of SEM images for metrology
NASA Astrophysics Data System (ADS)
Verduin, T.; Lokhorst, S. R.; Hagen, C. W.
2016-03-01
In this work we address the computation times of numerical studies in dimensional metrology. In particular, full Monte-Carlo simulation programs for scanning electron microscopy (SEM) image acquisition are known to be notoriously slow. Our quest in reducing the computation time of SEM image simulation has led us to investigate the use of graphics processing units (GPUs) for metrology. We have succeeded in creating a full Monte-Carlo simulation program for SEM images, which runs entirely on a GPU. The physical scattering models of this GPU simulator are identical to a previous CPU-based simulator, which includes the dielectric function model for inelastic scattering and also refinements for low-voltage SEM applications. As a case study for the performance, we considered the simulated exposure of a complex feature: an isolated silicon line with rough sidewalls located on a flat silicon substrate. The surface of the rough feature is decomposed into 408 012 triangles. We have used an exposure dose of 6 mC/cm2, which corresponds to 6 553 600 primary electrons on average (Poisson distributed). We repeat the simulation for various primary electron energies: 300 eV, 500 eV, 800 eV, 1 keV, 3 keV and 5 keV. At first we run the simulation on a GeForce GTX480 from NVIDIA. The very same simulation is duplicated on our CPU-based program, for which we have used an Intel Xeon X5650. Apart from statistics in the simulation, no difference is found between the CPU and GPU simulated results. The GTX480 generates the images (depending on the primary electron energy) 350 to 425 times faster than a single-threaded Intel X5650 CPU. Although this is a tremendous speedup, we actually have not reached the maximum throughput because of the limited amount of available memory on the GTX480. Nevertheless, the speedup enables the fast acquisition of simulated SEM images for metrology. 
We now have the potential to investigate case studies in CD-SEM metrology, which otherwise would take unreasonable amounts of computation time.
Slimani, Faiçal A A; Hamdi, Mahdjoub; Bentourkia, M'hamed
2018-05-01
Monte Carlo (MC) simulation is widely recognized as an important technique for studying the physics of particle interactions in nuclear medicine and radiation therapy. There are different codes dedicated to dosimetry applications and widely used today in research or in clinical application, such as MCNP, EGSnrc and Geant4. However, while such codes make the physics easier, the programming remains a tedious task even for physicists familiar with computer programming. In this paper we report the development of a new interface, GEANT4 Dose And Radiation Interactions (G4DARI), based on GEANT4 for absorbed dose calculation and for particle tracking in humans, small animals and complex phantoms. The calculation of the absorbed dose is performed based on 3D CT human or animal images in DICOM format, from images of phantoms, or from solid volumes which can be made from any pure or composite material specified by its molecular formula. G4DARI offers menus to the user and tabs to be filled with values or chemical formulas. The interface is described and, as an application, we show results obtained in a lung tumor in a digital mouse irradiated with seven energy beams and in a patient with glioblastoma irradiated with five photon beams. In conclusion, G4DARI can be easily used by any researcher without the need to be familiar with computer programming, and it will be freely available as an application package. Copyright © 2018 Elsevier Ltd. All rights reserved.
Method and program product for determining a radiance field in an optical environment
NASA Technical Reports Server (NTRS)
Reinersman, Phillip N. (Inventor); Carder, Kendall L. (Inventor)
2007-01-01
A hybrid method is presented by which Monte Carlo techniques are combined with iterative relaxation techniques to solve the Radiative Transfer Equation in arbitrary one-, two- or three-dimensional optical environments. The optical environments are first divided into contiguous regions, or elements, with Monte Carlo techniques then being employed to determine the optical response function of each type of element. The elements are combined, and the iterative relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. This hybrid model is capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities. It is also capable of providing estimates of the subaerial light field for structured, absorbing or non-absorbing environments, such as shadows of mountain ranges within and without absorption spectral bands such as water vapor or CO₂ bands.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
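The first of those techniques, replacing a linear grid search with a binary one, is easy to illustrate outside FORTRAN. The sketch below (Python, with an assumed energy-grid interval lookup as the hot spot) shows the behavior-preserving swap: both functions return the same interval index, but the second is O(log n).

```python
import bisect

def find_interval_linear(grid, x):
    """Linear scan for the grid interval containing x, i.e. the index i
    with grid[i] <= x < grid[i+1] (the pattern profiling typically flags
    as a hot spot in transport-table lookups)."""
    for i in range(len(grid) - 1):
        if grid[i] <= x < grid[i + 1]:
            return i
    raise ValueError("x outside grid")

def find_interval_binary(grid, x):
    """Equivalent lookup via binary search on the same sorted grid."""
    if not grid[0] <= x < grid[-1]:
        raise ValueError("x outside grid")
    return bisect.bisect_right(grid, x) - 1
```

Because both versions return identical indices, a swap like this yields "identical or statistically similar results" by construction, which is the acceptance criterion the abstract cites.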
Monte Carlo analysis of TRX lattices with ENDF/B version 3 data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, J. Jr.
1975-03-01
Four TRX water-moderated lattices of slightly enriched uranium rods have been reanalyzed with consistent ENDF/B Version 3 data by means of the full-range Monte Carlo program RECAP. The following measured lattice parameters were studied: ratio of epithermal-to-thermal ²³⁸U capture, ratio of epithermal-to-thermal ²³⁵U fissions, ratio of ²³⁸U captures to ²³⁵U fissions, ratio of ²³⁸U fissions to ²³⁵U fissions, and multiplication factor. In addition to the base calculations, some studies were done to find the sensitivity of the TRX lattice parameters to selected variations of cross section data. Finally, additional experimental evidence is afforded by effective ²³⁸U capture integrals for isolated rods. Shielded capture integrals were calculated for ²³⁸U metal and oxide rods. These are compared with other measurements. (auth)
Application of Monte-Carlo Analyses for the Microwave Anisotropy Probe (MAP) Mission
NASA Technical Reports Server (NTRS)
Mesarch, Michael A.; Rohrbaugh, David; Schiff, Conrad; Bauer, Frank H. (Technical Monitor)
2001-01-01
The Microwave Anisotropy Probe (MAP) is the third launch in the National Aeronautics and Space Administration's (NASA's) Medium-class Explorers (MIDEX) program. MAP will measure, in greater detail, the cosmic microwave background radiation from an orbit about the Sun-Earth-Moon L2 Lagrangian point. Maneuvers will be required to transition MAP from its initial highly elliptical orbit to a lunar encounter, which will provide the remaining energy to send MAP out to a lissajous orbit about L2. Monte-Carlo analysis methods were used to evaluate the potential maneuver error sources and determine their effect on the fixed MAP propellant budget. This paper discusses the results of the analyses for three separate phases of the MAP mission: recovering from launch vehicle errors, responding to phasing loop maneuver errors, and evaluating the effect of maneuver execution errors and orbit determination errors on stationkeeping maneuvers at L2.
Markov Chain Monte Carlo from Lagrangian Dynamics.
Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark
2015-04-01
Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper.
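For readers unfamiliar with the baseline algorithm, a minimal HMC sampler with a constant (identity) mass matrix is sketched below; with a constant metric the leapfrog integrator is fully explicit, and it is precisely RHMC's position-dependent metric that forces the implicit fixed-point iterations the paper works to avoid. All names and tuning values are illustrative, and this is plain HMC, not the authors' Lagrangian method:

```python
import math
import random

def hmc(logp, grad, x0=0.0, eps=0.2, n_leap=15, n_samples=4000, seed=0):
    # plain 1-D HMC with identity mass matrix: the leapfrog update below is
    # fully explicit; RHMC's position-dependent metric is what turns these
    # updates into implicit fixed-point equations
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                 # resample momentum
        xn, pn = x, p
        pn += 0.5 * eps * grad(xn)              # opening half-step in momentum
        for _ in range(n_leap - 1):
            xn += eps * pn
            pn += eps * grad(xn)
        xn += eps * pn
        pn += 0.5 * eps * grad(xn)              # closing half-step
        h_old = -logp(x) + 0.5 * p * p          # Hamiltonian = U + K
        h_new = -logp(xn) + 0.5 * pn * pn
        if math.log(max(rng.random(), 1e-300)) < h_old - h_new:
            x = xn                              # Metropolis accept
        out.append(x)
    return out

# sample a standard normal: logp(x) = -x^2/2, grad(x) = -x
samples = hmc(lambda x: -0.5 * x * x, lambda x: -x)
```

On this Gaussian target the accept/reject step corrects the small leapfrog energy error, so the sample mean and variance match the target.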
Radiation Modeling with Direct Simulation Monte Carlo
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique, and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.
NASA Astrophysics Data System (ADS)
Gweon, Gey-Hong; Lee, Hee-Sun; Dorsey, Chad; Tinker, Robert; Finzer, William; Damelin, Daniel
2015-03-01
In tracking student learning in on-line learning systems, the Bayesian knowledge tracing (BKT) model is a popular model. However, the model has well-known problems, such as the identifiability problem and the empirical degeneracy problem. Understanding of these problems remains unclear, and solutions to them remain subjective. Here, we analyze the log data from an online physics learning program with our new model, a Monte Carlo BKT model. With our new approach, we are able to perform a completely unbiased analysis, which can then be used for classifying student learning patterns and performances. Furthermore, a theoretical analysis of the BKT model and our computational work shed new light on the nature of the aforementioned problems. This material is based upon work supported by the National Science Foundation under Grants REC-1147621 and REC-1435470.
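The quantity at the heart of any such analysis is the BKT likelihood of an observed response sequence. The sketch below implements the forward pass of the standard two-state BKT hidden Markov model (the function name, parameter names and toy sequence are illustrative; this is not the authors' Monte Carlo code):

```python
import math

def bkt_loglik(responses, p_init, p_learn, p_slip, p_guess):
    # forward pass of the standard two-state BKT hidden Markov model:
    # latent state "skill known", observations correct (1) / incorrect (0)
    p_known, log_lik = p_init, 0.0
    for r in responses:
        p_correct = p_known * (1 - p_slip) + (1 - p_known) * p_guess
        p_obs = p_correct if r else 1.0 - p_correct
        log_lik += math.log(max(p_obs, 1e-12))
        if r:   # Bayes update of the latent state given the observation
            p_known = p_known * (1 - p_slip) / max(p_correct, 1e-12)
        else:
            p_known = p_known * p_slip / max(1.0 - p_correct, 1e-12)
        p_known += (1 - p_known) * p_learn      # learning transition
    return log_lik
```

A Monte Carlo treatment can then draw many (p_init, p_learn, p_slip, p_guess) tuples and weight each by this likelihood; the identifiability problem shows up as well-separated parameter sets achieving nearly equal log-likelihood.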
Color-singlet production at NNLO in MCFM
Boughezal, Radja; Campbell, John M.; Ellis, R. Keith; ...
2016-12-30
We present the implementation of several color-singlet final-state processes at Next-to-Next-to-Leading Order (NNLO) accuracy in QCD in the publicly available parton-level Monte Carlo program MCFM. Specifically, we discuss the processes $pp \rightarrow H$, $pp \rightarrow Z$, $pp \rightarrow W$, $pp \rightarrow HZ$, $pp \rightarrow HW$ and $pp \rightarrow \gamma\gamma$. Decays of the unstable bosons are fully included, resulting in a flexible, fully differential Monte Carlo code. The NNLO corrections have been calculated using the non-local $N$-jettiness subtraction approach. Special attention is given to the numerical aspects of running MCFM for these processes at this order, in particular the systematic uncertainties due to the power corrections induced by the $N$-jettiness regularization scheme and the evaluation time needed to run the hybrid OpenMP/MPI version of MCFM at NNLO on multi-processor systems.
Portable multi-node LQCD Monte Carlo simulations using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Sanfilippo, Francesco; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
This paper describes a state-of-the-art parallel Lattice QCD Monte Carlo code for staggered fermions, purposely designed to be portable across different computer architectures, including GPUs and commodity CPUs. Portability is achieved using the OpenACC parallel programming model, used to develop a code that can be compiled for several processor architectures. The paper focuses on parallelization across multiple computing nodes, using OpenACC to manage parallelism within a node and OpenMPI to manage parallelism among nodes. We first discuss the strategies available to maximize performance, then describe selected relevant details of the code, and finally measure the performance and scaling behavior that we are able to achieve. The work focuses mainly on GPUs, which offer a significantly higher level of performance for this application, but also compares with results measured on other processors.
Zoller, Christian Johannes; Hohmann, Ansgar; Foschum, Florian; Geiger, Simeon; Geiger, Martin; Ertl, Thomas Peter; Kienle, Alwin
2018-06-01
A GPU-based Monte Carlo software (MCtet) was developed to calculate the light propagation in arbitrarily shaped objects, like a human tooth, represented by a tetrahedral mesh. A unique feature of MCtet is a concept to realize different kinds of light-sources illuminating the complex-shaped surface of an object, for which no preprocessing step is needed. With this concept, it is also possible to consider photons leaving a turbid media and reentering again in case of a concave object. The correct implementation was shown by comparison with five other Monte Carlo software packages. A hundredfold acceleration compared with central processing units-based programs was found. MCtet can simulate anisotropic light propagation, e.g., by accounting for scattering at cylindrical structures. The important influence of the anisotropic light propagation, caused, e.g., by the tubules in human dentin, is shown for the transmission spectrum through a tooth. It was found that the sensitivity to a change in the oxygen saturation inside the pulp for transmission spectra is much larger if the tubules are considered. Another "light guiding" effect based on a combination of a low scattering and a high refractive index in enamel is described. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
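The core of any such photon-transport code is exponential free-path sampling. As a self-contained sanity check of the kind used when cross-comparing Monte Carlo packages, the toy below (absorption only, no scattering, mesh or tubules; all names are illustrative) recovers the Beer-Lambert transmission of a slab:

```python
import math
import random

def slab_transmission(mu_a, thickness, n_photons=200000, seed=7):
    # absorption-only toy: sample an exponential free path for each photon
    # and count those whose first interaction lies beyond the slab
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        step = -math.log(1.0 - rng.random()) / mu_a   # sampled free path
        if step >= thickness:
            transmitted += 1
    return transmitted / n_photons

t_mc = slab_transmission(mu_a=1.5, thickness=1.0)  # Beer-Lambert: exp(-1.5)
```

With an absorption coefficient of 1.5 per unit thickness, the Monte Carlo estimate converges to exp(-1.5) ≈ 0.223, the analytic Beer-Lambert value.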
Analytic boosted boson discrimination
Larkoski, Andrew J.; Moult, Ian; Neill, Duff
2016-05-20
Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all-orders factorization theorem, we enable a systematically improvable description and allow for precision comparisons between data, Monte Carlo, and first-principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions, which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets and informs their usage in substructure analyses. Our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.
MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhold, M.E.; Baker, M.C.
1999-07-25
The development of neutron detectors makes extensive use of predictions of detector response through Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP-Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC), as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC), are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.
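The pulse-train analysis that MCNP-REN feeds into a shift register can be mimicked in software. The toy below (hypothetical names and a hand-picked pulse list, not the TAP implementation) counts, for every trigger pulse, the pulses falling in a gate that opens after a fixed predelay:

```python
import bisect

def coincidence_counts(pulse_times, predelay, gate):
    # software shift-register gate: for each trigger pulse at time t, count
    # the pulses arriving in the window (t + predelay, t + predelay + gate]
    times = sorted(pulse_times)
    total = 0
    for t in times:
        lo = bisect.bisect_right(times, t + predelay)
        hi = bisect.bisect_right(times, t + predelay + gate)
        total += hi - lo
    return total

# a correlated pair at 1.0 and 1.5 after the trigger at 0.0, plus one stray
pairs = coincidence_counts([0.0, 1.0, 1.5, 10.0], predelay=0.5, gate=1.0)
```

Running this gate over a simulated pulse stream, rather than over point-model formulas, is exactly the step that lets a code like MCNP-REN sidestep the point-model assumptions.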
LACIE performance predictor final operational capability program description, volume 2
NASA Technical Reports Server (NTRS)
1976-01-01
Given the swath table files, the segment set for one country and cloud cover data, the SAGE program determines how many times and under what conditions each segment is accessed by satellites. The program writes a record for each segment on a data file which contains the pertinent acquisition data. The weather data file can also be generated from a NASA supplied tape. The Segment Acquisition Selector Program (SACS) selects data from the segment reference file based upon data input manually and from a crop window file. It writes the extracted data to a data acquisition file and prints two summary reports. The POUT program reads from associated LACIE files and produces printed reports. The major types of reports that can be produced are: (1) Substrate Reference Data Reports, (2) Population Mean, Standard Deviation and Histogram Reports, (3) Histograms of Monte Carlo Statistics Reports, and (4) Frequency of Sample Segment Acquisitions Reports.
Factors associated with quality of life in elderly undertaking literacy programs
dos Santos, Bruna Rodrigues; Pavarini, Sofia Cristina Iost; Brigola, Allan Gustavo; Orlandi, Fabiana de Souza; Inouye, Keika
2014-01-01
Increased life expectancy has led to a significant number of elderly enrolling in Youth and Adult Education (YAE) programs. These individuals leave behind inactivity and the negative aspects of aging in search of opportunities for social inclusion. Objective To evaluate the influence of sociodemographic factors and depressive and cognitive symptoms on the quality of life (QL) of elderly attending the YAE of São Carlos city in São Paulo state. Methods A descriptive and quantitative study approved by the Research Ethics Committee of São Carlos Federal University was conducted. The sample comprised all elderly undertaking the YAE literacy program in 2012. The instruments used were the Mini-Mental State Examination (MMSE), Geriatric Depression Scale (GDS), WHOQOL-bref and WHOQOL-old, and a sociodemographic instrument. Results We interviewed 23 elderly, predominantly females (91.3%) in the early stages of old age (69.6%). The number of years of YAE study showed no correlation with cognition scores obtained on the MMSE or with QL domains. However, scores on the GDS had a moderate inverse relationship with total scores for the Physical (p<0.01), Sensory Functioning (p<0.05), Independence (p<0.01), Past, Present and Future Activities (p<0.05), Social Participation (p<0.01), and Intimacy (p<0.05) QL domains, and a strong inverse relationship with the Social Relationships QL domain (p<0.01). Scores attained on the MMSE showed a moderate and direct relationship with total scores on the Independence QL domain (p=0.001). Conclusion Elderly on literacy programs have average quality of life scores. Several QL domains are influenced by depressive and cognitive symptoms. PMID:29213899
Convolution-based estimation of organ dose in tube current modulated CT
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan
2016-05-01
Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_Organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_{organ,convolution} values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_{organ,convolution} by the organ dose coefficients (h_Organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using a Monte Carlo program with the TCM profiles explicitly modeled.
The discrepancy between the estimated organ dose and the dose simulated using the TCM Monte Carlo program was quantified. We further compared the convolution-based organ dose estimation method with two other strategies using different approaches to quantifying the irradiation field. The proposed convolution-based estimation method showed good agreement with the organ dose simulated using the TCM Monte Carlo simulation. The average percentage error (normalized by CTDIvol) was generally within 10% across all organs and modulation profiles, except for organs located in the pelvic and shoulder regions. This study developed an improved method that accurately quantifies the irradiation field under TCM scans. The results suggest that organ dose could be estimated in real time, both prospectively (with the localizer information only) and retrospectively (with acquired CT data).
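The convolution step described above reduces, in one dimension, to an overlap between the organ's longitudinal occupancy and the modulated dose profile. The sketch below (toy profiles and a made-up dose coefficient, purely illustrative) shows the two-factor structure organ dose = h_Organ × (CTDIvol)_{organ,convolution}:

```python
def organ_ctdi_conv(organ_profile, dose_profile):
    # regional dose metric (CTDIvol)_{organ,convolution}: overlap of the
    # organ's z-axis occupancy with the modulated dose profile, normalised
    # by the organ's total occupancy
    overlap = sum(o * d for o, d in zip(organ_profile, dose_profile))
    return overlap / sum(organ_profile)

def organ_dose(h_organ, organ_profile, dose_profile):
    # organ dose = size-dependent coefficient * regional dose metric
    return h_organ * organ_ctdi_conv(organ_profile, dose_profile)

# toy z-axis of four slices: the organ occupies the middle two slices of a
# modulated scan, so only the dose delivered there contributes
dose = organ_dose(h_organ=1.2,
                  organ_profile=[0.0, 1.0, 1.0, 0.0],
                  dose_profile=[2.0, 3.0, 5.0, 2.0])
```

Because the dose coefficients are precomputed against patient size, only the cheap overlap sum depends on the individual scan, which is what makes real-time estimation feasible.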
Evolution of a minimal parallel programming model
Lusk, Ewing; Butler, Ralph; Pieper, Steven C.
2017-04-30
Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today's (and tomorrow's) largest supercomputers; and we illustrate the use of ADLB with a Green's function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality, and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
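The essence of self-scheduled task parallelism is that idle workers pull their next task themselves rather than receiving a static assignment. A miniature shared-memory analogue of ADLB's model (threads and a queue standing in for ADLB's distributed servers; all names are illustrative) looks like this:

```python
import queue
import threading

def self_scheduled(tasks, worker_fn, n_workers=4):
    # ADLB in miniature: every worker pulls its next task the moment it
    # finishes the previous one, so uneven task costs balance automatically
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()
            except queue.Empty:
                return
            r = worker_fn(t)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

out = self_scheduled(range(100), lambda x: x * x)
```

Note that no worker is told in advance which tasks it will run; that is the "minimal" contract, put/get of tasks, that the article argues can be scaled to full supercomputers.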
SPX: The Tenth International Conference on Stochastic Programming
2004-10-01
On structuring energy contract portfolios in competitive markets. Antonio Alonso-Ayuso, Universidad Rey Juan Carlos. (p. 28) 2. Mean-risk optimization ... ThA 8:00-9:30, Ballroom South: Portfolio Optimization. Chair: Gerd Infanger, Stanford University. 1. The impact of serial correlation of returns on ... the L-shaped method is to approximate the non-linear penalty term in the objective by a linear one. We use the implicit LX
Update 0.2 to "pysimm: A python package for simulation of molecular systems"
NASA Astrophysics Data System (ADS)
Demidov, Alexander G.; Fortunato, Michael E.; Colina, Coray M.
2018-01-01
An update to the pysimm Python molecular simulation API is presented. A major part of the update is the implementation of a new interface with CASSANDRA, a modern, versatile Monte Carlo molecular simulation program. Several significant improvements in the LAMMPS communication module that allow better and more versatile simulation setup are reported as well. An example of an application implementing iterative CASSANDRA-LAMMPS interaction is illustrated.
Applications of Massive Mathematical Computations
1990-04-01
particles from the first principles of QCD. This problem is under intensive numerical study using special purpose parallel supercomputers in several places around the world. The method used here is Monte Carlo integration on fixed 3-D plus time lattices. Reliable results are still years ... mathematical and theoretical physics, but its most promising applications are in the numerical realization of QCD computations. Our programs for the solution
Remote sensing on Indian and public lands
NASA Technical Reports Server (NTRS)
Torbert, G. B.; Woll, A. M.
1972-01-01
The use of remote sensing techniques by the Bureaus of Indian Affairs and Land Management in planning resource problems, making decisions, writing environmental impact statements, and monitoring their respective programs is investigated. For Indian affairs, data cover the Papago, Fort Apache, San Carlos, and South Dakota Reservations. For the Land Management Office, data cover cadastral surveys, California desert study, range watersheds, and efforts to establish a natural resources information system.
Household water use and conservation models using Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.
2013-10-01
The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. The model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
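The mechanistic simulation approach can be illustrated with a stripped-down household model: each end use is the product of randomly sampled frequency and intensity parameters. Every distribution below is invented for illustration; these are not the calibrated values from the study:

```python
import random

def simulate_household(rng):
    # one Monte Carlo draw of daily indoor use (litres/day); every
    # distribution here is a hypothetical placeholder
    flushes = max(rng.normalvariate(5.0, 1.5), 0.0)       # flushes/day
    flush_vol = max(rng.normalvariate(9.0, 2.0), 0.0)     # L/flush
    shower_min = max(rng.normalvariate(6.5, 2.0), 0.0)    # min/day
    shower_rate = max(rng.normalvariate(7.0, 1.5), 0.0)   # L/min
    washer_loads = max(rng.normalvariate(0.3, 0.1), 0.0)  # loads/day
    return (flushes * flush_vol + shower_min * shower_rate
            + washer_loads * 100.0)                       # 100 L/load washer

rng = random.Random(42)
daily = [simulate_household(rng) for _ in range(10000)]
mean_use = sum(daily) / len(daily)
```

Aggregating many such draws yields the neighborhood-scale distribution of use against which metered data can be compared, and per-end-use terms make rebate scenarios (e.g., a lower L/load washer) easy to test.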
Household water use and conservation models using Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.
2013-04-01
The increased availability of water end use measurement studies allows for more mechanistic and detailed approaches to estimating household water demand and conservation potential. This study uses probability distributions for parameters affecting water use, estimated from end use studies and randomly sampled in Monte Carlo iterations, to simulate water use in a single-family residential neighborhood. The model represents existing conditions and is calibrated to metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
NASA Astrophysics Data System (ADS)
Woei Leow, Shin; Corrado, Carley; Osborn, Melissa; Isaacson, Michael; Alers, Glenn; Carter, Sue A.
2013-06-01
Luminescent solar concentrators (LSCs) collect ambient light from a broad range of angles and concentrate the captured light onto photovoltaic (PV) cells. LSCs with front-facing cells collect direct and indirect sunlight, ensuring a gain factor greater than one. The flexible placement and percentage coverage of PV cells on the LSC panel allow layout adjustments to be made in order to balance re-absorption losses against the level of light concentration desired. A weighted Monte Carlo ray tracing program was developed to study the transport of photons and the loss mechanisms in the LSC, to aid in design optimization. The program imports measured absorption/emission spectra of an organic luminescent dye (LR305), the transmission coefficient, and the refractive index of acrylic as parameters that describe the system. Simulations suggest that for LR305, 8-10 cm of luminescent material surrounding the PV cell yields the highest increase in power gain per unit area of LSC added, thereby determining the ideal spacing between PV cells in the panel. For rectangular PV cells, results indicate that for each centimeter of PV cell width, an additional 0.15 mm of waveguide thickness is required to efficiently transport the photons collected by the LSC to the PV cell with minimal loss.
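A weighted ray trace of this kind can be caricatured in one dimension. In the toy below (all coefficients invented; a real LSC simulation tracks 3-D directions, spectra and total internal reflection), a photon emitted some distance from the PV cell is repeatedly re-absorbed on its way, and each re-absorption multiplies its statistical weight by the quantum yield and the trapping efficiency:

```python
import math
import random

def collected_fraction(distance_cm, alpha=0.12, qy=0.95, trap=0.75,
                       n_photons=20000, seed=3):
    # weighted 1-D ray trace: dye re-absorbs with probability alpha per cm;
    # each re-absorption/re-emission costs the quantum yield qy and the
    # escape-cone trapping efficiency trap, applied to the photon weight
    rng = random.Random(seed)
    total_w = 0.0
    for _ in range(n_photons):
        w, x = 1.0, distance_cm
        while True:
            path = -math.log(1.0 - rng.random()) / alpha  # next re-absorption
            if path >= x:
                break          # photon reaches the cell edge
            x -= path
            w *= qy * trap     # re-emission losses
        total_w += w
    return total_w / n_photons

f_near = collected_fraction(2.0)
f_far = collected_fraction(10.0)
```

Consistent with the abstract's conclusion, the collected fraction decays with distance from the cell; the trade-off between that decay and added collection area is what sets the optimal cell spacing.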
Use of Existing CAD Models for Radiation Shielding Analysis
NASA Technical Reports Server (NTRS)
Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.
2015-01-01
The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from an analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time-consuming and prone to error. The Direct Accelerated Geometry-United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes, to minimize problem setup, computing time and analysis error.
Progress towards an effective model for FeSe from high-accuracy first-principles quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Busemeyer, Brian; Wagner, Lucas K.
While the origin of superconductivity in the iron-based materials is still controversial, the proximity of the superconductivity to magnetic order suggests that magnetism may be important. Our previous work has suggested that first-principles Diffusion Monte Carlo (FN-DMC) can capture magnetic properties of iron-based superconductors that density functional theory (DFT) misses, but which are consistent with experiment. We report on the progress of efforts to find simple effective models consistent with the FN-DMC description of the low-lying Hilbert space of the iron-based superconductor FeSe. We utilize a procedure outlined by Changlani et al. [1], which both produces parameter values and indicates whether the model is a good description of the first-principles Hamiltonian. Using this procedure, we evaluate several models of the magnetic part of the Hilbert space found in the literature, as well as the Hubbard model and a spin-fermion model. We discuss which interaction parameters are important for this material, and how the material-specific properties give rise to these interactions. This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award No. FG02-12ER46875, and by the NSF Graduate Research Fellowship Program.
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processor Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent, making it tedious and prone to error to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance-portability can be reached.
Toward computer simulation of high-LET in vitro survival curves.
Heuskin, A-C; Michiels, C; Lucas, S
2013-09-21
We developed a Monte Carlo-based computer program called MCSC (Monte Carlo Survival Curve) able to predict the survival fraction of cells irradiated in vitro with a broad beam of high linear energy transfer particles. Three types of cell responses are studied: the usual high dose response, the bystander effect and the low-dose hypersensitivity (HRS). The program models the broad beam irradiation and double strand break distribution following Poisson statistics. The progression of cells through the cell cycle is taken into account while the repair takes place. Input parameters are experimentally determined for A549 lung carcinoma cells irradiated with 10 and 20 keV µm(-1) protons, 115 keV µm(-1) alpha particles and for EAhy926 endothelial cells exposed to 115 keV µm(-1) alpha particles. Results of simulations are presented and compared with experimental survival curves obtained for A549 and EAhy926 cells. Results are in good agreement with experimental data for both cell lines and all irradiation protocols. The benefits of MCSC are several: the gain of time that would have been spent performing time-consuming clonogenic assays, the capacity to estimate the survival fraction of cell lines not forming colonies and possibly the evaluation of radiosensitivity parameters of given individuals.
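The sampling core of such a simulation is compact enough to sketch. The toy model below is not the MCSC code: the DSB yield per gray and the lethal-misrepair probability are invented illustrative parameters, and cell-cycle progression, repair kinetics and bystander effects are all omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def survival_fraction(dose_gy, dsb_per_gy=30.0, p_lethal=0.05, n_cells=100_000):
    """Toy Monte Carlo survival estimate: each cell draws a Poisson
    number of double strand breaks (DSBs) proportional to dose; each
    DSB is independently misrepaired (lethal) with probability p_lethal."""
    dsbs = rng.poisson(dsb_per_gy * dose_gy, size=n_cells)
    # P(cell survives | k DSBs) = (1 - p_lethal)^k
    survives = rng.random(n_cells) < (1.0 - p_lethal) ** dsbs
    return survives.mean()

for d in (0.5, 1.0, 2.0, 4.0):
    print(f"{d:4.1f} Gy  SF = {survival_fraction(d):.3f}")
```

With these assumptions the expected curve is a pure exponential, SF(d) = exp(-λ p d); the broad shoulder of real survival curves comes from the repair and cycle effects the real code models.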
Aerocapture Performance Analysis for a Neptune-Triton Exploration Mission
NASA Technical Reports Server (NTRS)
Starr, Brett R.; Westhelle, Carlos H.; Masciarelli, James P.
2004-01-01
A systems analysis has been conducted for a Neptune-Triton Exploration Mission in which aerocapture is used to capture a spacecraft at Neptune. Aerocapture uses aerodynamic drag instead of propulsion to decelerate from the interplanetary approach trajectory to a captured orbit during a single pass through the atmosphere. After capture, propulsion is used to move the spacecraft from the initial captured orbit to the desired science orbit. A preliminary assessment identified that a spacecraft with a lift-to-drag ratio of 0.8 was required for aerocapture. Performance analyses of the 0.8 L/D vehicle were performed using a high fidelity flight simulation within a Monte Carlo executive to determine mission success statistics. The simulation was the Program to Optimize Simulated Trajectories (POST) modified to include Neptune-specific atmospheric and planet models, spacecraft aerodynamic characteristics, and interplanetary trajectory models. To these were added autonomous guidance and pseudo flight controller models. The Monte Carlo analyses incorporated approach trajectory delivery errors, aerodynamic characteristics uncertainties, and atmospheric density variations. Monte Carlo analyses were performed for a reference set of uncertainties and sets of uncertainties modified to produce increased and reduced atmospheric variability. For the reference uncertainties, the 0.8 L/D flat-bottom ellipsled vehicle achieves 100% successful capture and has a 99.87% probability of attaining the science orbit with a 360 m/s ΔV budget for apoapsis and periapsis adjustment. Monte Carlo analyses were also performed for a guidance system that modulates both bank angle and angle of attack with the reference set of uncertainties. An alpha and bank modulation guidance system reduces the 99.87-percentile ΔV by 173 m/s (48%), to 187 m/s, for the reference set of uncertainties.
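A Monte Carlo dispersion analysis of this kind boils down to sampling the delivery, aerodynamic and atmospheric uncertainties and scoring each trial against the propellant budget. The sketch below uses a crude algebraic surrogate in place of the POST trajectory simulation, and every distribution and coefficient in it is an invented illustration, not a Neptune mission value.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_dispersions(n=20_000):
    """Toy Monte Carlo dispersion analysis (illustrative numbers only)."""
    gamma_err = rng.normal(0.0, 0.2, n)       # entry flight-path-angle error, deg
    rho_scale = rng.lognormal(0.0, 0.15, n)   # atmospheric density scale factor
    ld_err = rng.normal(0.0, 0.03, n)         # L/D uncertainty
    # crude surrogate for the trajectory simulation: each dispersion
    # source inflates the post-aerocapture delta-V cost
    dv = 80 + 400*gamma_err**2 + 600*(rho_scale - 1)**2 + 2000*ld_err**2
    captured = dv < 360                        # fits the 360 m/s budget
    return captured.mean(), np.percentile(dv, 99.87)

rate, dv9987 = run_dispersions()
print(f"success rate {100*rate:.2f}%, 99.87-percentile dV {dv9987:.0f} m/s")
```

The real analysis replaces the one-line surrogate with a full six-degree-of-freedom trajectory integration per trial; the statistics-gathering shell is the same.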
PHANTOM: A Monte Carlo event generator for six parton final states at high energy colliders
NASA Astrophysics Data System (ADS)
Ballestrero, Alessandro; Belhouari, Aissa; Bevilacqua, Giuseppe; Kashkan, Vladimir; Maina, Ezio
2009-03-01
PHANTOM is a tree level Monte Carlo for six parton final states at proton-proton, proton-antiproton and electron-positron colliders at O(α_EM^6) and O(α_EM^4 α_S^2), including possible interferences between the two sets of diagrams. This comprises all purely electroweak contributions as well as all contributions with one virtual or two external gluons. It can generate unweighted events for any set of processes and it is interfaced to parton shower and hadronization packages via the latest Les Houches Accord protocol. It can be used to analyze the physics of boson-boson scattering, Higgs boson production in boson-boson fusion, tt¯ and three boson production. Program summary Program title: PHANTOM (V. 1.0) Catalogue identifier: AECE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 175 787 No. of bytes in distributed program, including test data, etc.: 965 898 Distribution format: tar.gz Programming language: Fortran 77 Computer: Any with a UNIX, LINUX compatible Fortran compiler Operating system: UNIX, LINUX RAM: 500 MB Classification: 11.1 External routines: LHAPDF (Les Houches Accord PDF Interface, http://projects.hepforge.org/lhapdf/), CIRCE (beamstrahlung for the ee ILC collider). Nature of problem: Six fermion final state processes have become important with the increase of collider energies and are essential for the study of top, Higgs and electroweak symmetry breaking physics at high energy colliders. Since thousands of Feynman diagrams contribute in a single process and events corresponding to hundreds of different final states need to be generated, a fast and stable calculation is needed.
Solution method: PHANTOM is a tree level Monte Carlo for six parton final states at proton-proton, proton-antiproton and electron-positron colliders. It computes all amplitudes at O(α_EM^6) and O(α_EM^4 α_S^2), including possible interferences between the two sets of diagrams. The matrix elements are computed with the helicity formalism implemented in the program PHACT [1]. The integration makes use of an iterative-adaptive multichannel method which, relying on adaptivity, allows the use of only a few channels per process. Unweighted event generation can be performed for any set of processes and the program is interfaced to parton shower and hadronization packages via the latest Les Houches Accord protocol. Restrictions: All Feynman diagrams are computed at LO. Unusual features: PHANTOM is written in Fortran 77 but it makes use of structures. The g77 compiler cannot compile it as it does not recognize the structures. The Intel, Portland Group, True64 HP Fortran 77 or Fortran 90 compilers have been tested and can be used. Running time: A few hours for a cross section integration of one process at per mille accuracy. One hour for one thousand unweighted events. References: A. Ballestrero, E. Maina, Phys. Lett. B 350 (1995) 225, hep-ph/9403244; A. Ballestrero, PHACT 1.0, Program for helicity amplitude calculations with tau matrices, hep-ph/9911318, in: B.B. Levchenko, V.I. Savrin (Eds.), Proceedings of the 14th International Workshop on High Energy Physics and Quantum Field Theory (QFTHEP 99), SINP MSU, Moscow, p. 303.
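The iterative-adaptive multichannel idea can be illustrated on a one-dimensional toy integrand with an integrable peak at each end of the interval: each channel flattens one peak, and the channel weights are re-tuned from the sampled second moments between iterations. The update rule below follows the spirit of the classic Kleiss-Pittau self-adapting scheme and is not PHANTOM's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    # toy integrand with integrable peaks at both ends; exact integral = 2.4
    return 1.0/np.sqrt(x) + 0.2/np.sqrt(1.0 - x)

def sample(channel, n):
    u = rng.random(n)
    return u**2 if channel == 0 else 1.0 - u**2   # pdfs 1/(2 sqrt(x)), 1/(2 sqrt(1-x))

def pdf(channel, x):
    return 0.5/np.sqrt(x) if channel == 0 else 0.5/np.sqrt(1.0 - x)

alpha = np.array([0.5, 0.5])                      # channel weights
n = 50_000
for it in range(4):                               # iterative-adaptive loop
    ch = rng.choice(2, size=n, p=alpha)
    x = np.clip(np.where(ch == 0, sample(0, n), sample(1, n)), 1e-12, 1 - 1e-12)
    g = alpha[0]*pdf(0, x) + alpha[1]*pdf(1, x)   # mixture density
    w = f(x) / g                                  # importance weights
    estimate = w.mean()
    # re-tune weights from channel-wise second moments (Kleiss-Pittau style)
    m2 = np.array([np.mean(w**2 * pdf(c, x)/g) for c in (0, 1)])
    alpha = alpha * np.sqrt(m2)
    alpha /= alpha.sum()
    print(f"iteration {it}: estimate = {estimate:.4f}")
```

Because the weights w = f/g stay bounded as long as every singularity is covered by some channel, only a few channels per process are needed, which is exactly the property the abstract credits for PHANTOM's efficiency.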
Vega roll and attitude control system algorithms trade-off study
NASA Astrophysics Data System (ADS)
Paulino, N.; Cuciniello, G.; Cruciani, I.; Corraro, F.; Spallotta, D.; Nebula, F.
2013-12-01
This paper describes the trade-off study for the selection of the most suitable algorithms for the Roll and Attitude Control System (RACS) within the FPS-A program, aimed at developing the new Flight Program Software of the VEGA Launcher. Two algorithms were analyzed: Switching Lines (SL) and Quaternion Feedback Regulation. Using a development simulation tool that models two critical flight phases (Long Coasting Phase (LCP) and Payload Release (PLR) Phase), both algorithms were assessed with Monte Carlo batch simulations for both phases. The statistical outcomes demonstrate a 100 percent success rate for Quaternion Feedback Regulation and support the choice of this method.
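At its core, a quaternion feedback regulator of the kind compared here is a PD law on the vector part of the error quaternion. The gains, the unit inertia and the single-axis rest-to-rest test below are illustrative choices, not VEGA FPS-A parameters.

```python
import numpy as np

def quat_feedback_torque(q_err, omega, kp=0.8, kd=1.6):
    """Quaternion feedback regulation as a PD law.

    q_err : error quaternion [qx, qy, qz, qw] between body and command
    omega : body angular rate (rad/s)
    """
    qv, qw = np.asarray(q_err[:3], dtype=float), float(q_err[3])
    # sign(qw) picks the short way around; -kd*omega adds rate damping
    return -kp * np.sign(qw) * qv - kd * np.asarray(omega, dtype=float)

# single-axis rest-to-rest check with unit inertia: slew from 60 degrees
theta, om, dt = np.radians(60.0), 0.0, 0.01
for _ in range(3000):
    q_err = np.array([np.sin(theta/2), 0.0, 0.0, np.cos(theta/2)])
    tau = quat_feedback_torque(q_err, [om, 0.0, 0.0])[0]
    om += tau * dt
    theta += om * dt
print(f"residual attitude error: {np.degrees(theta):.4f} deg")
```

A Monte Carlo assessment like the one in the paper would wrap this closed loop in a batch of runs with dispersed inertias, sensor noise and initial rates, then score the fraction of runs meeting the pointing requirement.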
NASA Astrophysics Data System (ADS)
Dimitroulis, Christos; Raptis, Theophanes; Raptis, Vasilios
2015-12-01
We present an application for the calculation of radial distribution functions for molecular centres of mass, based on trajectories generated by molecular simulation methods (Molecular Dynamics, Monte Carlo). When designing this application, the emphasis was placed on ease of use as well as ease of further development. In its current version, the program can read trajectories generated by the well-known DL_POLY package, but it can be easily extended to handle other formats. It is also very easy to 'hack' the program so it can compute intermolecular radial distribution functions for groups of interaction sites rather than whole molecules.
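For a cubic periodic box, the calculation described above reduces to histogramming minimum-image distances between centres of mass and normalising by the ideal-gas pair count. A minimal NumPy sketch (trajectory-file parsing, e.g. of DL_POLY HISTORY files, omitted):

```python
import numpy as np

def rdf(frames, box, r_max, n_bins=100):
    """Centre-of-mass radial distribution function g(r) for a cubic
    periodic box of side `box`.

    frames : iterable of (n_mol, 3) centre-of-mass coordinate arrays
    """
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    n_frames = 0
    for com in frames:
        n_mol = len(com)
        d = com[:, None, :] - com[None, :, :]
        d -= box * np.round(d / box)                  # minimum-image convention
        r = np.sqrt((d**2).sum(-1))[np.triu_indices(n_mol, k=1)]
        hist += np.histogram(r, bins=edges)[0]
        n_frames += 1
    rho = n_mol / box**3
    shell = 4.0/3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal_pairs = 0.5 * n_mol * rho * shell           # ideal-gas pair count per frame
    return 0.5*(edges[:-1] + edges[1:]), hist / (n_frames * ideal_pairs)
```

Restricting `com` to subsets of interaction sites rather than whole-molecule centres gives the site-group variant the abstract mentions; r_max must stay below half the box length for the minimum-image convention to hold.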
Space-shuttle interfaces/utilization. Earth Observatory Satellite system definition study (EOS)
NASA Technical Reports Server (NTRS)
1974-01-01
The economic aspects of space shuttle application to a representative Earth Observatory Satellite (EOS) operational mission in the various candidate Shuttle modes of launch, retrieval, and resupply are discussed. System maintenance of the same mission capability using a conventional launch vehicle is also considered. The studies are based on application of a sophisticated Monte Carlo mission simulation program originally developed for studies of in-space servicing of a military satellite system. The program has been modified to permit evaluation of space shuttle application to low altitude EOS missions in all three modes. The conclusions generated by the EOS system study are developed.
Final Report for DOE Grant Number DE-SC0001481
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Edison
2013-12-02
This report covers research activities, major results and publications supported by DE-SC0001481. This project was funded by the DOE OFES-NNSA HEDLP program. It was a joint research program between Rice University and the University of Texas at Austin. The physics of relativistic plasmas was investigated in the context of ultra-intense laser irradiation of high-Z solid targets. Laser experiments using the Texas Petawatt Laser were performed in the summers of 2011, 2012 and 2013. Numerical simulations of laser-plasma interactions were performed using Monte Carlo and Particle-in-Cell codes to design and support these experiments. Astrophysical applications of these results were also investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Lin, H; Gao, Y
Purpose: The dynamic bowtie filter is an innovative design capable of modulating the X-ray beam and balancing the flux in the detectors, and it opens a new path to patient-specific CT scan optimization. This study demonstrates the feasibility of performing fast Monte Carlo dose calculation for a type of dynamic bowtie filter for cone-beam CT (Liu et al., PLoS ONE 9(7), 2014) using MIC coprocessors. Methods: The dynamic bowtie filter in question consists of a highly attenuating bowtie component (HB) and a weakly attenuating bowtie (WB). The HB is filled with CeCl3 solution and its surface is defined by a transcendental equation. The WB is an elliptical cylinder filled with air and immersed in the HB. As the scanner rotates, the orientation of the WB remains fixed with respect to the static patient. In our Monte Carlo simulation, the HB was approximated by 576 boxes. The phantom was a voxelized elliptical cylinder composed of PMMA and surrounded by air (44 cm × 44 cm × 40 cm, 1000×1000×1 voxels). The dose to the PMMA phantom was tallied with 0.15% statistical uncertainty for a 100 kVp source. Two Monte Carlo codes, ARCHER and MCNP-6.1, were compared. Both used double precision. Compiler flags that may trade accuracy for speed were avoided. Results: The wall time of the simulation was 25.4 seconds by ARCHER on a 5110P MIC, 40 seconds on an X5650 CPU, and 523 seconds by the multithreaded MCNP on the same CPU. The high performance of ARCHER is attributed to the parameterized geometry and vectorization of the program hotspots. Conclusion: The dynamic bowtie filter modeled in this study is able to effectively reduce the dynamic range of the detected signals for the photon-counting detectors. With appropriate software optimization methods, the accelerator-based (MIC and GPU) Monte Carlo dose engines have shown good performance and can contribute to patient-specific CT scan optimizations.
SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, K; Chen, D. Z; Hu, X. S
Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing patterns in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition is to accumulate dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer on GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Comparing with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on the CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches.
This research was supported in part by NSF under Grant CCF-1217906, and in part by a research contract from the Sandia National Laboratories.
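The three-step buffered write pattern can be mimicked on the CPU with NumPy: events first append (voxel, dose) records to a flat buffer in streaming order, and a separate pass folds the buffer into the dose volume. The event sampling is a placeholder; only the write pattern is the point.

```python
import numpy as np

rng = np.random.default_rng(3)

def deposit_buffered(n_events, shape=(64, 64, 64)):
    """Buffer-then-accumulate dose deposition (CPU sketch of the pattern)."""
    # step 1 (GPU threads in the real code): streaming, append-only writes
    voxels = rng.integers(0, np.prod(shape), n_events)   # placeholder sampling
    doses = rng.exponential(1.0, n_events)
    # step 2 would be the DMA transfer of (voxels, doses) to host memory
    # step 3 (CPU): one scatter-accumulate pass builds the dose volume
    volume = np.zeros(np.prod(shape))
    np.add.at(volume, voxels, doses)
    return volume.reshape(shape), doses.sum()

vol, total = deposit_buffered(100_000)
print(f"volume dose {vol.sum():.1f}, buffer dose {total:.1f}")
```

Deferring the scatter to a single pass is what removes the need for per-event atomic updates; on the real system, steps for different ray streams additionally overlap via pipelining.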
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger
2008-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A inflight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael
2007-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
NASA Astrophysics Data System (ADS)
Mei, Donghai; Ge, Qingfeng; Neurock, Matthew; Kieken, Laurent; Lerou, Jan
First-principles-based kinetic Monte Carlo simulation was used to track the elementary surface transformations involved in the catalytic decomposition of NO over Pt(100) and Rh(100) surfaces under lean-burn operating conditions. Density functional theory (DFT) calculations were carried out to establish the structure and energetics for all reactants, intermediates and products over Pt(100) and Rh(100). Lateral interactions which arise from neighbouring adsorbates were calculated by examining changes in the binding energies as a function of coverage and different coadsorbed configurations. These data were fitted to a bond order conservation (BOC) model which is subsequently used to establish the effects of coverage within the simulation. The intrinsic activation barriers for all the elementary reaction steps in the proposed mechanism of NO reduction over Pt(100) were calculated by using DFT. These values are corrected for coverage effects by using the parametrized BOC model internally within the simulation. This enables a site-explicit kinetic Monte Carlo simulation that can follow the kinetics of NO decomposition over Pt(100) and Rh(100) in the presence of excess oxygen. The simulations are used here to model various experimental protocols including temperature programmed desorption as well as batch catalytic kinetics. The simulation results for the temperature programmed desorption and decomposition of NO over Pt(100) and Rh(100) under vacuum conditions were found to be in very good agreement with experimental results. NO decomposition is strongly tied to the number of sites that remain vacant over time. Experimental results show that Pt is active in the catalytic conversion of NO into N2 and NO2 under lean-burn conditions. The simulated reaction orders for NO and O2 were found to be +0.9 and -0.4 at 723 K, respectively. The simulation also indicates that there is no activity over Rh(100) since the surface becomes poisoned by oxygen.
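The engine underneath such a simulation is the standard rejection-free (BKL/Gillespie) kinetic Monte Carlo step: pick an event with probability proportional to its rate and advance the clock by an exponential waiting time. The prefactor and barriers below are invented stand-ins, not the paper's DFT/BOC values.

```python
import numpy as np

rng = np.random.default_rng(4)

def kmc_step(rates, t):
    """One rejection-free kinetic Monte Carlo (BKL/Gillespie) step."""
    ktot = rates.sum()
    cum = np.cumsum(rates)
    event = np.searchsorted(cum, rng.random() * ktot)   # event i w.p. k_i/ktot
    t += -np.log(1.0 - rng.random()) / ktot             # exponential waiting time
    return event, t

# three competing surface events with invented (e.g. coverage-corrected) barriers
Ea = np.array([0.6, 0.8, 1.2])                 # eV
kB_T = 8.617e-5 * 723                          # eV at 723 K
rates = 1e13 * np.exp(-Ea / kB_T)              # Arrhenius rates, nu = 1e13/s
counts, t = np.zeros(3), 0.0
for _ in range(10_000):
    e, t = kmc_step(rates, t)
    counts[e] += 1
print("event counts:", counts, " elapsed time:", t)
```

In a site-explicit simulation the rate list is rebuilt after every step, with each barrier corrected for the current local coverage via the parametrized BOC model.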
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamanaka, M; Takashina, M; Kurosu, K
Purpose: In this study we present a Monte Carlo-based evaluation of the shielding effect for secondary neutrons from the patient collimator, and for secondary photons emitted in the process of neutron shielding, by a combination of moderator and boron-10 placed around the patient collimator. Methods: The PHITS Monte Carlo radiation transport code was used to simulate the proton beam (Ep = 64 to 93 MeV) from a proton therapy facility. In this study, moderators (water, polyethylene and paraffin) and boron (pure {sup 10}B) were placed around the patient collimator in this order. The ratio of moderator to boron thickness was varied while fixing the total thickness at 3 cm. The secondary neutron and photon doses were evaluated as the ambient dose equivalent per absorbed dose [H*(10)/D]. Results: The secondary neutrons are shielded more effectively by a combination of moderators and boron. The most effective combination for shielding neutrons is 2.4 cm of polyethylene with 0.6 cm of boron, giving a maximum reduction rate of 47.3%. The H*(10)/D of secondary photons in the control case is less than that of neutrons by two orders of magnitude, and the maximum increase of secondary photons is 1.0 µSv/Gy, for 2.8 cm of polyethylene with 0.2 cm of boron. Conclusion: The combination of moderators and boron is beneficial for shielding secondary neutrons. Both the control secondary photons and those emitted during neutron shielding are much lower than the secondary neutrons, and photons have a low RBE in comparison with neutrons; therefore the secondary photons can be ignored when shielding neutrons. This work was supported by JSPS Core-to-Core Program (No. 23003).
Moon, Hyun Ho; Lee, Jong Joo; Choi, Sang Yule; Cha, Jae Sang; Kang, Jang Mook; Kim, Jong Tae; Shin, Myong Chul
2011-01-01
Recently there have been many studies of power systems with a focus on "New and Renewable Energy" as part of the "New Growth Engine Industry" promoted by the Korean government. "New and Renewable Energy"-especially focused on wind energy, solar energy and fuel cells that will replace conventional fossil fuels-is a part of the Power-IT Sector, which is the basis of the SmartGrid. A SmartGrid is a highly efficient, intelligent electricity network that allows interactivity (two-way communication) between suppliers and consumers by utilizing information technology in electricity production, transmission, distribution and consumption. The New and Renewable Energy Program has been driven with a goal to develop and spread, through intensive studies by public or private institutions, new and renewable energy which, unlike conventional systems, has been operated through connections with various kinds of distributed power generation systems. Considerable research on smart grids has been pursued in the United States and Europe. In the United States, a variety of research activities on the smart power grid have been conducted within EPRI's IntelliGrid research program. The European Union (EU), which represents Europe's Smart Grid policy, has focused on an expansion of distributed (decentralized) generation and power trade between countries with improved environmental protection. Thus, there is current emphasis on a need for studies that assess the economic efficiency of such distributed generation systems. In this paper, based on the cost of distributed power generation capacity, the best obtainable profits were calculated by a Monte Carlo simulation. Monte Carlo simulations, which rely on repeated random sampling to compute their results, take into account the cost of electricity production, daily loads and the cost of sales, and generate results faster than closed-form mathematical computation.
In addition, we suggest an optimal design that considers the distribution losses associated with power distribution systems, with a focus on sensing and distributed power generation.
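The profit calculation described above can be sketched as repeated random sampling of daily output and price. All distributions and cost figures below are hypothetical placeholders, not the paper's Korean-market inputs.

```python
import numpy as np

rng = np.random.default_rng(5)

def profit_distribution(capacity_kw, n_days=10_000):
    """Toy Monte Carlo of daily distributed-generation profit."""
    capacity_factor = rng.beta(4, 6, n_days)     # output variability (hypothetical)
    price = rng.normal(0.12, 0.02, n_days)       # $/kWh sale price (hypothetical)
    cost = 0.03                                  # $/kWh operating cost (hypothetical)
    energy_kwh = capacity_kw * 24 * capacity_factor
    return energy_kwh * (price - cost)

daily = profit_distribution(500.0)
print(f"mean daily profit ${daily.mean():,.0f}, "
      f"5th percentile ${np.percentile(daily, 5):,.0f}")
```

Sampling the full distribution, rather than propagating only mean values, is what lets the method report downside percentiles alongside the expected profit.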
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lusk, Ewing; Butler, Ralph; Pieper, Steven C.
Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
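The essence of the model, workers that repeatedly pull the next unit of work from a shared pool rather than receiving a static assignment, fits in a few lines. The thread-based sketch below is a toy in the ADLB spirit, not the MPI-based ADLB library itself.

```python
import queue
import random
import threading

def self_scheduled(tasks, n_workers=4):
    """Minimal self-scheduling task pool: workers repeatedly pull the
    next task from a shared queue (no static assignment) and push
    results to a second queue."""
    work, results = queue.Queue(), queue.Queue()
    for t in tasks:
        work.put(t)

    def worker():
        while True:
            try:
                task = work.get_nowait()
            except queue.Empty:
                return                     # pool drained: worker retires
            results.put(task())            # a task is just a callable

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return [results.get() for _ in range(results.qsize())]

# e.g. embarrassingly parallel Monte Carlo batches as the task list
out = self_scheduled([lambda: sum(random.random() for _ in range(1000))
                      for _ in range(32)])
print(len(out), "task results collected")
```

The put/get pair is essentially the whole application programmer interface, which is the minimality the authors argue buys extreme scalability; ADLB replaces the in-process queue with distributed servers.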
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1977-05-01
HELIAKI is a FORTRAN computer program which simulates the optical/thermal performance of a central receiver solar thermal power plant for the dynamic conversion of solar-generated heat to electricity. The solar power plant which this program simulates consists of a field of individual sun tracking mirror units, or heliostats, redirecting sunlight into a cavity, called the receiver, mounted atop a tower. The program calculates the power retained by that cavity receiver at any point in time, or the energy into the receiver over a year's time, using a Monte Carlo ray trace technique to solve the multiple integral equations. An artist's concept of this plant is shown.
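The ray-trace core of such a code can be reduced to a Monte Carlo estimate of the fraction of reflected rays intercepted by the receiver aperture. The sketch below lumps sun shape and mirror slope error into a single Gaussian cone, an illustrative simplification rather than HELIAKI's optical model.

```python
import numpy as np

rng = np.random.default_rng(6)

def intercept_fraction(slant_range_m, aperture_radius_m,
                       error_mrad=3.0, n_rays=200_000):
    """Monte Carlo ray-trace sketch of heliostat-to-receiver intercept.

    Rays leave the mirror with a Gaussian angular error (sun shape and
    slope error lumped into one cone) and are scored if they land
    inside a circular cavity aperture.
    """
    theta = rng.normal(0.0, error_mrad * 1e-3, (n_rays, 2))  # transverse angles, rad
    miss = slant_range_m * theta                             # landing offset, m
    hit = (miss**2).sum(axis=1) < aperture_radius_m**2
    return hit.mean()

print(f"intercept fraction = {intercept_fraction(500.0, 3.0):.3f}")
```

Summing this estimate over every heliostat, weighted by mirror area, reflectance and cosine losses, and stepping the sun position through the year gives the annual energy figure the program reports.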
ms2: A molecular simulation tool for thermodynamic properties
NASA Astrophysics Data System (ADS)
Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran
2011-11-01
This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for a fast execution on a broad range of computer architectures, spanning from single processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 has been shown for a large variety of fluids in preceding work. Program summaryProgram title:ms2 Catalogue identifier: AEJF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Special Licence supplied by the authors No. of lines in distributed program, including test data, etc.: 82 794 No. 
of bytes in distributed program, including test data, etc.: 793 705 Distribution format: tar.gz Programming language: Fortran90 Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single processor machines over PC-clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.) Operating system: Unix/Linux, Windows Has the code been vectorized or parallelized?: Yes. Message Passing Interface (MPI) protocol. Scalability: Excellent scalability up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations. RAM: ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules. Classification: 7.7, 7.9, 12 External routines: Message Passing Interface (MPI) Nature of problem: Calculation of application oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures. Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism. Restrictions: None. The system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing typically 2000 molecules or less. Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories. Additional comments: Sample makefiles for multiple operation platforms are provided. Documentation is provided with the installation package and is available at http://www.ms-2.de. Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation.
Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, calculating transport properties between six and 24 hours.
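The Green-Kubo route to transport properties mentioned above integrates an equilibrium autocorrelation function. The sketch below applies it to self-diffusion, D = (1/3) ∫<v(0)·v(t)> dt, and checks it on synthetic Langevin velocities whose exact answer is kT/γ; it is a minimal stand-in, not ms2's implementation.

```python
import numpy as np

def green_kubo_diffusion(vel, dt, n_lags):
    """Self-diffusion from the Green-Kubo relation,
    D = (1/3) * integral of <v(0).v(t)> over t.

    vel : (n_steps, n_particles, 3) equilibrium velocities
    """
    n = len(vel)
    vacf = np.array([(vel[:n - s] * vel[s:]).sum(-1).mean()
                     for s in range(n_lags)])
    integral = dt * (0.5 * vacf[0] + vacf[1:].sum())   # trapezoid rule
    return integral / 3.0, vacf

# synthetic check: Ornstein-Uhlenbeck (Langevin) velocities, exact D = kT/gamma
rng = np.random.default_rng(10)
gamma, kT, dt, nstep, npart = 2.0, 1.0, 0.01, 10_000, 30
v = np.empty((nstep, npart, 3))
v[0] = rng.normal(0.0, np.sqrt(kT), (npart, 3))
for i in range(1, nstep):
    v[i] = v[i-1]*(1 - gamma*dt) + rng.normal(0.0, np.sqrt(2*gamma*kT*dt), (npart, 3))
D, _ = green_kubo_diffusion(v, dt, n_lags=500)
print(f"D = {D:.3f} (Langevin theory: kT/gamma = {kT/gamma})")
```

The same machinery, with the velocity autocorrelation replaced by stress or heat-flux autocorrelations, yields viscosity and thermal conductivity.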
PyCCF: Python Cross Correlation Function for reverberation mapping studies
NASA Astrophysics Data System (ADS)
Sun, Mouyuan; Grier, C. J.; Peterson, B. M.
2018-05-01
PyCCF emulates a Fortran program written by B. Peterson for use with reverberation mapping. The code cross-correlates two light curves that are unevenly sampled, using linear interpolation, and measures the peak and centroid of the cross-correlation function. In addition, it is possible to run Monte Carlo iterations using flux randomization (FR) and random subset selection (RSS) to produce cross-correlation centroid distributions to estimate the uncertainties in the cross correlation results.
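The heart of the method is easy to sketch: interpolate one light curve onto the lag-shifted time stamps of the other, take the Pearson r, and average the two interpolation directions. The FR/RSS Monte Carlo then repeats this on perturbed and resampled curves; the sketch below covers only the CCF core, on synthetic data.

```python
import numpy as np

def iccf(t1, f1, t2, f2, lags):
    """Interpolated cross-correlation function (two-way average)."""
    def r(ta, fa, tb, fb, lag):
        # correlate fa(t) against fb(t + lag), interpolating series b
        m = (ta + lag >= tb[0]) & (ta + lag <= tb[-1])
        if m.sum() < 3:
            return np.nan
        return np.corrcoef(fa[m], np.interp(ta[m] + lag, tb, fb))[0, 1]
    return np.array([np.nanmean([r(t1, f1, t2, f2, L),
                                 r(t2, f2, t1, f1, -L)]) for L in lags])

# synthetic check: the second light curve is the first delayed by 5 days
rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0.0, 200.0, 150))          # uneven sampling
signal = lambda x: np.sin(2*np.pi*x/40.0)
lags = np.arange(-20.0, 20.5, 0.5)
ccf = iccf(t, signal(t), t, signal(t - 5.0), lags)
print("peak lag:", lags[np.nanargmax(ccf)], "days")
```

The centroid is then taken over lags where the CCF exceeds some fraction (commonly 0.8) of its peak, and the FR/RSS iterations build the distribution of that centroid.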
2015-06-18
Engineering Effectiveness Survey. CMU/SEI-2012-SR-009. Carnegie Mellon University. November 2012. Field, Andy. Discovering Statistics Using SPSS, 3rd ed. ... enough into the survey to begin answering questions on risk practices. All of the statistical analysis of the data will be performed using SPSS. Prior to ... probabilistically using distributions for likelihood and impact. Statistical methods like Monte Carlo can more comprehensively evaluate the cost and
QCDLoop: A comprehensive framework for one-loop scalar integrals
NASA Astrophysics Data System (ADS)
Carrazza, Stefano; Ellis, R. Keith; Zanderighi, Giulia
2016-12-01
We present a new release of the QCDLoop library based on a modern object-oriented framework. We discuss the available new features such as the extension to the complex masses, the possibility to perform computations in double and quadruple precision simultaneously, and useful caching mechanisms to improve the computational speed. We benchmark the performance of the new library, and provide practical examples of phenomenological implementations by interfacing this new library to Monte Carlo programs.
Testing of Error-Correcting Sparse Permutation Channel Codes
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill, V.; Orlov, Sergei S.
2008-01-01
A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
Cast Aluminum Structures Technology (CAST) Phase VI. Technology Transfer.
1980-04-01
and other aspects of the program was provided as follows: o Phase I--Preliminary Design Richard C. Jones o Phase II--Manufacturing Methods Richard G...Christner o Phase III--Detailed Design Richard C. Jones o Phase IV--Fabrication of Demonstration Richard G. Christner Articles and Production... Richard C. Jones, assisted by Carlos J. Romero, Christian K. Gunther, Cecil E. Parsons, and Donald D. Goehler; and by Walter Hyler of Battelle Columbus
Elementary and Advanced Computer Projects for the Physics Classroom and Laboratory
1992-12-01
are SPF/PC, MS Word, n3, Symphony, Mathematica, and FORTRAN. The authors' programs assist data analysis in particular laboratory experiments and make...assist data analysis in particular laboratory experiments and make use of the Monte Carlo and other numerical techniques in computer simulation and...the language of science and engineering in industry and government laboratories (although C is becoming a powerful competitor). RM/FORTRAN (cost $400
Akazawa, K; Nakamura, T; Moriguchi, S; Shimada, M; Nose, Y
1991-07-01
Small sample properties of the maximum partial likelihood estimates for Cox's proportional hazards model depend on the sample size, the true values of regression coefficients, covariate structure, censoring pattern and possibly baseline hazard functions. Therefore, it would be difficult to construct a formula or table to calculate the exact power of a statistical test for the treatment effect in any specific clinical trial. The simulation program, written in SAS/IML, described in this paper uses Monte-Carlo methods to provide estimates of the exact power for Cox's proportional hazards model. For illustrative purposes, the program was applied to real data obtained from a clinical trial performed in Japan. Since the program does not assume any specific function for the baseline hazard, it is, in principle, applicable to any censored survival data as long as they follow Cox's proportional hazards model.
Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd
2014-09-15
The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima, CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, the Nudged Elastic Band, and the umbrella sampling approaches. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request. Copyright © 2014 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bianchini, G.; Burgio, N.; Carta, M.
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
Evaluation of effective dose with chest digital tomosynthesis system using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Kim, Dohyeon; Jo, Byungdu; Lee, Youngjin; Park, Su-Jin; Lee, Dong-Hoon; Kim, Hee-Joung
2015-03-01
The chest digital tomosynthesis (CDT) system has recently been introduced and studied. This system offers the potential to be a substantial improvement over conventional chest radiography for lung nodule detection, and it reduces the radiation dose by using limited angles. The PC-based Monte Carlo program (PCXMC) simulation toolkit (STUK, Helsinki, Finland) is widely used to evaluate radiation dose in CDT systems. However, this toolkit has two significant limitations: PCXMC cannot describe a model for every individual patient, and it does not describe an accurate X-ray beam spectrum. In contrast, the Geant4 Application for Tomographic Emission (GATE) simulation can describe various phantom sizes for individual patients together with a proper X-ray spectrum. However, few studies have been conducted to evaluate effective dose in CDT systems with the GATE Monte Carlo simulation toolkit. The purpose of this study was to evaluate the effective dose in a virtual infant chest phantom in the posterior-anterior (PA) view in a CDT system using GATE simulation. We obtained the effective dose at different tube angles by applying the dose actor function in GATE, which is commonly used in medical radiation dosimetry. The results indicated that GATE simulation was useful for estimating the distribution of absorbed dose. Consequently, we obtained an acceptable distribution of effective dose at each projection. These results indicate that GATE simulation can be an alternative method for calculating effective dose in CDT applications.
Explanation of random experiment scheduling and its application to space station analysis
NASA Technical Reports Server (NTRS)
Moore, J. E.
1970-01-01
The capability of the McDonnell-Douglas Phase B space station concept to complete the Blue Book experiment program is analyzed, and the Random Experiment Program with Resource Impact (REPRI), which was used to generate the data, is described. The results indicate that station manpower and electrical power are the two resources which will constrain the amount of the Blue Book program that the station can complete. The station experiment program and its resource requirements are sensitive to manpower and electrical power near the levels of 13.5 men and 11 kilowatts. Continuous artificial gravity experiments have much less impact on the experiment program than experiments using separate artificial gravity periods. Station storage volume presently allocated for the FPE's and their supplies (1600 cu ft) is more than adequate. The REPRI program uses the Monte Carlo technique to generate a set of feasible experiment schedules for a space station. The schedules are statistically analyzed to determine the impact of the station experiment program resource requirements on the station concept. Also, the sensitivity of the station concept to one or more resources is assessed.
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.
2012-12-01
Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) It explicitly adjusts the statistical significance to any bias introduced by variance reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric, computer-intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both of the programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is in the public domain, provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). 
Different examples (with simulated and real data) are described in this paper to corroborate the methodology and the implementation of these two new programs.
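The two ingredients of the approach described above, the Lomb-Scargle periodogram for uneven sampling and a permutation test for significance, can be illustrated compactly. This is a minimal Python sketch of the classical Lomb-Scargle formula and of the permutation idea (shuffling the values against the fixed, uneven time stamps), not the Fortran 77 programs themselves; all names and the 95% level are assumptions for the example.

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram of an unevenly sampled series."""
    y = y - y.mean()
    var = y.var()
    p = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # phase offset tau makes the estimate invariant to time shifts
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2.0 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        p[i] = ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s)) / (2.0 * var)
    return p

def permutation_threshold(t, y, freqs, n_perm=200, level=0.95, seed=1):
    """Significance level for the highest peak by permutation: shuffling
    y against the fixed t destroys any periodic signal while keeping the
    sampling pattern and the marginal distribution of the values."""
    rng = np.random.default_rng(seed)
    maxima = [lomb_scargle(t, rng.permutation(y), freqs).max()
              for _ in range(n_perm)]
    return np.quantile(maxima, level)
```

A sinusoid sampled at random times is recovered at the correct frequency, with a peak well above the permutation threshold.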
Computer Simulation of Electron Positron Annihilation Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, y
2003-10-02
With the launching of the Next Linear Collider coming closer and closer, there is a pressing need for physicists to develop a fully-integrated computer simulation of the e+e- annihilation process at a center-of-mass energy of 1 TeV. A simulation program acts as the template for future experiments. Either new physics will be discovered, or current theoretical uncertainties will shrink due to more accurate higher-order radiative correction calculations. The existence of an efficient and accurate simulation will help us understand the new data and validate (or veto) some of the theoretical models developed to explain new physics. It should handle well interfaces between different sectors of physics, e.g., interactions happening at parton levels well above the QCD scale, which are described by perturbative QCD, and interactions happening at much lower energy scales, which combine partons into hadrons. Also, it should achieve competitive speed in real time when the complexity of the simulation increases. This thesis contributes some tools that will be useful for the development of such simulation programs. We begin our study by the development of a new Monte Carlo algorithm intended to perform efficiently in selecting weight-1 events when multiple parameter dimensions are strongly correlated. The algorithm first seeks to model the peaks of the distribution by features, adapting these features to the function using the EM algorithm. The representation of the distribution provided by these features is then improved using the VEGAS algorithm for the Monte Carlo integration. The two strategies mesh neatly into an effective multi-channel adaptive representation. We then present a new algorithm for the simulation of parton shower processes in high energy QCD. We want to find an algorithm which is free of negative weights, produces its output as a set of exclusive events, and whose total rate exactly matches the full Feynman amplitude calculation. 
Our strategy is to create the whole QCD shower as a tree structure generated by a multiple Poisson process. Working with the whole shower allows us to include correlations between gluon emissions from different sources. QCD destructive interference is controlled by the implementation of "angular ordering," as in the HERWIG Monte Carlo program. We discuss methods for systematic improvement of the approach to include higher-order QCD effects.
NASA Astrophysics Data System (ADS)
Miloichikova, I. A.; Bespalov, V. I.; Krasnykh, A. A.; Stuchebrov, S. G.; Cherepennikov, Yu. M.; Dusaev, R. R.
2018-04-01
Simulation by the Monte Carlo method is widely used to calculate the character of ionizing radiation interaction with matter. A wide variety of programs based on the given method allows users to choose the most suitable package for solving computational problems. In turn, it is important to know exactly the restrictions of numerical systems in order to avoid gross errors. Results of an estimation of the feasibility of applying the program PCLab (Computer Laboratory, version 9.9) to numerical simulation of the electron energy distribution absorbed in beryllium, aluminum, gold, and water for industrial, research, and clinical beams are presented. The data obtained using the program PCLab and the programs ITS and Geant4, the most popular software packages for solving such problems, are presented in graphic form. A comparison and an analysis of the results obtained demonstrate the feasibility of applying the program PCLab to the simulation of the absorbed energy distribution and dose of electrons in various materials for energies in the range 1-20 MeV.
NASA Astrophysics Data System (ADS)
Rambalakos, Andreas
Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy, comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprised of three elements to parallel systems comprised of up to six elements. These newly developed expressions will be used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent fastener sequential failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. 
The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.
GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA
NASA Astrophysics Data System (ADS)
Spiechowicz, J.; Kostur, M.; Machura, L.
2015-06-01
This work presents an updated and extended guide on methods of properly accelerating the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well known phenomenon of noise-induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: the Gaussian white noise, the white Poissonian noise and the dichotomous process, also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can reach the astonishing order of about 3000 when compared to a typical CPU. This number significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research in some cases.
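The core numerical pattern behind such GPU Monte Carlo integrations is an ensemble of stochastic trajectories advanced in lock-step, one thread per trajectory. The following is a CPU-side NumPy sketch of that pattern for the simplest of the cases mentioned, an overdamped particle in a tilted washboard potential driven by Gaussian white noise, integrated with the Euler-Maruyama scheme; the vectorized inner loop maps directly onto a CUDA kernel. The function, parameters, and potential are illustrative assumptions, not the paper's models.

```python
import numpy as np

def euler_maruyama_ensemble(n_paths=1000, n_steps=3000, dt=0.01,
                            F=0.5, D=0.2, seed=7):
    """Ensemble Euler-Maruyama integration of the overdamped Langevin
    equation  dx = (-sin x + F) dt + sqrt(2 D) dW  (tilted washboard
    potential, Gaussian white noise).  All paths advance in lock-step,
    the same data-parallel pattern used with one CUDA thread per
    trajectory on a GPU.  Returns the ensemble-mean drift velocity."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    amp = np.sqrt(2.0 * D * dt)             # noise increment amplitude
    for _ in range(n_steps):
        x += (-np.sin(x) + F) * dt + amp * rng.standard_normal(n_paths)
    return float(np.mean(x)) / (n_steps * dt)
```

Above the critical tilt (F > 1) the ensemble runs down the washboard with a velocity near the deterministic value; with no tilt the symmetric potential yields no net transport.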
NASA Astrophysics Data System (ADS)
Was, Z.
2017-06-01
The status of the τ lepton decay Monte Carlo generator TAUOLA, its main applications and recent developments are reviewed. It is underlined that in recent efforts on the development of new hadronic currents, the multi-dimensional nature of the distributions of the experimental data must be treated with great care: the lesson from comparisons and fits to the BaBar and Belle data is recalled. It was found that, as in the past at the time of comparisons with CLEO and ALEPH data, proper fitting to as detailed as possible a representation of the experimental data is essential for the appropriate development of models of τ decay dynamics. This multi-dimensional nature of the distributions is also important for observables where τ leptons are used to constrain experimental data. In the later part of the presentation, the use of the TAUOLA program for the phenomenology of W, Z and H decays at the LHC is addressed, in particular in the context of Higgs boson parity measurements. Some new results relevant for QED lepton pair emission are mentioned as well.
ETARA PC version 3.3 user's guide: Reliability, availability, maintainability simulation model
NASA Technical Reports Server (NTRS)
Hoffman, David J.; Viterna, Larry A.
1991-01-01
A user's manual describing an interactive, menu-driven, personal computer based Monte Carlo reliability, availability, and maintainability simulation program called event time availability reliability (ETARA) is discussed. Given a reliability block diagram representation of a system, ETARA simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair intervals as a function of exponential and/or Weibull distributions. Availability parameters such as equivalent availability, state availability (percentage of time as a particular output state capability), continuous state duration and number of state occurrences can be calculated. Initial spares allotment and spares replenishment on a resupply cycle can be simulated. The number of block failures are tabulated both individually and by block type, as well as total downtime, repair time, and time waiting for spares. Also, maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can be calculated over a cumulative period of time or at specific points in time.
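The central mechanism described above, sampling alternating failure and repair intervals over a mission period and tallying uptime, can be shown in miniature. This is an illustrative sketch of that general Monte Carlo availability idea for a single repairable block with exponential intervals; it is not the ETARA program, which additionally supports Weibull distributions, reliability block diagrams, spares, and state tabulation.

```python
import random

def simulate_availability(mtbf, mttr, horizon, n_runs=400, seed=42):
    """Monte Carlo estimate of the availability of one repairable block:
    alternate exponentially distributed up intervals (mean mtbf) and
    repair intervals (mean mttr) over a mission horizon, and return the
    fraction of mission time spent up, averaged over n_runs histories."""
    rng = random.Random(seed)
    total_up = 0.0
    for _ in range(n_runs):
        t = 0.0
        while t < horizon:
            up = rng.expovariate(1.0 / mtbf)    # time to next failure
            total_up += min(up, horizon - t)    # clip uptime at horizon
            t += up
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)    # repair interval
    return total_up / (n_runs * horizon)
```

For exponential intervals the simulated value converges to the analytic steady availability mtbf / (mtbf + mttr), a convenient check on the sampler.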
Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established and based on Bird's 1994 algorithms written in Fortran 77, and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.
Searching for efficient Markov chain Monte Carlo proposal kernels
Yang, Ziheng; Rodríguez, Carlos E.
2013-01-01
Markov chain Monte Carlo (MCMC) or the Metropolis–Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis–Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals. PMID:24218600
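The Bactrian kernel described above is straightforward to implement: the proposal is an equal mixture of two Gaussians centered at ±m (scaled by the step size), so values near the current state are rarely proposed, and because the kernel is symmetric the acceptance ratio is the plain Metropolis ratio. The sketch below illustrates this construction on a toy target; the function names and tuning values are assumptions for the example, not the paper's code.

```python
import numpy as np

def bactrian_mh(logpi, x0, n, sigma=2.0, m=0.95, seed=3):
    """Metropolis-Hastings with a Bactrian proposal: a symmetric
    two-humped kernel, an equal mixture of N(-m, 1 - m^2) and
    N(+m, 1 - m^2) scaled by sigma, which avoids proposing values
    close to the current state."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    out = np.empty(n)
    s = np.sqrt(1.0 - m * m)
    for i in range(n):
        z = rng.choice((-1.0, 1.0)) * m + s * rng.standard_normal()
        xp = x + sigma * z
        # symmetric kernel -> plain Metropolis acceptance ratio
        if rng.random() < np.exp(min(0.0, logpi(xp) - logpi(x))):
            x = xp
        out[i] = x
    return out
```

Run against a standard normal target, the chain reproduces the target's mean and standard deviation; comparing autocorrelation times against a Gaussian proposal at the same scale is the kind of efficiency measurement the paper reports.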
Survival estimation and the effects of dependency among animals
Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.
1995-01-01
Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
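The core Monte Carlo idea above, injecting a known amount of within-pair dependency and comparing the resulting empirical variance of the survival estimate against the independence-based value, can be reproduced with a toy fate model. In the sketch below, with probability rho both pair members share a single Bernoulli fate (giving correlation rho), so the variance of the estimated rate is inflated by a factor (1 + rho) over the naive binomial variance. This is an illustrative model, not the authors' Kaplan-Meier / Cormack-Jolly-Seber analysis.

```python
import numpy as np

def survival_rate_variance(p=0.8, n_pairs=100, rho=0.5, n_sims=4000, seed=11):
    """Monte Carlo illustration of how within-pair dependency inflates
    the variance of a survival-rate estimate.  With probability rho both
    pair members share one Bernoulli fate; otherwise fates are drawn
    independently.  Returns (empirical variance of the estimated rate,
    naive binomial variance p(1-p)/n assuming independence)."""
    rng = np.random.default_rng(seed)
    n = 2 * n_pairs
    est = np.empty(n_sims)
    for i in range(n_sims):
        shared = rng.random(n_pairs) < rho            # which pairs are linked
        a = rng.random(n_pairs) < p                   # fate of first member
        b = np.where(shared, a, rng.random(n_pairs) < p)
        est[i] = (a.sum() + b.sum()) / n
    return float(est.var()), p * (1.0 - p) / n
```

The empirical variance comes out close to (1 + rho) times the naive value, mirroring the inflation the authors quantify for paired black brant.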
Uusimäki, Toni; Margaris, Georgios; Trohidou, Kalliopi; Granitzer, Petra; Rumpf, Klemens; Sezen, Meltem; Kothleitner, Gerald
2013-12-07
Magnetite nanoparticles embedded within the pores of a mesoporous silicon template have been characterized using electron tomography. Linear least squares optimization was used to fit an arbitrary ellipsoid to each segmented particle from the three dimensional reconstruction. It was then possible to calculate the demagnetizing factors and the direction of the shape anisotropy easy axis for every particle. The demagnetizing factors, along with the knowledge of spatial and volume distribution of the superparamagnetic nanoparticles, were used as a model for magnetic Monte Carlo simulations, yielding zero field cooling/field cooling and magnetic hysteresis curves, which were compared to the measured ones. Additionally, the local curvature of the magnetite particles' docking site within the mesoporous silicon's surface was obtained in two different ways and a comparison will be given. A new iterative semi-automatic image alignment program was written and the importance of image segmentation for a truly objective analysis is also addressed.
Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach
NASA Technical Reports Server (NTRS)
Mata, Carlos T.; Rakov, V. A.
2008-01-01
There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate origins of downward propagating leaders and a lognormal distribution to generate returns stroke peak currents. Downward leaders propagate vertically downward and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for 10,000 years with an assumed ground flash density and peak current distributions, and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.
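The simulation loop described above can be sketched compactly: uniform leader origins, lognormal peak currents, striking distances from the electrogeometric model, and attachment to whichever object's striking sphere a vertically descending leader enters first. The sketch below uses the common electrogeometric relation r = 10 I^0.65 (r in m, I in kA) and typical first-stroke lognormal statistics (median ~31 kA, sigma of ln I ~0.48) as stand-in assumptions; it is an illustration of the method, not the NASA tool.

```python
import math
import random

def lightning_mc(objects, area, n_years, flash_density, seed=5):
    """Monte Carlo tally of direct lightning attachments via the
    electrogeometric model.  objects: list of (x, y, height) in meters;
    area: (xmax, ymax) in meters; flash_density in flashes/km^2/year.
    Returns (per-object hit counts, ground-flash count)."""
    rng = random.Random(seed)
    n_flashes = round(n_years * flash_density * area[0] * area[1] * 1e-6)
    hits = [0] * len(objects)
    ground = 0
    for _ in range(n_flashes):
        x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
        i_peak = 31.0 * math.exp(0.48 * rng.gauss(0.0, 1.0))  # kA, lognormal
        r = 10.0 * i_peak ** 0.65                             # striking distance, m
        best, best_z = None, r          # ground attachment occurs at height r
        for k, (xo, yo, h) in enumerate(objects):
            d2 = (x - xo) ** 2 + (y - yo) ** 2
            if d2 <= r * r:
                # height at which the descending leader first comes
                # within r of this object's tip
                z = h + math.sqrt(r * r - d2)
                if z > best_z:
                    best, best_z = k, z
        if best is None:
            ground += 1
        else:
            hits[best] += 1
    return hits, ground
```

For a single tall mast near the center of a small area, the mast intercepts the large majority of flashes, as the electrogeometric concept predicts; binning the peak currents of the hits yields the per-object current distribution the abstract mentions.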
McNally, Kevin; Cotton, Richard; Cocker, John; Jones, Kate; Bartels, Mike; Rick, David; Price, Paul; Loizou, George
2012-01-01
There are numerous biomonitoring programs, both recent and ongoing, to evaluate environmental exposure of humans to chemicals. Due to the lack of exposure and kinetic data, the correlation of biomarker levels with exposure concentrations leads to difficulty in utilizing biomonitoring data for biological guidance values. Exposure reconstruction or reverse dosimetry is the retrospective interpretation of external exposure consistent with biomonitoring data. We investigated the integration of physiologically based pharmacokinetic modelling, global sensitivity analysis, Bayesian inference, and Markov chain Monte Carlo simulation to obtain a population estimate of inhalation exposure to m-xylene. We used exhaled breath and venous blood m-xylene and urinary 3-methylhippuric acid measurements from a controlled human volunteer study in order to evaluate the ability of our computational framework to predict known inhalation exposures. We also investigated the importance of model structure and dimensionality with respect to its ability to reconstruct exposure. PMID:22719759
Quantum Monte Carlo Studies of Bulk and Few- or Single-Layer Black Phosphorus
NASA Astrophysics Data System (ADS)
Shulenburger, Luke; Baczewski, Andrew; Zhu, Zhen; Guan, Jie; Tomanek, David
2015-03-01
The electronic and optical properties of phosphorus depend strongly on the structural properties of the material. Given the limited experimental information on the structure of phosphorene, it is natural to turn to electronic structure calculations to provide this information. Unfortunately, given phosphorus' propensity to form layered structures bound by van der Waals interactions, standard density functional theory methods provide results of uncertain accuracy. Recently, it has been demonstrated that Quantum Monte Carlo (QMC) methods achieve high accuracy when applied to solids in which van der Waals forces play a significant role. In this talk, we will present QMC results from our recent calculations on black phosphorus, focusing on the structural and energetic properties of monolayers, bilayers and bulk structures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Mexican-US cooperation on renewable energy programs
NASA Astrophysics Data System (ADS)
Waddle, D. B.
1991-04-01
I traveled to Mexico on April 1 with Pete Smith and Chris Rovero of Oak Ridge Associated Universities on a mission sponsored by the Agency for International Development (AID) to initiate a technical support project with Mexican government and private institutions. We met with officials of AID; the Programa Nacional de Solidaridad; the Laboratorio de Energia Solar; Instituto Investigaciones de Electricidad; Comision Federal de Electricidad; Compania de Luz y Fuerza del Centro; and several private firms to discuss the terms of support. An aide-mémoire was drafted outlining the scope of future assistance. I traveled to San Jose, Costa Rica on April 7, 1991. There I met with Carlos Rodriguez, manager of Cooperativa Electrica de San Carlos, and Federico Baltodano, the director of BEL Ingenieria, to discuss the San Lorenzo hydroelectric project. Details of an engineering and financial analysis outlining the costs and benefits of adding daily pondage to the project were discussed, as were the project development schedule and other issues. I returned to Oak Ridge on April 9, 1991.
Highlights of Transient Plume Impingement Model Validation and Applications
NASA Technical Reports Server (NTRS)
Woronowicz, Michael
2011-01-01
This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.
1983-09-01
…duplicate a continuous function on a digital computer, and thus the machine representation of the GMA is only a close approximation of the continuous … error process. Thus, the manner in which the GMA process is digitally replicated has an effect on the results of the simulation. The parameterization of …
NASA Technical Reports Server (NTRS)
Truscello, V.
1972-01-01
A major concern in the integration of a radioisotope thermoelectric generator (RTG) with a spacecraft designed to explore the outer planets is the effect of the emitted radiation on the normal operation of scientific instruments. The necessary techniques and tools were developed to allow accurate calculation of the neutron and gamma spectrum emanating from the RTG. The specific sources of radiation were identified and quantified. Monte Carlo techniques were then employed to perform the nuclear transport calculations. The results of these studies are presented. An extensive experimental program was initiated to measure the response of a number of scientific components to the nuclear radiation.
NASA Technical Reports Server (NTRS)
Powell, Richard W.
1998-01-01
This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three- and six-degree-of-freedom Monte Carlo simulations, which include navigation, aerodynamic, mass-property and atmospheric-density uncertainties, are presented.
Neutronic Calculation Analysis for CN HCCB TBM-Set
NASA Astrophysics Data System (ADS)
Cao, Qixiang; Zhao, Fengchao; Zhao, Zhou; Wu, Xinghua; Li, Zaixin; Wang, Xiaoyu; Feng, Kaiming
2015-07-01
Using the Monte Carlo transport code MCNP, neutronic calculation analysis for China helium cooled ceramic breeder test blanket module (CN HCCB TBM) and the associated shield block (together called TBM-set) has been carried out based on the latest design of HCCB TBM-set and C-lite model. Key nuclear responses of HCCB TBM-set, such as the neutron flux, tritium production rate, nuclear heating and radiation damage, have been obtained and discussed. These nuclear performance data can be used as the basic input data for other analyses of HCCB TBM-set, such as thermal-hydraulics, thermal-mechanics and safety analysis. supported by the Major State Basic Research Development Program of China (973 Program) (No. 2013GB108000)
NASA Astrophysics Data System (ADS)
Leow, Shin Woei; Corrado, Carley; Osborn, Melissa; Carter, Sue A.
2013-09-01
Luminescent solar concentrators (LSCs) have the ability to receive light from a wide range of angles, concentrating the captured light onto small photoactive areas. This enables greater incorporation of LSCs into building designs as windows, skylights and wall claddings, in addition to rooftop installations of current solar panels. By using relatively cheap luminescent dyes and acrylic waveguides to concentrate light onto smaller photovoltaic (PV) cells, this technology has the potential to approach grid price parity. We employ a panel design in which the front-facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows for flexibility in determining the placement and percentage coverage of PV cells during the design process, balancing reabsorption losses against the power output and level of light concentration desired. To aid in design optimization, a Monte Carlo ray-tracing program was developed to study the transport of photons and loss mechanisms in LSC panels. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters, with interactions of photons in the panel determined by comparing calculated probabilities with random number generators. LSC panels with multiple dyes or layers can also be simulated. Analysis of the results reveals optimal panel dimensions and PV cell layouts for maximum power output for a given dye concentration, absorption/emission spectrum and quantum efficiency.
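The core photon-transport loop of such a ray tracer can be sketched as below. The dye absorption probability, quantum efficiency and reabsorption probability are placeholder constants standing in for the measured spectra the actual program imports; only the total-internal-reflection geometry of an acrylic slab (n ≈ 1.49) is taken from standard optics.

```python
import math
import random

N_ACRYLIC = 1.49
CRIT_ANGLE = math.asin(1.0 / N_ACRYLIC)   # total-internal-reflection limit

def trace_photons(n_photons, p_absorb=0.8, qe=0.95, p_reabsorb=0.1):
    """Return the fraction of incident photons delivered to the edge PV cell."""
    collected = 0
    for _ in range(n_photons):
        if random.random() > p_absorb:
            continue                      # transmitted, never absorbed by dye
        while True:
            if random.random() > qe:
                break                     # non-radiative loss in the dye
            # Isotropic re-emission: polar angle measured from the panel normal.
            theta = math.acos(1.0 - 2.0 * random.random())
            if min(theta, math.pi - theta) < CRIT_ANGLE:
                break                     # escapes through the face (cone loss)
            if random.random() > p_reabsorb:
                collected += 1            # guided to the edge PV cell
                break
            # else: reabsorbed by another dye molecule; emit again

    return collected / n_photons

print(trace_photons(100000))
```

Each probability comparison against `random.random()` mirrors the mechanism the abstract describes; swapping the constants for wavelength-dependent lookups of measured spectra turns this skeleton into the full simulation.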
Monte Carlo event generators in atomic collisions: A new tool to tackle the few-body dynamics
NASA Astrophysics Data System (ADS)
Ciappina, M. F.; Kirchner, T.; Schulz, M.
2010-04-01
We present a set of routines to produce theoretical event files, for both single and double ionization of atoms by ion impact, based on a Monte Carlo event generator (MCEG) scheme. Such event files are the theoretical counterpart of the data obtained from a kinematically complete experiment; i.e. they contain the momentum components of all collision fragments for a large number of ionization events. Among the advantages of working with theoretical event files is the possibility to incorporate the conditions present in a real experiment, such as the uncertainties in the measured quantities. Additionally, by manipulating them it is possible to generate any type of cross sections, especially those that are usually too complicated to compute with conventional methods due to a lack of symmetry. Consequently, the numerical effort of such calculations is dramatically reduced. We show examples for both single and double ionization, with special emphasis on a new data analysis tool, called four-body Dalitz plots, developed very recently. Program summaryProgram title: MCEG Catalogue identifier: AEFV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2695 No. 
of bytes in distributed program, including test data, etc.: 18 501 Distribution format: tar.gz Programming language: FORTRAN 77 with parallelization directives using scripting Computer: Single machines using Linux and Linux servers/clusters (with cores with any clock speed, cache memory and bits in a word) Operating system: Linux (any version and flavor) and FORTRAN 77 compilers Has the code been vectorised or parallelized?: Yes RAM: 64-128 kBytes (the codes are very CPU-intensive) Classification: 2.6 Nature of problem: The code deals with single and double ionization of atoms by ion impact. Conventional theoretical approaches aim at a direct calculation of the corresponding cross sections. This has the important shortcoming that it is difficult to account for the experimental conditions when comparing results to measured data. In contrast, the present code generates theoretical event files of the same type as are obtained in a real experiment. From these event files any type of cross sections can be easily extracted. The theoretical schemes are based on distorted wave formalisms for both processes of interest. Solution method: The codes employ a Monte Carlo Event Generator based on theoretical formalisms to generate event files for both single and double ionization. One of the main advantages of having access to theoretical event files is the possibility of adding the conditions present in real experiments (parameter uncertainties, environmental conditions, etc.) and of incorporating additional physics in the resulting event files (e.g. elastic scattering or other interactions absent in the underlying calculations). Additional comments: The computational time can be dramatically reduced if a large number of processors is used. Since the codes involve no communication between processes, it is possible to achieve an efficiency of 100% (in practice this is reduced by the queuing waiting time). 
Running time: Times vary according to the process, single or double ionization, to be simulated, the number of processors and the type of theoretical model. The typical running time is between several hours and up to a few weeks.
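The event-file idea can be illustrated with a short sketch. The momentum distribution below is an arbitrary Gaussian stand-in for the distorted-wave physics of the actual MCEG code, and the resolution value is hypothetical; the point is only that each event stores the fragment momenta, and experimental conditions are folded in afterwards by smearing, exactly as one would treat real data.

```python
import random

def generate_events(n_events):
    """Event file: one (electron, recoil-ion) momentum pair per event."""
    events = []
    for _ in range(n_events):
        electron = [random.gauss(0.0, 1.0) for _ in range(3)]
        # Recoil ion balances the electron momentum (momentum conservation).
        recoil = [-p for p in electron]
        events.append((electron, recoil))
    return events

def smear(events, resolution=0.1):
    """Fold a Gaussian detector resolution into every momentum component."""
    return [([p + random.gauss(0.0, resolution) for p in e],
             [p + random.gauss(0.0, resolution) for p in r])
            for e, r in events]

raw = generate_events(10000)
measured = smear(raw)
# Any cross section can now be extracted by histogramming the event list,
# e.g. the longitudinal electron momentum distribution:
pz = [e[2] for e, _ in measured]
print(sum(pz) / len(pz))
```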
NASA Astrophysics Data System (ADS)
Adams, Mike; Smalian, Silva
2017-09-01
For nuclear waste packages, the expected dose rates and nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs like MicroShield® provide a range of results for each type of packaging. Stochastic programs like the Monte Carlo N-Particle Transport Code System (MCNP®), on the other hand, provide reliable results for complex geometries; however, this type of program requires a fully trained operator, and calculations are time-consuming. The problem is to choose an appropriate program for a specific geometry. We therefore compared the results of deterministic programs like MicroShield® with those of stochastic programs like MCNP®. These comparisons enable us to make a statement about the applicability of the various programs to chosen types of containers. We found that for thin-walled geometries, deterministic programs like MicroShield® are well suited to calculate the dose rate. For cylindrical containers with inner shielding, however, deterministic programs reach their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing. Results will be presented in the final abstract.
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
(U) Introduction to Monte Carlo Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hungerford, Aimee L.
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
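The "cook book" mechanics — sample a free-flight distance from the exponential transport kernel, then choose the interaction type from the cross sections — can be shown with a toy 1-D slab problem. The cross-section values here are illustrative only, not taken from the report.

```python
import math
import random

SIGMA_TOTAL = 1.0      # total macroscopic cross section (1/cm), illustrative
P_ABSORB = 0.3         # absorption probability per collision, illustrative

def transport_one_particle(slab_thickness):
    """Return 'absorbed', 'reflected' or 'transmitted' for a 1-D slab."""
    x, direction = 0.0, 1.0
    while True:
        # Distance to the next collision: s = -ln(xi) / Sigma_t
        s = -math.log(1.0 - random.random()) / SIGMA_TOTAL
        x += direction * s
        if x < 0.0:
            return "reflected"
        if x > slab_thickness:
            return "transmitted"
        if random.random() < P_ABSORB:
            return "absorbed"
        direction = random.choice((-1.0, 1.0))   # isotropic (1-D) scatter

tallies = {"absorbed": 0, "reflected": 0, "transmitted": 0}
for _ in range(100000):
    tallies[transport_one_particle(2.0)] += 1
print(tallies)
```

Every term of the transport equation appears as a sampling step: the streaming term as the exponential flight distance, the collision term as the absorption-versus-scatter choice, and the boundary conditions as the leakage tallies.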
A package of Linux scripts for the parallelization of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Badal, Andreu; Sempau, Josep
2006-09-01
Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme of a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators—such as RANLUX, RANECU or the Mersenne Twister—can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ˜5×10 and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. 
This program, in combination with clonEasy, makes it easy to run PENELOPE in parallel, without requiring specific libraries or significant alterations of the sequential code. Program summary 1Title of program:clonEasy Catalogue identifier:ADYD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYD_v1_0 Program obtainable from:CPC Program Library, Queen's University of Belfast, Northern Ireland Computer for which the program is designed and others in which it is operable:Any computer with a Unix style shell (bash), support for the Secure Shell protocol and a FORTRAN compiler Operating systems under which the program has been tested:Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1) Compilers:GNU FORTRAN g77 (Linux); g95 (Linux); Intel Fortran Compiler 7.1 (Linux) Programming language used:Linux shell (bash) script, FORTRAN 77 No. of bits in a word:32 No. of lines in distributed program, including test data, etc.:1916 No. of bytes in distributed program, including test data, etc.:18 202 Distribution format:tar.gz Nature of the physical problem:There are many situations where a Monte Carlo simulation involves a huge amount of CPU time. The parallelization of such calculations is a simple way of obtaining a relatively low statistical uncertainty using a reasonable amount of time. Method of solution:The presented collection of Linux scripts and auxiliary FORTRAN programs implement Secure Shell-based communication between a "master" computer and a set of "clones". The aim of this communication is to execute a code that performs a Monte Carlo simulation on all the clones simultaneously. The code is unique, but each clone is fed with a different set of random seeds. Hence, clonEasy effectively permits the parallelization of the calculation. Restrictions on the complexity of the program:clonEasy can only be used with programs that produce statistically independent results using the same code, but with a different sequence of random numbers. 
Users must choose the initialization values for the random number generator on each computer and combine the output from the different executions. A FORTRAN program to combine the final results is also provided. Typical running time:The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connexion bandwidth. Unusual features of the program:Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2Title of program:seedsMLCG Catalogue identifier:ADYE_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0 Program obtainable from:CPC Program Library, Queen's University of Belfast, Northern Ireland Computer for which the program is designed and others in which it is operable:Any computer with a FORTRAN compiler Operating systems under which the program has been tested:Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP) Compilers:GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows) Programming language used:FORTRAN 77 No. of bits in a word:32 Memory required to execute with typical data:500 kilobytes No. of lines in distributed program, including test data, etc.:492 No. of bytes in distributed program, including test data, etc.:5582 Distribution format:tar.gz Nature of the physical problem:Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. 
Multiplicative linear congruential generators (MLCG), or other generators that are based on them, such as RANECU, can be adapted to produce these sequences. Method of solution:For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo-random numbers. The calculated values initiate the generator in distant positions of the random number cycle and can be used, for instance, in a parallel simulation. The values are found using the formula S_J=(a^J S) MOD m, which gives the random value that will be generated after J iterations of the MLCG. Restrictions on the complexity of the program:The 32-bit length restriction for the integer variables in standard FORTRAN 77 limits the produced seeds to be separated by a distance smaller than 2^31, when the distance J is expressed as an integer value. The program allows the user to input the distance as a power of 10 for the purpose of efficiently splitting the sequence of generators with a very long period. Typical running time:The execution time depends on the parameters of the used MLCG and the distance between the generated seeds. The generation of 10^6 seeds separated by 10^12 units in the sequential cycle, for one of the MLCGs found in the RANECU generator, takes 3 s on a 2.4 GHz Intel Pentium 4 using the g77 compiler.
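The displacement formula S_J = (a^J · S) mod m is easy to evaluate for arbitrarily large jumps J in a language with built-in modular exponentiation, which sidesteps the 32-bit limitation of the FORTRAN 77 implementation. The multiplier/modulus pair below is one of the two L'Ecuyer MLCGs that RANECU combines.

```python
# One of the two MLCGs inside RANECU (L'Ecuyer's constants).
A, M = 40014, 2147483563

def skip_ahead(seed, jump):
    """Seed the MLCG would reach after `jump` iterations: (A**jump * seed) % M."""
    return (pow(A, jump, M) * seed) % M

def mlcg_next(seed):
    """One step of the MLCG itself."""
    return (A * seed) % M

# Disjoint starting seeds for, say, 4 parallel runs separated by 10**12 draws:
seeds = [skip_ahead(12345, i * 10**12) for i in range(4)]
print(seeds)
```

Feeding each parallel clone one of these seeds guarantees that the clones consume disjoint, consecutive stretches of the same random-number cycle, which is exactly the reliability requirement stated above.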
Image based Monte Carlo Modeling for Computational Phantom
NASA Astrophysics Data System (ADS)
Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican
2014-06-01
The evaluation of the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields; it is helpful for avoiding unnecessary radiation and decreasing harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct more realistic computational phantoms. However, manual description and verification of the models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn). The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection.
Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians
NASA Astrophysics Data System (ADS)
Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan
2018-02-01
Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and thus for which destructive interference is not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.
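For readers unfamiliar with the classical algorithm being compared against, a toy diffusion Monte Carlo run for the 1-D harmonic oscillator (a standard textbook case, not one of the paper's Hamiltonians) shows the walker-branching mechanics: walkers diffuse, and are replicated or killed according to the local potential, so the surviving population samples the ground state.

```python
import math
import random

def dmc(n_walkers=500, n_steps=2000, dt=0.01):
    """Toy DMC for V(x) = x^2/2; returns the growth-estimator energy."""
    walkers = [0.0] * n_walkers
    e_ref, e_trace = 0.5, []
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += random.gauss(0.0, math.sqrt(dt))        # free diffusion
            w = math.exp(-(0.5 * x * x - e_ref) * dt)    # branching weight
            for _ in range(int(w + random.random())):    # stochastic birth/death
                new.append(x)
        walkers = new or [0.0]
        # Population control: steer e_ref toward the target population size.
        e_ref += 0.1 * math.log(n_walkers / len(walkers))
        if step >= n_steps // 2:                         # discard equilibration
            e_trace.append(e_ref)
    return sum(e_trace) / len(e_trace)

energy = dmc()
print(energy)   # converges toward the exact ground-state energy 0.5
```

The paper's counterexamples are precisely Hamiltonians engineered so that this walker population takes exponentially long to follow the adiabatic ground state.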
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, S; Ji, Y; Kim, K
Purpose: A diagnostic multileaf collimator (MLC) was designed for diagnostic radiography dose reduction. Monte Carlo simulation was used to evaluate the efficiency of the shielding material for producing the leaves of the multileaf collimator. Material & Methods: The general radiography unit (Rex-650R, Listem, Korea) was modeled with Monte Carlo simulation (MCNPX, LANL, USA), and we used the SRS-78 program to calculate the energy spectrum of the tube voltage (80, 100, 120 kVp). The shielding material was SKD 11 alloy tool steel, composed of 1.6% carbon (C), 0.4% silicon (Si), 0.6% manganese (Mn), 5% chromium (Cr), 1% molybdenum (Mo), and vanadium (V). Its density was 7.89 g/cm3. We simulated the leaves of the diagnostic MLC made of SKD 11 together with the general radiography unit. We calculated the efficiency of the diagnostic MLC using the tally6 card of MCNPX as a function of energy. Results: The diagnostic MLC consisted of 25 individual metal shielding leaves on both sides, with dimensions of 10 × 0.5 × 0.5 cm3. The leaves of the MLC were controlled by motors positioned on both sides of the MLC. Depending on the energy (tube voltage), the shielding efficiency of the MLC in the Monte Carlo simulation was 99% (80 kVp), 96% (100 kVp) and 93% (120 kVp). Conclusion: We verified the efficiency of the diagnostic MLC fabricated from SKD 11 alloy tool steel. Based on the results, the diagnostic MLC was designed. We will build the diagnostic MLC for dose reduction in diagnostic radiography.
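As a rough plausibility check of the quoted efficiencies, a narrow-beam Beer-Lambert model can be applied to a 0.5 cm steel leaf. The linear attenuation coefficients below are hypothetical effective values chosen for illustration; the study itself properly folds in the full SRS-78 spectrum via MCNPX tallies.

```python
import math

def shielding_efficiency(mu_per_cm, thickness_cm):
    """Fraction of a narrow beam stopped by a leaf: 1 - exp(-mu * t)."""
    return 1.0 - math.exp(-mu_per_cm * thickness_cm)

LEAF_THICKNESS = 0.5   # cm, as in the design above

# Hypothetical effective attenuation coefficients (1/cm) for three tube
# voltages; harder beams see a smaller effective mu.
for kvp, mu in ((80, 9.0), (100, 6.5), (120, 5.0)):
    eff = shielding_efficiency(mu, LEAF_THICKNESS)
    print(kvp, "kVp:", round(eff, 3))
```

Even this crude single-coefficient model reproduces the trend in the Results section: shielding efficiency falls as the tube voltage (and hence beam hardness) rises.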
Dosimetry of 192Ir sources used for endovascular brachytherapy
NASA Astrophysics Data System (ADS)
Reynaert, N.; Van Eijkeren, M.; Taeymans, Y.; Thierens, H.
2001-02-01
An in-phantom calibration technique for 192Ir sources used for endovascular brachytherapy is presented. Three different source lengths were investigated. The calibration was performed in a solid phantom using a Farmer-type ionization chamber at source to detector distances ranging from 1 cm to 5 cm. The dosimetry protocol for medium-energy x-rays extended with a volume-averaging correction factor was used to convert the chamber reading to dose to water. The air kerma strength of the sources was determined as well. EGS4 Monte Carlo calculations were performed to determine the depth dose distribution at distances ranging from 0.6 mm to 10 cm from the source centre. In this way we were able to convert the absolute dose rate at 1 cm distance to the reference point chosen at 2 mm distance. The Monte Carlo results were confirmed by radiochromic film measurements, performed with a double-exposure technique. The dwell times to deliver a dose of 14 Gy at the reference point were determined and compared with results given by the source supplier (CORDIS). They determined the dwell times from a Sievert integration technique based on the source activity. The results from both methods agreed to within 2% for the 12 sources that were evaluated. A Visual Basic routine that superimposes dose distributions, based on the Monte Carlo calculations and the in-phantom calibration, onto intravascular ultrasound images is presented. This routine can be used as an online treatment planning program.
MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package
Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; ...
2015-11-28
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example we used object oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages which facilitates porting instrument models from those codes. Furthermore it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enright, S; Asprinio, A; Lu, L
2014-06-01
Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic™ EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse™ radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. The accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%–99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
2017-06-09
Determining Potential in the Army's Officer Corps: Leveraging Technology to Manage and Promote Active Duty Captains Based on Merit. Author: Major Ross Carlos Pixler.
Technology Development Risk Assessment for Space Transportation Systems
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Godsell, Aga M.; Go, Susie
2006-01-01
A new approach for assessing development risk associated with technology development projects is presented. The method represents technology evolution in terms of sector-specific discrete development stages. A Monte Carlo simulation is used to generate development probability distributions based on statistical models of the discrete transitions. Development risk is derived from the resulting probability distributions and specific program requirements. Two sample cases are discussed to illustrate the approach, a single rocket engine development and a three-technology space transportation portfolio.
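The stage-transition idea described above can be sketched as follows. The stage names and triangular duration parameters are invented for illustration, not taken from the paper; each Monte Carlo trial walks a technology through the discrete stages, and the resulting distribution of total development time is compared against a program schedule requirement to read off risk.

```python
import random

# Hypothetical sector-specific development stages with (min, mode, max)
# durations in years for a triangular model.
STAGES = [
    ("concept",        (0.5, 1.0, 2.0)),
    ("component test", (1.0, 2.0, 4.0)),
    ("prototype",      (1.0, 3.0, 6.0)),
]

def one_development():
    """Total development time for a single Monte Carlo trial."""
    return sum(random.triangular(lo, hi, mode)
               for _, (lo, mode, hi) in STAGES)

def schedule_risk(deadline_years, n_trials=20000):
    """Probability of NOT completing all stages by the deadline."""
    late = sum(one_development() > deadline_years for _ in range(n_trials))
    return late / n_trials

print(schedule_risk(7.0))
```

For a portfolio, the same trial loop would simply run several technologies in parallel and apply the program requirement to the whole set.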
pytc: Open-Source Python Software for Global Analyses of Isothermal Titration Calorimetry Data.
Duvvuri, Hiranmayi; Wheeler, Lucas C; Harms, Michael J
2018-05-08
Here we describe pytc, an open-source Python package for global fits of thermodynamic models to multiple isothermal titration calorimetry experiments. Key features include simplicity, the ability to implement new thermodynamic models, a robust maximum likelihood fitter, a fast Bayesian Markov-Chain Monte Carlo sampler, rigorous implementation, extensive documentation, and full cross-platform compatibility. pytc fitting can be done using an application programming interface or via a graphical user interface. It is available for download at https://github.com/harmslab/pytc.
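The Bayesian MCMC idea behind such fitters can be illustrated with a minimal Metropolis sampler. This is not pytc's API (pytc wraps a more capable sampler), and the Gaussian "heat" model below is a stand-in for a real binding model: we infer the mean of simulated measurements and read the posterior off the chain.

```python
import math
import random

def log_likelihood(mu, data, sigma=1.0):
    """Gaussian log-likelihood of the data given parameter mu."""
    return -0.5 * sum(((d - mu) / sigma) ** 2 for d in data)

def metropolis(data, n_samples=5000, step=0.5):
    """Metropolis sampling of mu with a symmetric Gaussian proposal."""
    mu, samples = 0.0, []
    ll = log_likelihood(mu, data)
    for _ in range(n_samples):
        prop = mu + random.gauss(0.0, step)
        ll_prop = log_likelihood(prop, data)
        # Accept with probability min(1, exp(ll_prop - ll)).
        if math.log(random.random() + 1e-300) < ll_prop - ll:
            mu, ll = prop, ll_prop
        samples.append(mu)
    return samples

data = [random.gauss(2.0, 1.0) for _ in range(50)]
chain = metropolis(data)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
print(posterior_mean)   # close to the sample mean of the data
```

A global fit in the pytc sense simply sums log-likelihood terms over several experiments that share parameters, and samples the joint posterior the same way.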
CHARYBDIS: a black hole event generator
NASA Astrophysics Data System (ADS)
Harris, Christopher M.; Richardson, Peter; Webber, Bryan R.
2003-08-01
CHARYBDIS is an event generator which simulates the production and decay of miniature black holes at hadronic colliders as might be possible in certain extra dimension models. It interfaces via the Les Houches accord to general purpose Monte Carlo programs like HERWIG and PYTHIA which then perform the parton evolution and hadronization. The event generator includes the extra-dimensional `grey-body' effects as well as the change in the temperature of the black hole as the decay progresses. Various options for modelling the Planck-scale terminal decay are provided.
Asymmetry distributions and mass effects in dijet events at a polarized HERA
NASA Astrophysics Data System (ADS)
Maul, M.; Schäfer, A.; Mirkes, E.; Rädel, G.
1998-09-01
The asymmetry distributions for several kinematic variables are considered in order to find a systematic way to maximize the signal for the extraction of the polarized gluon density. The relevance of mass effects for the corresponding dijet cross section is discussed, and the different approximations for including mass effects are compared. Using the programs Pepsi and Mepjet, we also compare two different Monte Carlo (MC) approaches for simulating the expected signal in the dijet asymmetry at a polarized HERA.
United States Naval Hospital Ship Program: History, Evolution, and Configuration Management.
1987-12-01
named the CARLOS CHAGAS and was built in Rio de Janeiro in 1984. The basic ship configuration characteristics are as follows: dimension, in feet: 154.2 ... of 2,500 cubic feet. The ship's medical missions include trips to countries in the Caribbean Basin and the Amazon River in Brazil. In contrast to ...
2017-01-01
Foreword It gives me pleasure to introduce the 4th edition of the EGS Guidelines. The Third edition proved to be extremely successful, being translated into 7 languages with over 70000 copies being distributed across Europe; it has been downloadable, free, as a pdf file for the past 4 years. As one of the main objectives of the European Glaucoma Society has been to both educate and standardize glaucoma practice within the EU, these guidelines were structured so as to play their part. Glaucoma is a living specialty, with new ideas on causation, mechanisms and treatments constantly appearing. As a number of years have passed since the publication of the last edition, changes in some if not all of these ideas would be expected. For this new edition of the guidelines a number of editorial teams were created, each with responsibility for an area within the specialty; updating where necessary, introducing new diagrams and Flowcharts and ensuring that references were up to date. Each team had writers previously involved with the last edition as well as newer and younger members being co-opted. As soon as specific sections were completed they had further editorial comment to ensure cross referencing and style continuity with other sections. Overall guidance was the responsibility of Anders Heijl and Carlo Traverso. Tribute must be made to the Task Force whose efforts made the timely publication of the new edition possible. 
Roger Hitchings Chairman of the EGS Foundation www.eugs.org The Guidelines Writers and Contributors Augusto Azuara Blanco Luca Bagnasco Alessandro Bagnis Keith Barton Christoph Baudouin Boel Bengtsson Alain Bron Francesca Cordeiro Barbara Cvenkel Philippe Denis Christoph Faschinger Panayiota Founti Stefano Gandolfi David Garway Heath Francisco Goni Franz Grehn Anders Heijl Roger Hitchings Gabor Hollo Tony Hommer Michele Iester Jost Jonas Yves Lachkar Giorgio Marchini Frances Meier Gibbons Stefano Miglior Marta Misiuk-Hojo Maria Musolino Jean Philippe Nordmann Norbert Pfeiffer Luis Abegao Pinto Luca Rossetti John Salmon Leo Schmetterer Riccardo Scotto Tarek Shaarawy Ingeborg Stalmans Gordana Sunaric Megevand Ernst Tamm John Thygesen Fotis Topouzis Carlo Enrico Traverso Anja Tuulonen Ananth Viswanathan Thierry Zeyen The Guidelines Task Force Luca Bagnasco Anders Heijl Carlo Enrico Traverso Augusto Azuara Blanco Alessandro Bagnis David Garway Heath Michele Iester Yves Lachkar Ingeborg Stalmans Gordana Sunaric Mégevand Fotis Topouzis Anja Tuulonen Ananth Viswanathan The EGS Executive Committee Carlo Enrico Traverso (President) Anja Tuulonen (Vice President) Roger Hitchings (Past President) Anton Hommer (Treasurer) Barbara Cvenkel Julian Garcia Feijoo David Garway Heath Norbert Pfeiffer Ingeborg Stalmans The Board of the European Glaucoma Society Foundation Roger Hitchings (Chair) Carlo E. Traverso (Vice Chair) Franz Grehn Anders Heijl John Thygesen Fotis Topouzis Thierry Zeyen The EGS Committees CME and Certification Gordana Sunaric Mégevand (Chair) Carlo Enrico Traverso (Co-chair) Delivery of Care Anton Hommer (Chair) EU Action Thierry Zeyen (Chair) Carlo E. Traverso (Co-chair) Education John Thygesen (Chair) Fotis Topouzis (Co-chair) Glaucogene Ananth Viswanathan (Chair) Fotis Topouzis (Co-chair) Industry Liaison Roger Hitchings (Chair) Information Technology Ingeborg Stalmans (Chair) Carlo E. 
Traverso (Co-chair) National Society Liaison Anders Heijl (Chair) Program Planning Fotis Topouzis (Chair) Ingeborg Stalmans (Co-chair) Quality and Outcomes Anja Tuulonen (Chair) Augusto Azuara Blanco (Co-chair) Scientific Franz Grehn (Chair) David Garway Heath (Co-chair) PMID:27378485
KSC Tech Transfer News, Volume 2, No. 2
NASA Technical Reports Server (NTRS)
Makufka, David (Editor); Dunn, Carol (Editor)
2009-01-01
This issue contains articles about: (1) the Innovative Partnerships Program (IPP) and the manager of the program, Alexis Hongamen, (2) a New Technology Report (NTR) on a Monte Carlo Simulation to Estimate the Likelihood of Direct Lightning Strikes, (3) Kennedy Space Center's Applied Physics Lab, (4) a virtual ruler that is used for many applications, (5) a portable device that finds low-level leaks, (6) a sun-shield that supports in-space cryogenic propellant storage, (7) lunar dust modeling software, (8) space-based monitoring of radiation damage to DNA, (9) the use of light-emitting diode (LED) arrays in a vegetable production system, (10) Dust Tolerant Intelligent Electrical Connection Systems, (11) Ice Detection Camera System Upgrade, (12) Repair Techniques for Composite Structures, (13) Cryogenic Orbital Testbed, and (14) copyright protection.
ProMC: Input-output data format for HEP applications using varint encoding
NASA Astrophysics Data System (ADS)
Chekanov, S. V.; May, E.; Strand, K.; Van Gemmeren, P.
2014-10-01
A new data format for Monte Carlo (MC) events, or any structured data, including experimental data, is discussed. The format is designed to store data in a compact binary form using variable-size integer encoding, as implemented in Google's Protocol Buffers package. This approach is implemented in the ProMC library, which produces smaller file sizes for MC records compared to the existing input-output libraries used in high-energy physics (HEP). Other important features of the proposed format are a separation of abstract data layouts from concrete programming implementations, self-description, and random access. Data stored in ProMC files can be written, read, and manipulated in a number of programming languages, such as C++, Java, Fortran, and Python.
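The variable-size integer ("varint") encoding that Protocol Buffers uses, and on which the format above builds, stores 7 bits of payload per byte with a continuation flag in the top bit, so small values occupy a single byte. A minimal sketch of the scheme (illustrative, not the ProMC code itself):

```python
def varint_encode(n: int) -> bytes:
    """Encode a non-negative integer as a Protocol Buffers varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F          # low 7 bits of payload
        n >>= 7
        if n:
            out.append(byte | 0x80)  # set top bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def varint_decode(data: bytes) -> int:
    """Decode a varint back to an integer."""
    result = 0
    for shift, byte in enumerate(data):
        result |= (byte & 0x7F) << (7 * shift)
        if not byte & 0x80:      # top bit clear: last byte
            break
    return result
```

For example, 1 encodes to a single byte while 300 needs two, which is why records dominated by small integers compress well under this scheme.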
Bayesian analysis of non-homogeneous Markov chains: application to mental health data.
Sung, Minje; Soyer, Refik; Nhan, Nguyen
2007-07-10
In this paper we present a formal treatment of non-homogeneous Markov chains by introducing a hierarchical Bayesian framework. Our work is motivated by the analysis of correlated categorical data which arise in the assessment of psychiatric treatment programs. In our development, we introduce a Markovian structure to describe the non-homogeneity of transition patterns. In doing so, we introduce a logistic regression set-up for Markov chains and incorporate covariates in our model. We present a Bayesian model using Markov chain Monte Carlo methods and develop inference procedures to address issues encountered in the analysis of data from psychiatric treatment programs. Our model and inference procedures are applied to real data from a psychiatric treatment study. Copyright 2006 John Wiley & Sons, Ltd.
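As a toy illustration of the non-homogeneous structure described above (invented coefficients, not the paper's model): the probability of staying in a state follows a logistic regression on a covariate, so the transition matrix changes with the covariate value.

```python
import math

def transition_matrix(x, beta=(-0.5, 1.2)):
    """Two-state chain whose 'stay' probability follows a logistic
    regression on covariate x, making transitions non-homogeneous."""
    p_stay = 1.0 / (1.0 + math.exp(-(beta[0] + beta[1] * x)))
    return [[p_stay, 1.0 - p_stay],
            [1.0 - p_stay, p_stay]]
```

Each row always sums to one, but the matrix itself varies with x, which is exactly what distinguishes a non-homogeneous chain from the fixed-matrix case.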
NASA Astrophysics Data System (ADS)
Carrete, Jesús; Vermeersch, Bjorn; Katre, Ankita; van Roekeghem, Ambroise; Wang, Tao; Madsen, Georg K. H.; Mingo, Natalio
2017-11-01
almaBTE is a software package that solves the space- and time-dependent Boltzmann transport equation for phonons, using only ab-initio calculated quantities as inputs. The program can predictively tackle phonon transport in bulk crystals and alloys, thin films, superlattices, and multiscale structures with size features in the nm–μm range. Among many other quantities, the program can output thermal conductances and effective thermal conductivities, space-resolved average temperature profiles, and heat-current distributions resolved in frequency and space. Its first-principles character makes almaBTE especially well suited to investigate novel materials and structures. This article gives an overview of the program structure and presents illustrative examples for some of its uses. PROGRAM SUMMARY Program Title: almaBTE Program Files doi: http://dx.doi.org/10.17632/8tfzwgtp73.1 Licensing provisions: Apache License, version 2.0 Programming language: C++ External routines/libraries: BOOST, MPI, Eigen, HDF5, spglib Nature of problem: Calculation of temperature profiles, thermal flux distributions and effective thermal conductivities in structured systems where heat is carried by phonons Solution method: Solution of linearized phonon Boltzmann transport equation, Variance-reduced Monte Carlo
NASA Astrophysics Data System (ADS)
Vienhage, Paul; Barcomb, Heather; Marshall, Karel; Black, William A.; Coons, Amanda; Tran, Hien T.; Nguyen, Tien M.; Guillen, Andy T.; Yoh, James; Kizer, Justin; Rogers, Blake A.
2017-05-01
The paper describes the MATLAB (MathWorks) programs that were developed during the REU workshop to implement The Aerospace Corporation's Unified Game-based Acquisition Framework and Advanced Game-based Mathematical Framework (UGAF-AGMF) and its associated War-Gaming Engine (WGE) models. Each game can be played from the perspective of the Department of Defense Acquisition Authority (DAA) or of an individual contractor (KTR). The programs also implement Aerospace's optimum "Program and Technical Baseline (PTB) and associated acquisition" strategy, which combines low Total Ownership Cost (TOC) with innovative designs while still meeting warfighter needs. The paper also describes the Bayesian Acquisition War-Gaming approach, which uses Monte Carlo simulations (a numerical technique for accounting for uncertainty in decision making) to simulate the PTB development and acquisition processes, and details the implementation procedure and the interactions between the games.
NASA Astrophysics Data System (ADS)
Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi
2018-03-01
Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSP), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and the time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented, and the effects of the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The impact of solar events on the DSP program memory is studied by combining data analysis with Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in DSP program memory, but the effect depends on the on-orbit particle density.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.
2014-11-01
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging −0.01.
The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
2013-07-01
also simulated in the models. Data were derived from calculations using MCNP (Monte Carlo N-Particle), a general-purpose, three-dimensional Monte Carlo radiation transport code designed to simulate neutron and photon transport.
3. Photographic copy of map. San Carlos Project, Arizona. Irrigation ...
3. Photographic copy of map. San Carlos Project, Arizona. Irrigation System. Department of the Interior. United States Indian Service. No date. Circa 1939. (Source: Henderson, Paul. U.S. Indian Irrigation Service. Supplemental Storage Reservoir, Gila River. November 10, 1939, RG 115, San Carlos Project, National Archives, Rocky Mountain Region, Denver, CO.) - San Carlos Irrigation Project, Lands North & South of Gila River, Coolidge, Pinal County, AZ
Moon, Hyun Ho; Lee, Jong Joo; Choi, Sang Yule; Cha, Jae Sang; Kang, Jang Mook; Kim, Jong Tae; Shin, Myong Chul
2011-01-01
Recently there have been many studies of power systems with a focus on "New and Renewable Energy" as part of the "New Growth Engine Industry" promoted by the Korean government. "New and Renewable Energy" (especially wind energy, solar energy, and fuel cells, which will replace conventional fossil fuels) is a part of the Power-IT Sector, which is the basis of the SmartGrid. A SmartGrid is a form of highly efficient, intelligent electricity network that allows interactivity (two-way communication) between suppliers and consumers by utilizing information technology in electricity production, transmission, distribution, and consumption. The New and Renewable Energy Program has been driven, through intensive studies by public and private institutions, by the goal of developing and spreading new and renewable energy sources which, unlike conventional systems, are operated through connections with various kinds of distributed power generation systems. Considerable research on smart grids has been pursued in the United States and Europe. In the United States, a variety of research activities on the smart power grid have been conducted within EPRI's IntelliGrid research program. The European Union (EU), which represents Europe's Smart Grid policy, has focused on an expansion of distributed (decentralized) generation and power trade between countries with improved environmental protection. Thus, there is current emphasis on the need for studies that assess the economic efficiency of such distributed generation systems. In this paper, based on the cost of distributed power generation capacity, the best obtainable profits were calculated by Monte Carlo simulation. Monte Carlo simulations, which rely on repeated random sampling to compute their results, take into account the cost of electricity production, daily loads, and the cost of sales, and produce results faster than closed-form mathematical computation.
In addition, we suggest an optimal design that considers the distribution losses associated with power distribution systems, with a focus on sensing and on distributed power generation. PMID:22164047
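The repeated-random-sampling profit calculation described above can be sketched in a few lines. This is a hedged toy (all prices, loads, and distributions are invented, not from the study): sample a daily load and a generation cost, compute the day's profit, and average over many draws.

```python
import random

def expected_daily_profit(n=10000, price=0.12, seed=3):
    """Monte Carlo estimate of mean daily profit for a distributed
    generator; distributions and numbers are purely illustrative."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        load_kwh = rng.gauss(500.0, 80.0)        # daily demand served
        cost_per_kwh = rng.uniform(0.05, 0.09)   # fuel + O&M cost
        total += load_kwh * (price - cost_per_kwh)
    return total / n
```

With these toy numbers the estimate settles near load × average margin (500 kWh × $0.05/kWh), and the same loop extends naturally to correlated loads or time-of-day pricing.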
NASA Astrophysics Data System (ADS)
Keen, A. S.; Lynett, P. J.; Ayca, A.
2016-12-01
Because of the damage resulting from the 2010 Chile and 2011 Japan tele-tsunamis, the tsunami risk to small craft marinas in California has become an important concern. The talk will outline a tool for assessing the tsunami hazard to small craft harbors. The methodology is based on the demand on, and structural capacity of, the floating dock system, composed of floating docks/fingers and moored vessels. The structural demand is determined using a Monte Carlo methodology: a probabilistic computational tool for problems where the governing equations may be well known, but the independent variables of the input (demand) as well as the resisting structural components (capacity) are not completely known. The Monte Carlo approach assigns a distribution to each variable, draws a random value from each distribution, and generates a single computation; the process then repeats hundreds or thousands of times. The numerical model "Method of Splitting Tsunamis" (MOST) has been used to determine the inputs for the small craft harbors within California. Hydrodynamic model results for current speed, direction, and surface elevation were incorporated via the drag equations to provide the basis of the demand term. To determine the capacities, an inspection program was developed to identify common features of structural components. A total of six harbors have been inspected, ranging from Crescent City in Northern California to Oceanside Harbor in Southern California. Results from the inspection program were used to develop component capacity tables, which incorporate the basic specifications of each component (e.g., bolt size and configuration) and a reduction factor (accounting for the component's reduction in capacity with age) to estimate in situ capacities. Like the demand term, these capacities are added probabilistically into the model. To date the model has been applied to Santa Cruz Harbor as well as Noyo River.
Once calibrated, the model was able to hindcast the damage produced in Santa Cruz Harbor during the 2010 Chile and 2011 Japan events. Results of the Santa Cruz analysis will be presented and discussed.
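The demand-versus-capacity Monte Carlo idea described above reduces, in its simplest form, to drawing a random load and a random component strength and counting how often load exceeds strength. The sketch below is purely illustrative (the distributions and parameters are invented, not taken from the harbor study):

```python
import random

def failure_probability(n=20000, seed=7):
    """Toy demand-vs-capacity Monte Carlo: fraction of draws in which
    a random load exceeds a random (aged) component capacity."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        demand = rng.lognormvariate(0.0, 0.4)   # hydrodynamic load on a component
        capacity = rng.gauss(2.0, 0.4)          # strength after age reduction
        if demand > capacity:
            failures += 1
    return failures / n
```

A real assessment replaces these placeholders with drag-equation loads from the hydrodynamic model and capacity tables from the inspection program, but the accept/count loop is the same.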
Operations analysis (study 2.1): Program manual and users guide for the LOVES computer code
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1975-01-01
This document provides the information necessary to use the LOVES computer program in its existing state, or to modify the program to include studies not properly handled by the basic model. The Users Guide defines the basic elements assembled together to form the model for servicing satellites in orbit. As the program is a simulation, the method of attack is to decompose the problem into a sequence of events, each occurring instantaneously and each creating one or more other events in the future. The main driving force of the simulation is the deterministic launch schedule of satellites and the subsequent failure of the various modules which make up the satellites. The LOVES computer program uses a random number generator to simulate the failure of module elements, and therefore operates over a long span of time, typically 10 to 15 years. The sequence of events is varied by making several runs in succession with different random numbers, resulting in a Monte Carlo technique for determining the statistical parameters of minimum, average, and maximum value.
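The replication scheme described above, repeated runs with different random numbers yielding minimum, average, and maximum values of a statistic, can be sketched as follows. Everything here is illustrative (module count, failure rate, and horizon are invented, not LOVES parameters):

```python
import random

def one_run(seed, n_modules=50, rate=0.1, horizon=15.0):
    """One simulation run: count modules whose exponential lifetime
    ends within the mission horizon (years)."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_modules)
               if rng.expovariate(rate) < horizon)

# Repeat with different seeds, then report min / average / max.
runs = [one_run(seed) for seed in range(30)]
lo, avg, hi = min(runs), sum(runs) / len(runs), max(runs)
```

The spread between `lo` and `hi` across replications is what gives the Monte Carlo bounds the text refers to, with `avg` as the central estimate.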
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10⁷ x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in highperformance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. 
Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
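The "sparse sampling" strategy discussed above can be sketched in a few lines: score the quantity of interest at a handful of detector pixels only, then fill in the rest by interpolation. The profile below is synthetic (a smooth Gaussian stand-in for a scatter profile), not output of a real simulation:

```python
import math

def interpolate(xs, ys, x):
    """Piecewise-linear interpolation between scored sample points."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside scored range")

# Score a smooth "scatter profile" at every 16th pixel, interpolate the rest.
pixels = list(range(0, 129, 16))
scores = [math.exp(-((p - 64) / 40.0) ** 2) for p in pixels]
estimate = interpolate(pixels, scores, 40)
```

Because scatter varies slowly across the detector, scoring 1 pixel in 16 cuts the number of tallied histories needed by roughly the same factor while the interpolated profile stays close to the dense one.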
Liquid Water from First Principles: Validation of Different Sampling Approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mundy, C J; Kuo, W; Siepmann, J
2004-05-20
A series of first-principles molecular dynamics and Monte Carlo simulations were carried out for liquid water to assess the validity and reproducibility of different sampling approaches. These simulations include Car-Parrinello molecular dynamics simulations using the program CPMD with different values of the fictitious electron mass in the microcanonical and canonical ensembles, Born-Oppenheimer molecular dynamics using the programs CPMD and CP2K in the microcanonical ensemble, and Metropolis Monte Carlo using CP2K in the canonical ensemble. With the exception of one simulation for 128 water molecules, all other simulations were carried out for systems consisting of 64 molecules. It is found that the structural and thermodynamic properties of these simulations are in excellent agreement with each other as long as adiabatic sampling is maintained in the Car-Parrinello molecular dynamics simulations, either by choosing a sufficiently small fictitious mass in the microcanonical ensemble or by Nosé-Hoover thermostats in the canonical ensemble. Using the Becke-Lee-Yang-Parr exchange and correlation energy functionals and norm-conserving Troullier-Martins or Goedecker-Teter-Hutter pseudopotentials, simulations at a fixed density of 1.0 g/cm³ and a temperature close to 315 K yield a height of the first peak in the oxygen-oxygen radial distribution function of about 3.0, a classical constant-volume heat capacity of about 70 J K⁻¹ mol⁻¹, and a self-diffusion constant of about 0.1 Å²/ps.
Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method
NASA Astrophysics Data System (ADS)
Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin
2015-07-01
The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs for photons and electrons was calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model represents most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate differences in internal dose relative to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was simulated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that SAFs from the Rad-HUMAN phantom have similar trends but are larger than those from the other two models. The differences are due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000), the National Natural Science Foundation of China (910266004, 11305205, 11305203) and the National Special Program for ITER (2014GB112001).
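A specific absorbed fraction is the fraction of emitted energy absorbed in a target region, divided by the target mass. The toy Monte Carlo below illustrates only that definition (hit and absorption probabilities and the organ mass are invented; a real calculation like the one above transports particles through voxelized anatomy with MCNP):

```python
import random

def toy_saf(n=50000, p_hit=0.08, p_absorb=0.6, target_mass_kg=0.31, seed=11):
    """Toy SAF: absorbed fraction in the target divided by target mass
    (units 1/kg), estimated by counting random emission outcomes."""
    rng = random.Random(seed)
    absorbed = sum(1 for _ in range(n)
                   if rng.random() < p_hit and rng.random() < p_absorb)
    return (absorbed / n) / target_mass_kg
```

Dividing by the target mass is what makes SAFs sensitive to the organ-mass and inter-organ-distance differences discussed above.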
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauza, Oriol Salto
This Ph.D. thesis presents the measurement of inclusive jet cross sections in Z/γ* → e⁺e⁻ events using 1.7 fb⁻¹ of data collected by the upgraded CDF detector during Run II of the Tevatron. The Midpoint cone algorithm is used to search for jets in the events after identifying the presence of a Z/γ* boson through the reconstruction of its decay products. The measurements are compared to next-to-leading-order (NLO) pQCD predictions for events with one and two jets in the final state. The perturbative predictions are corrected for the contributions of non-perturbative processes, such as the underlying event and the fragmentation of the partons into jets of hadrons. These processes are not described by perturbation theory and must be estimated from phenomenological models. In this thesis, a number of measurements are performed to test different models of the underlying event and hadronization implemented in LO-plus-parton-shower Monte Carlo generator programs. Chapter 2 is devoted to the description of the theory of strong interactions and jet phenomenology at hadron colliders. Chapter 3 contains the description of the Tevatron collider and the CDF detector. The analysis is described in detail in Chapter 4. Chapter 5 shows the measurement of observables sensitive to non-perturbative effects, compared to the predictions from several Monte Carlo programs. Chapter 6 discusses the final results and the comparison with theoretical expectations. Finally, Chapter 7 is devoted to the conclusions.
Fixed-node quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Anderson, James B.
Quantum Monte Carlo methods cannot at present provide exact solutions of the Schrödinger equation for systems with more than a few electrons. But, quantum Monte Carlo calculations can provide very low energy, highly accurate solutions for many systems ranging up to several hundred electrons. These systems include atoms such as Be and Fe, molecules such as H2O, CH4, and HF, and condensed materials such as solid N2 and solid silicon. The quantum Monte Carlo predictions of their energies and structures may not be `exact', but they are the best available. Most of the Monte Carlo calculations for these systems have been carried out using approximately correct fixed nodal hypersurfaces and they have come to be known as `fixed-node quantum Monte Carlo' calculations. In this paper we review these `fixed node' calculations and the accuracies they yield.
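The fixed-node idea can be illustrated in one dimension, where a node's location is known exactly. The toy sketch below (invented parameters; plain diffusion Monte Carlo without importance sampling, with naive end-of-step node enforcement, so it carries a time-step bias) treats a particle in a harmonic well with the node fixed at x = 0, so the walker population relaxes to the first excited state, whose exact energy is 1.5 in units of hbar*omega:

```python
import numpy as np

rng = np.random.default_rng(5)

dt, n0 = 0.01, 4000
walkers = np.abs(rng.standard_normal(n0)) + 0.5   # start strictly inside x > 0
e_ref = 1.5                                       # trial energy (hbar*omega units)
energies = []

for step in range(3000):
    # diffuse, then enforce the fixed node at x = 0 by absorbing crossers
    walkers = walkers + np.sqrt(dt) * rng.standard_normal(len(walkers))
    walkers = walkers[walkers > 0.0]
    # branch according to the local potential V(x) = x^2 / 2
    w = np.exp(-dt * (0.5 * walkers**2 - e_ref))
    copies = (w + rng.random(len(w))).astype(int)
    walkers = np.repeat(walkers, copies)
    # population control: nudge the trial energy toward the growth energy
    e_ref += 0.01 * np.log(n0 / max(len(walkers), 1))
    if step >= 1500:
        energies.append(e_ref)

print(f"fixed-node DMC energy: {np.mean(energies):.2f} (exact first excited state: 1.5)")
```

Production fixed-node calculations add importance sampling with a trial wave function and a mixed energy estimator; the skeleton of diffusion, node enforcement, branching, and population control is the same.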
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo codes for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Markov chain Monte Carlo linkage analysis: effect of bin width on the probability of linkage.
Slager, S L; Juo, S H; Durner, M; Hodge, S E
2001-01-01
We analyzed part of the Genetic Analysis Workshop (GAW) 12 simulated data using Markov chain Monte Carlo (MCMC) methods as implemented in the computer program Loki. The MCMC method reports the "probability of linkage" (PL) across the chromosomal regions of interest. The point of maximum PL can then be taken as a "location estimate" for the quantitative trait locus (QTL). However, Loki does not provide a formal statistical test of linkage. In this paper, we explore how the bin width used in the calculations affects the maximum PL and the location estimate. We analyzed age at onset (AO) and quantitative trait number 5, Q5, from 26 replicates of the general simulated data in one region where we knew a major gene, MG5, is located. For each trait, we found the maximum PL and the corresponding location estimate using four different bin widths. We found that bin width, as expected, does affect the maximum PL and the location estimate, and we recommend that users of Loki explore how their results vary with different bin widths.
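The bin-width effect is easy to reproduce with a toy posterior. In the sketch below (all numbers are invented for illustration; Loki's actual PL computation is far more involved), MCMC-style samples of a QTL location are histogrammed with several bin widths, and both the location estimate (the modal bin centre) and the maximal bin mass shift with the width:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical MCMC samples of a QTL location in cM (invented toy posterior)
samples = rng.normal(loc=50.0, scale=8.0, size=5000)

def location_estimate(samples, bin_width):
    """Histogram the samples and return (modal bin centre, its probability mass)."""
    bins = np.arange(0.0, 100.0 + bin_width, bin_width)
    counts, edges = np.histogram(samples, bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1]), counts[i] / len(samples)

for w in (1.0, 2.0, 5.0, 10.0):
    loc, mass = location_estimate(samples, w)
    print(f"bin width {w:5.1f} cM -> location {loc:5.1f} cM, max bin mass {mass:.3f}")
```

Wider bins accumulate more mass per bin and coarsen the location estimate, which is exactly why the abstract recommends checking sensitivity to bin width.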
NASA Technical Reports Server (NTRS)
Wilson, Thomas L.; Pinsky, Lawrence; Andersen, Victor; Empl, Anton; Lee, Kerry; Smirmov, Georgi; Zapp, Neal; Ferrari, Alfredo; Tsoulou, Katerina; Roesler, Stefan;
2005-01-01
Simulating the space radiation environment with Monte Carlo codes such as FLUKA requires the ability to model the interactions of heavy ions as they penetrate spacecraft and crew members' bodies. Monte-Carlo-type transport codes use total interaction cross sections to determine probabilistically when a particular type of interaction has occurred; at that point, a distinct event generator is employed to determine separately the results of that interaction. The space radiation environment contains a full spectrum of radiation types, including relativistic nuclei, which are the most important component for the evaluation of crew doses. Interactions between incident protons and target nuclei in the spacecraft materials and crew members' bodies are well understood. However, the situation is substantially less satisfactory for incident heavier nuclei (heavy ions). We have been engaged in developing several related heavy-ion interaction models based on a Quantum Molecular Dynamics-type approach for energies up through about 5 GeV per nucleon (GeV/A), as part of a NASA Consortium that includes a parallel program of cross section measurements to guide and verify this code development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles A. Wemple; Joshua J. Cogliati
2005-04-01
A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN.
Modeling leaching of viruses by the Monte Carlo method.
Faulkner, Barton R; Lyon, William G; Khan, Faruque A; Chattopadhyay, Sandip
2003-11-01
A predictive screening model was developed for the fate and transport of viruses in the unsaturated zone by applying the final value theorem of Laplace transformation to previously developed governing equations. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very small but finite probabilities of failure for all homogeneous USDA-classified soils to attenuate reovirus 3 by 99.99% in one-half meter of gravity drainage. The logarithm of saturated hydraulic conductivity and the water to air-water interface mass transfer coefficient affected virus fate and transport about 3 times more than any other parameter, including the logarithm of the inactivation rate of suspended viruses. Model results suggest that extreme infiltration events may play a predominant role in the leaching of viruses in soils, since such events could impact hydraulic conductivity. The air-water interface also appears to play a predominant role in virus transport and fate. Although predictive modeling may provide insight into actual attenuation of viruses, hydrogeologic sensitivity assessments for the unsaturated zone should include a sampling program.
NASA Astrophysics Data System (ADS)
Mazzaracchio, Antonio; Marchetti, Mario
2010-03-01
Implicit ablation and thermal response software was developed to analyse and size charring ablative thermal protection systems for entry vehicles. A statistical monitor integrated into the tool, which uses the Monte Carlo technique, allows a simulation to be run over stochastic input series. This provides an uncertainty and sensitivity analysis that estimates the probability of maintaining the temperature of the underlying material within specified requirements. This approach and the associated software are primarily helpful during the preliminary design phases of spacecraft thermal protection systems. They are proposed as an alternative to traditional approaches, such as the Root-Sum-Square method. The developed tool was verified by comparing its results with those from previous work on thermal protection system probabilistic sizing methodologies, which are based on an industry-standard high-fidelity ablation and thermal response program. New case studies were analysed to establish thickness margins for sizing heat shields currently proposed for vehicles using rigid aeroshells in future aerocapture missions at Neptune, and to identify the major sources of uncertainty in the material response.
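The contrast between Monte Carlo propagation and the Root-Sum-Square method can be sketched on a toy model. The surrogate below (its functional form, nominal values and uncertainties are all invented for illustration, not a real ablation model) pushes three uncertain inputs through a simple temperature-rise expression, first by direct sampling and then by RSS combination of linearised sensitivities:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Toy surrogate: bondline temperature rise from heat flux q, thickness t,
# and conductivity k (illustrative only)
def bondline_rise(q, t, k):
    return q * t / k

q0, t0, k0 = 2.0e4, 0.01, 5.0       # nominal values (W/m^2, m, W/(m K))
sq, st, sk = 1.0e3, 5.0e-4, 0.25    # assumed 1-sigma uncertainties

# Monte Carlo propagation
q = rng.normal(q0, sq, N)
t = rng.normal(t0, st, N)
k = rng.normal(k0, sk, N)
rise = bondline_rise(q, t, k)
mc_mean, mc_std = rise.mean(), rise.std()

# Root-Sum-Square: combine linearised sensitivities at the nominal point
rss_std = np.sqrt((t0 / k0 * sq)**2 + (q0 / k0 * st)**2
                  + (q0 * t0 / k0**2 * sk)**2)

print(f"MC : {mc_mean:.1f} +/- {mc_std:.2f} K")
print(f"RSS: {q0 * t0 / k0:.1f} +/- {rss_std:.2f} K")
# Unlike RSS, the Monte Carlo samples also give tail probabilities directly,
# i.e. the probability of violating a temperature requirement:
print(f"P(rise > 45 K) = {(rise > 45.0).mean():.4f}")
```

For a near-linear model the two spreads agree closely; the Monte Carlo approach additionally yields the exceedance probability that drives probabilistic margin setting.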
Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System
NASA Astrophysics Data System (ADS)
Bethmann, F.; Luhmann, T.
2012-07-01
The paper describes a simulation-based optimization of an optical tracking system used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to the miniaturization, a single-camera approach has been designed. Single-camera techniques for 6DOF measurements show a particular sensitivity to weak geometric configurations between camera and object. In addition, the achievable accuracy depends significantly on the geometric properties of the tracked objects (locators). Besides the quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, the development and investigation of a simulation program are presented which allow for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered, as well as different scenarios representing the operational use of the system. Measurement deviations are estimated based on the Monte Carlo method. Practical measurements validate the correctness of the numerical simulation results.
NASA Astrophysics Data System (ADS)
Marashdeh, Mohammad W.; Al-Hamarneh, Ibrahim F.; Abdel Munem, Eid M.; Tajuddin, A. A.; Ariffin, Alawiah; Al-Omari, Saleh
Rhizophora spp. wood has the potential to serve as a solid water- or tissue-equivalent phantom for photon and electron beam dosimetry. In this study, the effective atomic number (Zeff) and effective electron density (Neff) of raw wood and binderless Rhizophora spp. particleboards in four different particle sizes were determined in the 10-60 keV energy region. The mass attenuation coefficients used in the calculations were obtained with the Monte Carlo N-Particle (MCNP5) simulation code. The MCNP5-calculated attenuation parameters for the Rhizophora spp. samples were plotted against photon energy and discussed in terms of their relative differences from those of water and breast tissue. Moreover, the validity of the MCNP5 code was examined by comparing the calculated attenuation parameters with the theoretical values obtained with the XCOM program based on the mixture rule. The results indicate that the MCNP5 approach can be used to determine the attenuation of gamma rays at several photon energies in other materials.
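As a quick cross-check for such results, the classical Mayneord power-law rule gives an energy-independent estimate of Z_eff from elemental composition alone. This is a much cruder approach than the attenuation-coefficient method used in the study, but it recovers the familiar value for water:

```python
# Mayneord power-law rule: Z_eff = (sum_i a_i * Z_i^2.94)^(1/2.94),
# where a_i is the fractional number of electrons contributed by element i.
def z_eff(elements):
    """elements: list of (atomic number Z, atom count) pairs for one molecule."""
    electrons = [(Z, Z * n) for Z, n in elements]
    total = sum(e for _, e in electrons)
    s = sum((e / total) * Z**2.94 for Z, e in electrons)
    return s ** (1 / 2.94)

print(f"Z_eff of water (H2O): {z_eff([(1, 2), (8, 1)]):.2f}")
```

The result, about 7.4, matches the textbook effective atomic number of water; applying the same rule to the measured composition of Rhizophora spp. wood would give a comparable first-order check on tissue equivalence.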
NASA Astrophysics Data System (ADS)
Tubman, Norm; Whaley, Birgitta
The development of exponential-scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground-state and excited-state simulations, and present several applications to strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
Designing new guides and instruments using McStas
NASA Astrophysics Data System (ADS)
Farhi, E.; Hansen, T.; Wildes, A.; Ghosh, R.; Lefmann, K.
With the increasing complexity of modern neutron-scattering instruments, the need for powerful tools to optimize their geometry and physical performance (flux, resolution, divergence, etc.) has become essential. As the usual analytical methods reach their limit of validity in the description of fine effects, the use of Monte Carlo simulations, which can handle such effects, has become widespread. The McStas program was developed at Risø National Laboratory in order to provide neutron-scattering instrument scientists with an efficient and flexible tool for building Monte Carlo simulations of guides, neutron optics and instruments [1]. To date, the McStas package has been extensively used at the Institut Laue-Langevin, Grenoble, France, for various studies including cold and thermal guides with ballistic geometry, diffractometers, triple-axis, backscattering and time-of-flight spectrometers [2]. In this paper, we present simulation results for different guide geometries that may be used in the future at the Institut Laue-Langevin. Gain factors ranging from two to five may be obtained for the integrated intensities, depending on the exact geometry, the guide coatings and the source.
Status of the Space Radiation Monte Carlo Simulation Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Andersen, Victor; Carminati, Federico; Empl, Anton; Ferrari, Alfredo; Pinsky, Lawrence; Sala, Paola; Wilson, Thomas L.
2002-01-01
The NASA-funded project, reported on at the first IWSSRR in Arona, to develop a Monte Carlo simulation program for the space radiation environment based on the FLUKA and ROOT codes is well into its second year of development, and considerable progress has been made. The general tasks required to achieve the final goals include the addition of heavy-ion interactions into the FLUKA code and the provision of a ROOT-based interface to FLUKA. The most significant progress to date includes the incorporation of the DPMJET event generator code within FLUKA to handle heavy-ion interactions for incident projectile energies greater than 3 GeV/A. The ongoing effort intends to extend the treatment of these interactions down to 10 MeV, and at present two alternative approaches are being explored. The ROOT interface is being pursued in conjunction with the CERN LHC ALICE software team through an adaptation of their existing AliROOT software. As a check on the validity of the code, a simulation of recent data taken by the ATIC experiment is underway.
Prompt Radiation Protection Factors
2018-02-01
dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle) and the evaluation of the protection factors (ratio of dose in the open to... by detonation of a nuclear device have placed renewed emphasis on evaluation of the consequences in case of such an event. The Defense Threat
Monte Carlo modeling of spatial coherence: free-space diffraction
Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.
2008-01-01
We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
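The core task here, synthesizing random field realizations whose ensemble statistics match a prescribed spatial coherence function, can be sketched with plain Cholesky colouring of circular complex Gaussian noise. This is a simpler stand-in for the paper's Gaussian-copula construction, and the Gaussian Schell-model coherence width below is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_real, lc = 64, 2000, 0.3
x = np.linspace(-1.0, 1.0, n)

# Target degree of coherence: Gaussian Schell-model source (assumed form)
mu = np.exp(-(x[:, None] - x[None, :])**2 / (2 * lc**2))

# Colour complex circular Gaussian noise with the Cholesky factor of mu
# (small jitter keeps the near-singular Gaussian kernel positive definite)
L = np.linalg.cholesky(mu + 1e-6 * np.eye(n))
g = (rng.standard_normal((n, n_real)) + 1j * rng.standard_normal((n, n_real))) / np.sqrt(2)
E = L @ g

# Empirical mutual intensity of the ensemble vs. the target
J = (E @ E.conj().T) / n_real
err = np.abs(J - mu).max()
print(f"max deviation from target coherence: {err:.3f}")
```

Each column of `E` is one field realization; propagating the realizations one at a time and averaging the resulting intensities is the Monte Carlo step that replaces a direct (and much more expensive) cross-spectral density propagation.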
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. The Monte Carlo formulations have been validated by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
NASA Astrophysics Data System (ADS)
Crevillén-García, D.; Power, H.
2017-08-01
In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo, to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions; the random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
NASA Astrophysics Data System (ADS)
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to implement a continuous energy Monte Carlo neutron transport algorithm efficiently on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access.
The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU "thread blocks," keeps the SIMD units as full as possible, and eliminates spending memory bandwidth on checking whether a neutron in the batch has been terminated. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes, where the GPU can effectively hide the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices: the first contains cross section values, the second contains pointers to angular distributions, and the third contains pointers to energy distributions. This linked-list style of layout increases memory usage, but lowers the number of data loads needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA Data Parallel Primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums; the CURAND library is used to seed the linear congruential random number generators; and the OptiX ray tracing framework is used for geometry representation. OptiX is a highly optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry, so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon-like algorithm.
WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP has limited capabilities. Despite WARP's lack of features, its novel algorithm implementations show that high performance can be achieved on a GPU despite the inherently divergent program flow and sparse data access patterns. WARP is not ready for everyday nuclear reactor calculations, but is a good platform for further development of GPU-accelerated Monte Carlo neutron transport. In its current state, it may be a useful tool for multiplication factor searches, i.e., determining reactivity coefficients by perturbing material densities or temperatures, since these types of calculations typically do not require many flux tallies. (Abstract shortened by UMI.)
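The remapping-vector bookkeeping described in this abstract can be sketched in a few lines. Here NumPy's stable argsort on a CPU stands in for WARP's parallel radix sort, and the reaction codes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16

# Invented reaction codes: 0 = history complete, 1 = scatter, 2 = fission, 3 = capture
rxn = rng.integers(0, 4, n)
remap = np.arange(n)               # remapping vector: indices into the particle data

# Sort the remapping vector by reaction code (stand-in for the parallel radix sort);
# the particle data arrays themselves are never moved
order = np.argsort(rxn[remap], kind="stable")
remap = remap[order]

# Completed histories (code 0) are now contiguous at the front and can be dropped
# from the transport cycle without touching the particle data
first_active = np.searchsorted(rxn[remap], 1)
remap = remap[first_active:]
print("active histories, grouped by reaction type:", rxn[remap])
```

Because only the small index vector is sorted, threads in the same warp end up processing the same reaction type back-to-back, which is the divergence reduction the abstract describes.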
CGRO Guest Investigator Program
NASA Technical Reports Server (NTRS)
Begelman, Mitchell C.
1997-01-01
The following are highlights from the research supported by this grant: (1) Theory of gamma-ray blazars: We studied the theory of gamma-ray blazars, being among the first investigators to propose that the GeV emission arises from Comptonization of diffuse radiation surrounding the jet, rather than from the synchrotron-self-Compton mechanism. In related work, we uncovered possible connections between the mechanisms of gamma-ray blazars and those of intraday radio variability, and conducted a general study of the role of Compton radiation drag on the dynamics of relativistic jets. (2) A nonlinear Monte Carlo code for gamma-ray spectrum formation: We developed, tested, and applied the first nonlinear Monte Carlo (NLMC) code for simulating gamma-ray production and transfer under much more general (and realistic) conditions than are accessible with other techniques. The present version of the code is designed to simulate conditions thought to be present in active galactic nuclei and certain types of X-ray binaries, and includes the physics needed to model thermal and nonthermal electron-positron pair cascades. Unlike traditional Monte Carlo techniques, our method can accurately handle highly nonlinear systems in which the radiation and particle backgrounds must be determined self-consistently and in which the particle energies span many orders of magnitude. Unlike models based on kinetic equations, our code can handle arbitrary source geometries and relativistic kinematic effects. In its first important application following testing, we showed that popular semi-analytic accretion disk corona models for Seyfert spectra are seriously in error, and demonstrated how the spectra can be simulated if the disk is sparsely covered by localized 'flares'.
Transient thermal modeling of the nonscanning ERBE detector
NASA Technical Reports Server (NTRS)
Mahan, J. R.
1983-01-01
A numerical model to predict the transient thermal response of the ERBE nonscanning wide field of view total radiometer channel was developed. The model, which uses Monte Carlo techniques to characterize the radiative component of heat transfer, is described and a listing of the computer program is provided. Application of the model to simulate the actual blackbody calibration procedure is discussed. The use of the model to establish a real time flight data interpretation strategy is recommended. Modification of the model to include a simulated Earth radiation source field and a filter dome is indicated.
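The radiative component of such a model typically reduces to Monte Carlo estimation of radiative exchange (view) factors by ray tracing. A minimal sketch for two directly opposed unit squares (a geometry chosen for its tabulated answer, not the ERBE cavity) emits cosine-distributed rays from one surface and counts hits on the other:

```python
import math
import random

random.seed(1)

def view_factor_mc(h, n):
    """MC estimate of the view factor from a unit square at z = 0 to an
    identical, directly opposed unit square at z = h."""
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()        # emission point
        theta = math.asin(math.sqrt(random.random()))  # cosine-law polar angle
        phi = 2.0 * math.pi * random.random()
        t = h / math.cos(theta)                        # ray length to the plane z = h
        xi = x + t * math.sin(theta) * math.cos(phi)
        yi = y + t * math.sin(theta) * math.sin(phi)
        if 0.0 <= xi <= 1.0 and 0.0 <= yi <= 1.0:
            hits += 1
    return hits / n

F = view_factor_mc(1.0, 100_000)
print(f"Monte Carlo view factor: {F:.4f} (tabulated value is about 0.1998)")
```

The same emit-trace-count pattern, applied surface pair by surface pair inside the radiometer cavity, is what characterizes the radiative component of the heat transfer in a model like the one described above.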
NASA Astrophysics Data System (ADS)
Morozov, A.; Krücken, R.; Ulrich, A.; Wieser, J.
2006-11-01
Side-view intensity profiles of fluorescent light were measured for neon and nitrogen excited with 12 keV electron beams at gas pressures from 250 to 1400 hPa. The intensity profiles were compared with theoretical profiles calculated using the CASINO program, which performs Monte Carlo simulations of electron scattering. It was assumed that the spatial distribution of fluorescent intensity is directly proportional to the spatial distribution of energy loss by the primary electrons. The comparison shows good agreement between the experimental data and the results of the numerical simulations.
The GlueX central drift chamber: Design and performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Haarlem, Y; Barbosa, F; Dey, B
2010-10-01
Tests and studies concerning the design and performance of the GlueX Central Drift Chamber (CDC) are presented. A full-scale prototype was built to test and steer the mechanical and electronic design. Small scale prototypes were constructed to test for sagging and to do timing and resolution studies of the detector. These studies were used to choose the gas mixture and to program a Monte Carlo simulation that can predict the detector response in an external magnetic field. Particle identification and charge division possibilities were also investigated.
Model Comparisons For Space Solar Cell End-Of-Life Calculations
NASA Astrophysics Data System (ADS)
Messenger, Scott; Jackson, Eric; Warner, Jeffrey; Walters, Robert; Evans, Hugh; Heynderickx, Daniel
2011-10-01
Space solar cell end-of-life (EOL) calculations are performed over a wide range of space radiation environments for GaAs-based single- and multijunction solar cell technologies. Two general semi-empirical approaches are used to generate these EOL results: 1) the JPL equivalent fluence method (EQFLUX) and 2) the NRL displacement damage dose method (SCREAM). This paper also includes the first results using the Monte Carlo-based version of SCREAM, called MC-SCREAM, which is now freely available online as part of the SPENVIS suite of programs.
Quantum Gibbs ensemble Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it; Moroni, Saverio, E-mail: moroni@democritos.it
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review and MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I, II and III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state of the art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output.
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
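The kind of starter exercise described above might look like the following sketch (not part of the class materials; the function name and parameters are illustrative): free paths are sampled from the exponential distribution s = -ln(u)/μt, and the transmitted fraction through a purely absorbing slab is compared with the analytic answer exp(-μt·x).

```python
import math
import random

def transmission_mc(mu_t, thickness, n_particles, seed=0):
    """Estimate the fraction of particles crossing a purely absorbing slab."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        # Sample a free path from the exponential distribution: s = -ln(u)/mu_t.
        # (1 - random() lies in (0, 1], so log() never sees zero.)
        s = -math.log(1.0 - rng.random()) / mu_t
        if s > thickness:
            transmitted += 1
    return transmitted / n_particles

est = transmission_mc(mu_t=1.0, thickness=2.0, n_particles=100_000)
print(est)  # close to the analytic value exp(-2) ≈ 0.135
```

Comparing the estimate against the known exponential attenuation law is exactly the sort of sanity check such a first exercise teaches.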
International Program and Local Organizing Committees
NASA Astrophysics Data System (ADS)
2012-12-01
International Program Committee Dionisio Bermejo (Spain) Roman Ciurylo (Poland) Elisabeth Dalimier (France) Alexander Devdariani (Russia) Milan S Dimitrijevic (Serbia) Robert Gamache (USA) Marco A Gigosos (Spain) Motoshi Goto (Japan) Magnus Gustafsson (Sweden) Jean-Michel Hartmann (France) Carlos Iglesias (USA) John Kielkopf (USA) John C Lewis (Canada) Valery Lisitsa (Russia) Eugene Oks (USA) Christian G Parigger (USA) Gillian Peach (UK) Adriana Predoi-Cross (Canada) Roland Stamm (Germany) Local Organizing Committee Nikolay G Skvortsov (Chair, St Petersburg State University) Evgenii B Aleksandrov (Ioffe Physico-Technical Institute, St Petersburg) Vadim A Alekseev (Scientific Secretary, St Petersburg State University) Sergey F Boureiko (St.Petersburg State University) Yury N Gnedin (Pulkovo Observatory, St Petersburg) Alexander Z Devdariani (Deputy Chair, St Petersburg State University) Alexander P Kouzov (Deputy Chair, St Petersburg State University) Nikolay A Timofeev (St Petersburg State University)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1979-10-01
The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets are analyzed, including geochemical data from the National Uranium Resource Evaluation Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudhyadhom, A; McGuinness, C; Descovich, M
Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match with beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom versus a Ray-Tracing calculation in a single-beam, collimator-by-collimator calculation. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with film insert. Separately, plans were delivered in an in-house created lung phantom with a PinPoint chamber insert within a lung simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small size collimators (10, 12.5, and 15mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2×2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom show high correspondence of over 95% gamma at the 2%/2mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10mm collimator resulted in a dose difference of ∼8% for both Monte-Carlo and Ray-Tracing, indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10mm fields and smaller.
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typically slow O(1/√N) convergence rates as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
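The low-discrepancy idea can be illustrated with a small quadrature sketch (not the authors' τ-leaping code; the integrand and sample sizes are arbitrary): replacing pseudorandom inputs with a base-2 van der Corput sequence sharply reduces the quadrature error for the same number of samples.

```python
import random

def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput low-discrepancy sequence."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, rem = divmod(n, base)
        q += rem * bk
        bk /= base
    return q

def estimate(f, points):
    """Equal-weight quadrature: average of f over the sample points."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x          # integral over [0, 1] is exactly 1/3
N = 4096
rng = random.Random(1)
mc  = estimate(f, [rng.random() for _ in range(N)])        # pseudorandom stream
qmc = estimate(f, [van_der_corput(k) for k in range(N)])   # low-discrepancy stream
print(abs(mc - 1/3), abs(qmc - 1/3))
```

With N a power of two, the van der Corput points tile [0, 1) evenly, so the quasi-Monte Carlo error decays roughly like 1/N rather than the 1/√N of the pseudorandom estimate.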
Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
75 FR 53332 - San Carlos Irrigation Project, Arizona
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
... DEPARTMENT OF THE INTERIOR Bureau of Reclamation San Carlos Irrigation Project, Arizona AGENCY..., as amended, on the rehabilitation of San Carlos Irrigation Project (SCIP) water delivery facilities... convey irrigation water from the Gila River and Central Arizona Project (CAP) to agricultural lands in...
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Meana-Pañeda, Rubén; Truhlar, Donald G.
2013-08-01
We present an improved version of the MSTor program package, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsions; the method is based on either a coupled torsional potential or an uncoupled torsional potential. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes seven utility codes that can be used as stand-alone programs to calculate reduced moment of inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files for the MSTor calculation and Voronoi calculation, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multitorsional problems for which one can afford to calculate all the conformational structures and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. 
The program package also includes the hull program for the calculation of Voronoi volumes, the symmetry program for determining point group symmetry of a molecule, and seven utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes of the torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 26 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 s. References: [1] MS-T(C) method: Quantum Thermochemistry: Multi-Structural Method with Torsional Anharmonicity Based on a Coupled Torsional Potential, J. Zheng and D.G. Truhlar, Journal of Chemical Theory and Computation 9 (2013) 1356-1367, DOI: http://dx.doi.org/10.1021/ct3010722. [2] MS-T(U) method: Practical Methods for Including Torsional Anharmonicity in Thermochemical Calculations of Complex Molecules: The Internal-Coordinate Multi-Structural Approximation, J. Zheng, T. Yu, E. Papajak, I. M. Alecu, S.L. Mielke, and D.G. Truhlar, Physical Chemistry Chemical Physics 13 (2011) 10885-10907.
Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.
Serebrinsky, Santiago A
2011-03-01
We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
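A minimal sketch of the rejection-free (BKL-style) scheme with its physical time scale, assuming a toy first-order decay process rather than the general class of algorithms treated in the paper (the function name and rates are illustrative):

```python
import math
import random

def kmc_decay(n0, lam, seed=0):
    """Rejection-free kinetic Monte Carlo for first-order decay of n0 particles.

    At each step the total rate is R = lam * n, one event fires, and physical
    time advances by dt = -ln(u) / R, which puts Monte Carlo steps on the
    clock of the underlying continuous-time Markov chain.
    """
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 0:
        total_rate = lam * n
        t += -math.log(1.0 - rng.random()) / total_rate  # physical time increment
        n -= 1  # for pure decay, every event is a single decay
    return t

# The mean extinction time is (1/lam) * H_n0 (the n0-th harmonic number),
# which for n0 = 100, lam = 1 is about 5.19 in physical time units.
times = [kmc_decay(100, 1.0, seed=s) for s in range(200)]
mean_t = sum(times) / len(times)
print(mean_t)
```

The point of the sketch is that the simulation clock advances by exponentially distributed physical-time increments, not by counting Monte Carlo steps.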
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head and neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by a factor of up to 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
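The Russian roulette step mentioned above can be sketched generically (a textbook version with made-up weights and thresholds, not the MCSIM implementation): particles whose statistical weight falls below a cutoff are either terminated or kept with their weight boosted, so the expected total weight, and hence the tally, stays unbiased.

```python
import random

def russian_roulette(weights, w_min=0.1, p_survive=0.5, seed=0):
    """Variance-reduction sketch: cull low-weight particles without bias.

    Particles with weight below w_min survive with probability p_survive,
    and survivors have their weight divided by p_survive so that the
    expected total weight of the population is unchanged.
    """
    rng = random.Random(seed)
    out = []
    for w in weights:
        if w >= w_min:
            out.append(w)                  # heavy particles always survive
        elif rng.random() < p_survive:
            out.append(w / p_survive)      # boost weight to keep the game fair
        # else: particle killed, contributes nothing further
    return out

pop = [0.05] * 10000 + [1.0] * 100
survivors = russian_roulette(pop)
# Unbiasedness check: total weight is preserved in expectation
print(sum(pop), sum(survivors))
```

The payoff is that far fewer low-weight particles need to be tracked, while the tally remains statistically correct.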
Monte Carlo Transport for Electron Thermal Transport
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2015-11-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.
[The cultural psychiatry in Latin America].
Villaseñor-Bayardo, Sergio J; Rojas-Malpica, Carlos; Aceves-Pulido, Martha P
2014-01-01
This paper presents only some of the most important contributions to the development of cultural psychiatry in Latin America. The continental efforts to understand the role that culture plays in the manifestation and treatment of mental disorders have been fruitful. The authors included are: Fernando Pagés of Argentina; Mario G. Hollweg of Bolivia; Rubim Alvaro de Pinho and Adalberto Barreto of Brazil; Carlos A. León and Carlos A. Uribe of Colombia; Antonio José A. Bustamante and Santa Cruz of Cuba; Carlos León Andrade of Ecuador; Cristina Chávez of Guatemala; Sergio J. Villaseñor-Bayardo of Mexico; Carlos A. Seguín, Hermilio Valdizán and Javier Mariátegui of Peru; Y. Bespaldi de Consens of Uruguay; and Carlos Rojas-Malpica and Jacqueline Briceño Clarac of Venezuela.
Advanced Computational Methods for Monte Carlo Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.
NASA Astrophysics Data System (ADS)
Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin
2017-07-01
The Monte Carlo method is often referred to as the gold standard for calculating light propagation in turbid media [1]. Especially for complex-shaped geometries, where no analytical solutions are available, the Monte Carlo method becomes very important [1, 2]. In this work a Monte Carlo software package is presented to simulate light propagation in complex-shaped geometries. To improve the simulation time the code is based on OpenCL, such that graphics cards can be used as well as other computing devices. Within the software an illumination concept is presented to easily realize all kinds of light sources, such as spatial frequency domain (SFD) illumination, optical fibers, or Gaussian beam profiles. Moreover, different objects that are not connected to each other can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.
Patti, Alessandro; Cuetos, Alejandro
2012-07-01
We report on the diffusion of purely repulsive and freely rotating colloidal rods in the isotropic, nematic, and smectic liquid crystal phases to probe the agreement between Brownian and Monte Carlo dynamics under the most general conditions. By properly rescaling the Monte Carlo time step, which is related to each elementary move via the corresponding self-diffusion coefficient, with the acceptance rate of simultaneous trial displacements and rotations, we demonstrate the existence of a unique Monte Carlo time scale that allows for a direct comparison between Monte Carlo and Brownian dynamics simulations. To estimate the validity of our theoretical approach, we compare the mean square displacement of rods, their orientational autocorrelation function, and the self-intermediate scattering function, as obtained from Brownian dynamics and Monte Carlo simulations. The agreement between the results of these two approaches, even under the condition of heterogeneous dynamics generally observed in liquid crystalline phases, is excellent.
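The time-rescaling idea can be illustrated with a deliberately simple sketch (free, non-interacting walkers in 1D, so the acceptance rate is unity; all parameters are illustrative, not the rod systems of the paper): the Monte Carlo time step follows from the second moment of the trial displacement, and the target diffusion coefficient is then recovered from the mean square displacement.

```python
import random

def mc_diffusion_msd(delta, n_steps, n_walkers, seed=0):
    """Mean square displacement of 1D Monte Carlo 'dynamics': uniform trial
    displacements in [-delta, delta], all accepted (no interactions)."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += rng.uniform(-delta, delta)
        msd += x * x
    return msd / n_walkers

D = 1.0                        # target Brownian diffusion coefficient
delta = 0.1
# Per-move <dx^2> = delta^2 / 3; matching <dx^2> = 2 D dt gives the MC time step.
# (For interacting systems the paper additionally rescales by the acceptance rate.)
dt_mc = delta**2 / (6.0 * D)
n = 1000
msd = mc_diffusion_msd(delta, n, n_walkers=2000)
D_est = msd / (2.0 * n * dt_mc)
print(D_est)  # recovers D ≈ 1.0 within sampling noise
```

The sketch is circular by construction, but it makes explicit how a Monte Carlo "step" acquires a physical time unit through the trial-move statistics.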
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurosu, K; Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka; Takashina, M
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) have not been reported for the GATE and PHITS codes; here they are studied for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics models, particle transport mechanics, and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation.
This study was supported by Grants-in Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health, Labor and Welfare of Japan, Grants-in-Aid for Scientific Research (No. 23791419), and JSPS Core-to-Core program (No. 23003). The authors have no conflict of interest.
Summarizing Monte Carlo Results in Methodological Research.
ERIC Educational Resources Information Center
Harwell, Michael R.
Monte Carlo studies of statistical tests are prominently featured in the methodological research literature. Unfortunately, the information from these studies does not appear to have significantly influenced methodological practice in educational and psychological research. One reason is that Monte Carlo studies lack an overarching theory to guide…
New Approaches and Applications for Monte Carlo Perturbation Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan
2017-02-01
This paper presents some of the recent and new advancements in the extension of Monte Carlo perturbation theory methodologies and applications. In particular, the discussed problems involve burnup calculations, perturbation calculations based on continuous energy functions, and Monte Carlo perturbation theory in loosely coupled systems.
NASA Astrophysics Data System (ADS)
Mishra, Bhavya; Schütz, Gunter M.; Chowdhury, Debashish
2016-06-01
We develop a stochastic model for the programmed frameshift of ribosomes synthesizing a protein while moving along a mRNA template. Normally the reading frame of a ribosome decodes successive triplets of nucleotides on the mRNA in a step-by-step manner. We focus on the programmed shift of the ribosomal reading frame, forward or backward, by only one nucleotide, which results in a fusion protein; it occurs when a ribosome temporarily loses its grip on its mRNA track. Special “slippery” sequences of nucleotides and also downstream secondary structures of the mRNA strand are believed to play key roles in programmed frameshift. Here we explore the role of a hitherto neglected parameter in regulating -1 programmed frameshift. Specifically, we demonstrate that the frameshift frequency can be strongly regulated also by the density of the ribosomes, all of which are engaged in simultaneous translation of the same mRNA, at and around the slippery sequence. Monte Carlo simulations support the analytical predictions obtained from a mean-field analysis of the stochastic dynamics.
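The role of ribosome density in such traffic models can be illustrated with a generic exclusion-process sketch (a plain open-boundary TASEP with illustrative rates and lattice size, not the authors' frameshift model): particles hop right with hard-core exclusion, enter at rate alpha, and exit at rate beta.

```python
import random

def tasep_exit_current(L, alpha, beta, sweeps, seed=0):
    """Open-boundary TASEP, random-sequential update: measure the exit current."""
    rng = random.Random(seed)
    lattice = [0] * L          # 0 = empty site, 1 = particle (e.g. a ribosome)
    exits = 0
    burn_in = sweeps // 2
    for sweep in range(sweeps):
        for _ in range(L + 1):
            i = rng.randrange(-1, L)       # pick entry (-1), a bulk bond, or exit
            if i == -1:
                if not lattice[0] and rng.random() < alpha:
                    lattice[0] = 1         # injection at the left boundary
            elif i == L - 1:
                if lattice[L - 1] and rng.random() < beta:
                    lattice[L - 1] = 0     # extraction at the right boundary
                    if sweep >= burn_in:
                        exits += 1
            elif lattice[i] and not lattice[i + 1]:
                lattice[i], lattice[i + 1] = 0, 1   # exclusion: hop only if empty
    return exits / (sweeps - burn_in)

# In the maximal-current phase (alpha, beta > 1/2) the TASEP current tends to 1/4.
J = tasep_exit_current(L=100, alpha=0.8, beta=0.8, sweeps=6000)
print(J)
```

Even this stripped-down model shows how the steady-state current, and hence the dwell-time statistics at any given codon, is controlled by the density of particles on the track.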
SpekCalc: a program to calculate photon spectra from tungsten anode x-ray tubes.
Poludniowski, G; Landry, G; DeBlois, F; Evans, P M; Verhaegen, F
2009-10-07
A software program, SpekCalc, is presented for the calculation of x-ray spectra from tungsten anode x-ray tubes. SpekCalc was designed primarily for use in a medical physics context, for both research and education purposes, but may also be of interest to those working with x-ray tubes in industry. Noteworthy is the particularly wide range of tube potentials (40-300 kVp) and anode angles (recommended: 6-30 degrees) that can be modelled: the program is therefore potentially of use to those working in superficial/orthovoltage radiotherapy, as well as diagnostic radiology. The utility is free to download and is based on a deterministic model of x-ray spectrum generation (Poludniowski 2007 Med. Phys. 34 2175). Filtration can be applied for seven materials (air, water, Be, Al, Cu, Sn and W). In this note SpekCalc is described and illustrative examples are shown. Predictions are compared to those of a state-of-the-art Monte Carlo code (BEAMnrc) and, where possible, to an alternative, widely-used, spectrum calculation program (IPEM78).
The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012
NASA Astrophysics Data System (ADS)
Keen, David A.; Pusztai, László
2013-11-01
This special issue contains a collection of papers reflecting the content of the fifth workshop on reverse Monte Carlo (RMC) methods, held in a hotel on the banks of the Danube in the Budapest suburbs in the autumn of 2012. Over fifty participants gathered to hear talks and discuss a broad range of science based on the RMC technique in very convivial surroundings. Reverse Monte Carlo modelling is a method for producing three-dimensional disordered structural models in quantitative agreement with experimental data. The method was developed in the late 1980s and has since achieved wide acceptance within the scientific community [1], producing an average of over 90 papers and 1200 citations per year over the last five years. It is particularly suitable for the study of the structures of liquid and amorphous materials, as well as the structural analysis of disordered crystalline systems. The principal experimental data that are modelled are obtained from total x-ray or neutron scattering experiments, using the reciprocal space structure factor and/or the real space pair distribution function (PDF). Additional data might be included from extended x-ray absorption fine structure spectroscopy (EXAFS), Bragg peak intensities or indeed any measured data that can be calculated from a three-dimensional atomistic model. It is this use of total scattering (diffuse and Bragg), rather than just the Bragg peak intensities more commonly used for crystalline structure analysis, which enables RMC modelling to probe the often important deviations from the average crystal structure, to probe the structures of poorly crystalline or nanocrystalline materials, and the local structures of non-crystalline materials where only diffuse scattering is observed. This flexibility across various condensed matter structure-types has made the RMC method very attractive in a wide range of disciplines, as borne out in the contents of this special issue. 
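The basic RMC move-and-accept loop can be sketched with a toy example (an illustrative 1D "structure" fitted to fabricated target data; real RMC fits measured structure factors or pair distribution functions): single-particle moves are accepted when they reduce χ² against the data, or with probability exp(-Δχ²/2) otherwise.

```python
import math
import random

def rmc_fit(target, n_points, n_moves, sigma=0.001, seed=0):
    """Toy reverse Monte Carlo: fit a 1D point configuration to a target histogram.

    Returns (initial chi^2, final chi^2). sigma plays the role of the
    experimental uncertainty that weights the chi^2 agreement with the data.
    """
    rng = random.Random(seed)
    n_bins = len(target)
    pts = [rng.random() for _ in range(n_points)]

    def chi2(points):
        hist = [0] * n_bins
        for p in points:
            hist[min(int(p * n_bins), n_bins - 1)] += 1
        return sum((h / n_points - t) ** 2 for h, t in zip(hist, target)) / sigma ** 2

    c = chi2(pts)
    start = c
    for _ in range(n_moves):
        i = rng.randrange(n_points)
        old = pts[i]
        pts[i] = rng.random()              # trial move of one "atom"
        c_new = chi2(pts)
        if c_new <= c or rng.random() < math.exp(-(c_new - c) / 2.0):
            c = c_new                      # accept: better (or lucky) fit to data
        else:
            pts[i] = old                   # reject: restore the old position
    return start, c

# Fabricated "data": all density in the lower half of [0, 1]
target = [0.25, 0.25, 0.25, 0.25, 0.0, 0.0, 0.0, 0.0]
chi2_start, chi2_end = rmc_fit(target, n_points=200, n_moves=8000)
print(chi2_start, chi2_end)
```

The essential feature, as in full RMC, is that the "energy" driving the Monte Carlo acceptance is the disagreement with experimental data rather than an interatomic potential.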
It is however important to point out that since the method is akin to a structural refinement method (albeit with some inbuilt Monte Carlo 'randomness'), high-quality data are needed to yield the best structural models. In this regard it is particularly pleasing to see the continued (planned and actual) growth in diffractometers at neutron and synchrotron x-ray facilities that have been designed with total scattering in mind. Since the previous RMC workshop in 2009 [2] (and indeed the earlier workshops in 2006 [3] and 2003 [4]) there have been several developments in the technique and in the range of its application. It is good to see that the program RMCProfile [5] is now being used as a refinement tool in a wide range of crystalline materials spanning molecular crystals, proton conductors and spinels. This is one of the growth areas in recent years; crystalline supercell models are constructed by replicating the average unit cell contents and the atoms are then relaxed using the RMC method to fit the data, while maintaining appropriate atom connectivity. This refined supercell is then investigated to determine how the average structure has changed to accommodate defects, local distortions, molecular rotations etc. There have also been technical developments to enhance the total scattering and RMC method as seen in the papers on new ways to process x-ray total scattering data, on the analysis of molecular liquid structures using a new version of RMC_POT [6], the program SPINVERT for determining disordered magnetic structures from magnetic diffuse scattering and papers concerned with other algorithmic improvements. These are all in addition to some excellent papers on the structures of amorphous materials, liquids and solutions using more established RMC methods. Many of the papers have been written by RMC workshop participants. 
We are pleased that the workshop enabled students and other young researchers to gain a deeper understanding of the RMC method at the start of their scientific careers. It is our hope that the collection of research papers within this special issue will communicate the vibrancy of this field to the wider scientific community by showing the current 'state of the art' research opportunities using the RMC method. Furthermore, by including a small number of papers from colleagues working on similar disordered problems with complementary analysis techniques, we hope that the RMC method may be placed in a broader scientific context. Acknowledgments We are very grateful to IOP Publishing for their willingness to publish this collection of papers which celebrates the 24th anniversary of the first RMC publication in a special issue of Journal of Physics: Condensed Matter and for their co-ordination during the refereeing process. References [1] McGreevy R L 2001 J. Phys.: Condens. Matter 13 R877 [2] Keen D A and Pusztai L (ed) 2010 J. Phys.: Condens. Matter 22 (Special issue on the first 21 years of reverse Monte Carlo modelling) [3] Keen D A and Pusztai L (ed) 2007 J. Phys.: Condens. Matter 19 (Special issue on the first eighteen years of reverse Monte Carlo modelling) [4] Keen D A, Pusztai L and Dove M T (ed) 2005 J. Phys.: Condens. Matter 17 (Special issue on the first fifteen years of reverse Monte Carlo modelling) [5] Tucker M G, Keen D A, Dove M T, Goodwin A L and Hui Q 2007 J. Phys.: Condens. Matter 19 335218 [6] Gereben O and Pusztai L 2012 J. Comput. Chem. 
33 2285.
Special issue contents (Reverse Monte Carlo modelling):
The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012 (David A Keen and László Pusztai)
Conformational analysis of bis(methylthio)methane and diethyl sulfide molecules in the liquid phase: reverse Monte Carlo studies using classical interatomic potential functions (Orsolya Gereben and László Pusztai)
Towards a robust ad hoc data correction approach that yields reliable atomic pair distribution functions from powder diffraction data (Simon J L Billinge and Christopher L Farrow)
The atomic scale structure of CXV carbon: wide-angle x-ray scattering and modeling studies (L Hawelek, A Brodka, J C Dore, V Honkimaki and A Burian)
Local structure correlations in plastic cyclohexane: a reverse Monte Carlo study (Nicholas P Funnell, Martin T Dove, Andrew L Goodwin, Simon Parsons and Matthew G Tucker)
Neutron powder diffraction and molecular dynamics study of superionic SrBr2 (S Hull, S T Norberg, S G Eriksson and C E Mohn)
Atomic order and cluster energetics of a 17 wt% Si-based glass versus the liquid phase (G S E Antipas, L Temleitner, K Karalis, L Pusztai and A Xenidis)
Total scattering analysis of cation coordination and vacancy pair distribution in Yb-substituted δ-Bi2O3 (G S E Antipas, L Temleitner, K Karalis, L Pusztai and A Xenidis)
Modification of the sampling algorithm for reverse Monte Carlo modeling with an insufficient data set (Satoshi Sato and Kenji Maruyama)
The origin of diffuse scattering in crystalline carbon tetraiodide (L Temleitner and L Pusztai)
Silver environment and covalent network rearrangement in GeS3-Ag glasses (L Rátkai, I Kaban, T Wágner, J Kolár, S Valková, Iva Voleská, B Beuneu and P Jóvári)
Reverse Monte Carlo study of a spherical sample under non-periodic boundary conditions: the structure of Ru nanoparticles based on x-ray diffraction data (Orsolya Gereben and Valeri Petkov)
Total neutron scattering investigation of the structure of a cobalt gallium oxide spinel prepared by solvothermal oxidation of gallium metal (Helen Y Playford, Alex C Hannon, Matthew G Tucker, Martin R Lees and Richard I Walton)
The structure of water in solutions containing di- and trivalent cations by empirical potential structure refinement (Daniel T Bowron and Sofia Díaz Moreno)
The proton conducting electrolyte BaTi0.5In0.5O2.75: determination of the deuteron site and its local environment (Stefan T Norberg, Seikh M H Rahman, Stephen Hull, Christopher S Knee and Sten G Eriksson)
Acidic properties of aqueous phosphoric acid solutions: a microscopic view (I Harsányi, L Pusztai, P Jóvári and B Beuneu)
Comparison of the atomic level structure of the plastic crystalline and liquid phases of CBr2Cl2: neutron diffraction and reverse Monte Carlo modelling (Szilvia Pothoczki, László Temleitner, Luis Carlos Pardo, Gabriel Julio Cuello, Muriel Rovira-Esteva and Josep Lluis Tamarit)
Insights into the determination of molecular structure from diffraction data using a Bayesian algorithm (A Henao, M Rovira-Esteva, A Vispa, J Ll Tamarit, E Guardia and L C Pardo)
Nanostructure determination from the pair distribution function: a parametric study of the INVERT approach (Matthew J Cliffe and Andrew L Goodwin)
Empirical potential structure refinement of semi-crystalline polymer systems: polytetrafluoroethylene and polychlorotrifluoroethylene (A K Soper, K Page and A Llobet)
spinvert: a program for refinement of paramagnetic diffuse scattering data (Joseph A M Paddison, J Ross Stewart and Andrew L Goodwin)
Inter-molecular correlations in liquid Se2Br2 (Hironori Shimakura, Yukinobu Kawakita, Koji Ohara, László Pusztai, Yuiko Wakisaka and Shin'ichi Takeda)
RMCgui: a new interface for the workflow associated with running Reverse Monte Carlo simulations (Martin T Dove and Gary Rigg)
A Monte Carlo Simulation of Brownian Motion in the Freshman Laboratory
ERIC Educational Resources Information Center
Anger, C. D.; Prescott, J. R.
1970-01-01
Describes a "dry-lab" experiment for the college freshman laboratory, in which the essential features of Brownian motion are simulated from given principles using the Monte Carlo technique. Calculations are carried out by a computation scheme based on a computer language. Bibliography. (LC)
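The essential idea of such a dry-lab exercise can be sketched as a Monte Carlo random walk; the following Python sketch (our illustration, not the original computation scheme) estimates the mean-square displacement of a Brownian particle, which should grow linearly with the number of steps:

```python
import math
import random

def brownian_walk(n_steps, step_size=1.0, seed=0):
    """Simulate a 2D random walk as a Monte Carlo model of Brownian motion.

    Each step has fixed length and a uniformly random direction, mimicking
    the random molecular kicks a Brownian particle receives.
    """
    rng = random.Random(seed)
    x = y = 0.0
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += step_size * math.cos(theta)
        y += step_size * math.sin(theta)
    return x, y

def mean_square_displacement(n_steps, n_walks=2000, seed=0):
    """Average squared end-to-end distance over many independent walks."""
    total = 0.0
    for i in range(n_walks):
        x, y = brownian_walk(n_steps, seed=seed + i)
        total += x * x + y * y
    return total / n_walks
```

For a walk of N unit steps the mean-square displacement is close to N, the diffusive scaling that the laboratory experiment is meant to reveal.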
Gnuastro: GNU Astronomy Utilities
NASA Astrophysics Data System (ADS)
Akhlaghi, Mohammad
2018-01-01
Gnuastro (GNU Astronomy Utilities) manipulates and analyzes astronomical data. It is an official GNU package consisting of a large collection of programs and C/C++ library functions. Command-line programs perform arithmetic operations on images, convert FITS images to common types like JPG or PDF, convolve an image with a given kernel or match kernels, perform cosmological calculations, crop parts of large images (possibly in multiple files), manipulate FITS extensions and keywords, and perform statistical operations. In addition, it contains programs to make catalogs from detection maps, add noise, make mock profiles with a variety of radial functions (using Monte Carlo integration for their centers), match catalogs, and detect objects in an image, among many other operations. The command-line programs share the same basic command-line user interface for the comfort of both users and developers. Gnuastro is written to comply fully with the GNU coding standards and integrates well with all Unix-like operating systems. This enables astronomers to expect a fully familiar experience in the source code, building, installation and command-line user interaction that they have seen in all the other GNU software that they use. Gnuastro's extensive library is included for users who want to build their own unique programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piao, J; PLA 302 Hospital, Beijing; Xu, S
2016-06-15
Purpose: This study used Monte Carlo methods to simulate the CyberKnife system, with the aim of developing a third-party tool to verify the dose calculations of specific patient plans in the TPS. Methods: The treatment head was simulated using the BEAMnrc and DOSXYZnrc software, and calculated and measured data were compared to determine the beam parameters. The dose distributions calculated by the Ray-tracing and Monte Carlo algorithms of the TPS (Multiplan Ver. 4.0.2) and by the in-house Monte Carlo simulation method were analyzed for 30 patient plans (10 head, 10 lung and 10 liver cases). A γ analysis with the combined 3 mm/3% criteria was introduced to quantitatively evaluate the differences in accuracy between the three algorithms. Results: More than 90% of the global error points were less than 2% in the comparison of the PDD and OAR curves after determining the mean energy and FWHM, so a reasonably ideal Monte Carlo beam model was established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the passing rates of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88 ± 9.67% for head, 98.83 ± 1.05% for liver, 98.26 ± 1.87% for lung). The passing rates of the PTV in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm (95.93 ± 3.12% and 99.84 ± 0.33%, respectively) were also good. However, the difference in the DVHs of lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the γ passing rate (51.263 ± 38.964%) was poor. It is therefore feasible to use Monte Carlo simulation to verify the dose distributions of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can be used as a third-party reference tool, which plays an important role in dose verification of patient plans.
This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). We thank Accuray Corp. for their support.
Monte Carlo simulations to replace film dosimetry in IMRT verification.
Goetzfried, Thomas; Rickhey, Mark; Treutwein, Marius; Koelbl, Oliver; Bogner, Ludwig
2011-01-01
Patient-specific verification of intensity-modulated radiation therapy (IMRT) plans can be done by dosimetric measurements or by independent dose or monitor unit calculations. The aim of this study was the clinical evaluation of IMRT verification based on a fast Monte Carlo (MC) program, with regard to possible benefits compared to commonly used film dosimetry. 25 head-and-neck IMRT plans were recalculated by a pencil-beam based treatment planning system (TPS) using an appropriate quality assurance (QA) phantom. All plans were verified both by film and diode dosimetry and compared to MC simulations. The irradiated films, the results of diode measurements and the computed dose distributions were evaluated, and the data were compared on the basis of gamma maps and dose-difference histograms. Average deviations in the high-dose region between diode measurements and point dose calculations performed with the TPS and the MC program were 0.7 ± 2.7% and 1.2 ± 3.1%, respectively. For film measurements, the mean gamma values with 3% dose difference and 3 mm distance-to-agreement were 0.74 ± 0.28 (TPS as reference), with dose deviations up to 10%. The corresponding value was significantly reduced to 0.34 ± 0.09 for the MC dose calculation. The total time needed for the two verification procedures is comparable; however, MC simulation is by far less labor-intensive. The presented study showed that independent dose-calculation verification of IMRT plans with a fast MC program has the potential to eclipse film dosimetry in the near future. Thus, the linac-specific part of QA will necessarily become more important. In combination with MC simulations, and because of the simple set-up, point-dose measurements for dosimetric plausibility checks are recommended, at least in the IMRT introduction phase. Copyright © 2010. Published by Elsevier GmbH.
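The gamma evaluation used in such comparisons combines a dose-difference criterion with a distance-to-agreement criterion. A minimal one-dimensional sketch of the idea, with hypothetical dose-profile inputs (not the clinical software used above), might look like:

```python
import math

def gamma_index(ref, evl, spacing, dose_tol, dist_tol):
    """1D gamma-index sketch: for each reference point, take the minimum of the
    combined dose-difference / distance-to-agreement metric over all evaluated
    points.  dose_tol is an absolute dose tolerance (same units as the doses);
    dist_tol and spacing share the same length unit (e.g. mm)."""
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_evl in enumerate(evl):
            dist = (j - i) * spacing          # spatial offset between points
            dd = d_evl - d_ref                # dose difference
            g2 = (dist / dist_tol) ** 2 + (dd / dose_tol) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1, the usual acceptance criterion."""
    return sum(1 for g in gammas if g <= 1.0) / len(gammas)
```

Identical profiles yield gamma = 0 everywhere and a 100% passing rate; a pure dose error of 4% against a 3% tolerance yields gamma = 4/3 and fails.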
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Z; Vijayan, S; Rana, V
2015-06-15
Purpose: A system was developed that automatically calculates the organ and effective dose for individual fluoroscopically-guided procedures using a log of the clinical exposure parameters. Methods: We have previously developed a dose tracking system (DTS) to provide a real-time color-coded 3D mapping of skin dose. This software produces a log file of all geometry and exposure parameters for every x-ray pulse during a procedure. The data in the log files are input into PCXMC, a Monte Carlo program that calculates organ and effective dose for projections and exposure parameters set by the user. We developed a MATLAB program to read data from the log files produced by the DTS and to automatically generate the definition files in the format used by PCXMC. The processing is done at the end of a procedure after all exposures are completed. Since there are thousands of exposure pulses with various parameters for fluoroscopy, DA and DSA and at various projections, the data for exposures with similar parameters are grouped prior to entry into PCXMC to reduce the number of Monte Carlo calculations that need to be performed. Results: The software automatically transfers data from the DTS log file to PCXMC and runs the program for each grouping of exposure pulses. When the doses from all exposure events have been calculated, the doses for each organ and the effective doses are summed to obtain procedure totals. For a complicated interventional procedure, the calculations can be completed on a PC without manual intervention in less than 30 minutes, depending on the level of data grouping. Conclusion: This system allows organ dose to be calculated for individual procedures for every patient without tedious calculations or data entry, so that estimates of stochastic risk can be obtained in addition to the deterministic risk estimate provided by the DTS. Partial support from NIH grant R01EB002873 and Toshiba Medical Systems Corp.
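The grouping step described above, merging pulses with similar parameters before running the Monte Carlo calculations, could be sketched as follows. A hypothetical log format with `kvp`, `angle` and `mas` fields is assumed; the actual DTS log contains many more parameters:

```python
from collections import defaultdict

def group_exposures(log_entries, angle_bin=5.0):
    """Group x-ray exposure pulses with similar parameters so that only one
    Monte Carlo run per group is needed.

    Pulses are binned by tube voltage and by gantry angle rounded to
    angle_bin-degree bins; the mAs of each group is summed, since organ dose
    scales linearly with mAs for a fixed beam geometry and quality.
    """
    groups = defaultdict(float)
    for entry in log_entries:
        key = (entry["kvp"], round(entry["angle"] / angle_bin) * angle_bin)
        groups[key] += entry["mas"]
    return dict(groups)
```

Thousands of pulses then collapse into a handful of Monte Carlo runs, one per (kVp, angle-bin) group, with total mAs applied as a scaling factor.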
DOE Office of Scientific and Technical Information (OSTI.GOV)
EMAM, M; Eldib, A; Lin, M
2014-06-01
Purpose: An in-house Monte Carlo based treatment planning system (MC TPS) has been developed for modulated electron radiation therapy (MERT). Our preliminary MERT planning experience called for a more user-friendly graphical user interface. The current work aimed to design graphical windows and tools to facilitate the contouring and planning process. Methods: Our in-house GUI MC TPS is built on a set of EGS4 user codes, namely MCPLAN and MCBEAM, in addition to an in-house optimization code named MCOPTIM. The patient virtual phantom is constructed using the tomographic images in DICOM format exported from clinical treatment planning systems (TPS). Treatment target volumes and critical structures were usually contoured on the clinical TPS and then sent as a structure set file. In our GUI program we developed a visualization tool to allow the planner to visualize the DICOM images and delineate the various structures. We implemented an option in our code for automatic contouring of the patient body and lungs. We also created an interface window displaying a three-dimensional representation of the target and a graphical representation of the treatment beams. Results: The new GUI features helped streamline the planning process. The implemented contouring option eliminated the need for performing this step on the clinical TPS. The auto-detection option for contouring the outer patient body and lungs was tested on patient CTs and was shown to be accurate compared to that of the clinical TPS. The three-dimensional representation of the target and the beams allows better selection of the gantry, collimator and couch angles. Conclusion: An in-house GUI program has been developed for more efficient MERT planning. The aiding tools implemented in the program are time-saving and give better control of the planning process.
HELAC-Onia 2.0: An upgraded matrix-element and event generator for heavy quarkonium physics
NASA Astrophysics Data System (ADS)
Shao, Hua-Sheng
2016-01-01
We present an upgraded version (denoted as version 2.0) of the program HELAC-ONIA for the automated computation of heavy-quarkonium helicity amplitudes within the non-relativistic QCD framework. The new code has been designed to include many new and useful features for practical phenomenological simulations. It supports job submission in cluster environments for parallel computation via PYTHON scripts. We have interfaced HELAC-ONIA to the parton-shower Monte Carlo programs PYTHIA 8 and QEDPS to take into account parton-shower effects. Moreover, the decay module guarantees that the program can perform the spin-entangled (cascade-)decay of heavy quarkonium after its generation. We have also implemented a reweighting method to automatically estimate the uncertainties from renormalization and/or factorization scales as well as parton-distribution functions for weighted or unweighted events. A further update is the possibility to generate one-dimensional or two-dimensional plots, encoded in the analysis files, on the fly. Some dedicated examples are given at the end of the write-up.
NASA Technical Reports Server (NTRS)
Knauber, R. N.
1982-01-01
A FORTRAN-coded computer program is presented which computes the capture transient of a launch vehicle upper stage at the ignition and/or separation event, for a single degree-of-freedom on-off reaction-jet attitude control system. The Monte Carlo method is used to determine the statistics of key parameters at the outcome of the event. Aerodynamic and booster-induced disturbances, vehicle and control system characteristics, and initial conditions are treated as random variables. By appropriate selection of input data, the pitch, yaw and roll axes can be analyzed. The transient response of a single deterministic case can also be computed. The program is currently set up on a CDC CYBER 175 computer system but is compatible with the ANSI FORTRAN computer language. This routine has been used over the past fifteen (15) years for the SCOUT Launch Vehicle and has been run on RECOMP III, IBM 7090, IBM 360/370, CDC 6600 and CDC CYBER 175 computers with little modification.
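A toy version of such a Monte Carlo capture-transient study, with a simplified bang-bang controller and Gaussian initial conditions standing in for the program's random variables (all gains and distributions here are illustrative, not the SCOUT values), might look like:

```python
import random

def capture_transient(theta0, rate0, accel=1.0, deadband=0.01, dt=0.001, t_max=20.0):
    """Single degree-of-freedom on-off reaction-jet controller: thrusters fire
    to oppose the attitude error outside a deadband; returns the peak attitude
    excursion during the capture transient (toy model)."""
    theta, rate = theta0, rate0
    peak = abs(theta)
    t = 0.0
    while t < t_max:
        err = theta + 0.5 * rate          # switching function with rate feedback
        u = -accel if err > deadband else (accel if err < -deadband else 0.0)
        rate += u * dt                    # jet acceleration integrates into rate
        theta += rate * dt                # rate integrates into attitude
        peak = max(peak, abs(theta))
        t += dt
    return peak

def monte_carlo_peak(n_trials=200, seed=0):
    """Treat the initial attitude and rate as random variables and collect the
    statistical spread of the peak excursion over many trials."""
    rng = random.Random(seed)
    peaks = [capture_transient(rng.gauss(0.0, 0.05), rng.gauss(0.0, 0.1))
             for _ in range(n_trials)]
    peaks.sort()
    return peaks[len(peaks) // 2], peaks[-1]  # median and worst case
```

Dispersing the other quantities the abstract lists (disturbances, control-system characteristics) would simply add more random draws per trial.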
2005-12-15
KENNEDY SPACE CENTER, FLA. - At their consoles in the Atlas V Spaceflight Operations Center on Cape Canaveral Air Force Station, members of the New Horizons team take part in a dress rehearsal for the launch scheduled in mid-January. From left are Lockheed Martin's Program Manager John Crocker; Michael Kubiak with the U.S. Air Force, participating with Lockheed Martin on the Education with Industry program; and Lockheed Martin's Carlos Prado. New Horizons carries seven scientific instruments that will characterize the global geology and geomorphology of Pluto and its moon Charon, map their surface compositions and temperatures, and examine Pluto's complex atmosphere. After that, flybys of Kuiper Belt objects from even farther in the solar system may be undertaken in an extended mission. New Horizons is the first mission in NASA's New Frontiers program of medium-class planetary missions. The spacecraft, designed for NASA by the Johns Hopkins University Applied Physics Laboratory in Laurel, Md., will launch aboard a Lockheed Martin Atlas V rocket and fly by Pluto and Charon as early as summer 2015.
How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.
Lecca, Paola
2018-01-01
We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, using Monte Carlo simulation approaches, we show that these models fit almost the entire release profile quite accurately when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing complex mathematical models of the physical processes involved in drug release from scratch. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them mathematically, may allow avoiding time-consuming, trial-and-error based regression procedures. Three bullet points highlight the customization of the procedure:
•An efficient heuristic based on Monte Carlo methods for simulating drug release from solid dosage forms is presented. It encodes the model of the physical process in a simple but accurate way in the formula of the Monte Carlo Micro Step (MCS) time interval.
•Given the experimentally observed curve of drug release, we point out how Monte Carlo heuristics can be integrated in an evolutionary algorithmic approach to infer the mode of MCS best fitting the observed data, and thus the observed release kinetics.
•The software implementing the method is written in R, the free language most widely used in the bioinformatics community.
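As an illustration of the kind of Monte Carlo release heuristic discussed above (a generic random-walk sketch in Python, not the paper's R implementation), drug release from a slab matrix can be simulated by letting molecules diffuse toward an absorbing surface:

```python
import random

def mc_drug_release(n_molecules=1000, n_layers=20, p_move=0.5, n_steps=1000, seed=0):
    """Monte Carlo micro-step heuristic for drug release from a slab matrix:
    each molecule performs a random walk across n_layers and counts as
    released once it crosses the outer boundary; the inner wall reflects.
    Returns the cumulative release fraction after each step."""
    rng = random.Random(seed)
    positions = [rng.randrange(n_layers) for _ in range(n_molecules)]
    released = 0
    curve = []
    for _ in range(n_steps):
        for i, pos in enumerate(positions):
            if pos < 0:
                continue  # already released
            if rng.random() < p_move:
                pos += 1 if rng.random() < 0.5 else -1
                if pos >= n_layers:
                    positions[i] = -1  # escaped through the release surface
                    released += 1
                else:
                    positions[i] = max(pos, 0)  # inner wall is reflecting
        curve.append(released / n_molecules)
    return curve
```

The early part of the resulting curve follows the square-root-of-time behaviour that power-law models capture, while the late-time plateau is where such models typically break down.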
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, S. A. M.; Ansbacher, W.; Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6
2013-01-15
Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc-based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high-density (4.0-8.0 g/cm³) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radiochromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2% and 1 mm gamma-analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% of the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants, and on-axis and oblique field delivery. A similar gamma-analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo, while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements support the dose perturbations demonstrated by the Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods, and better than existing clinical algorithms, for dose calculations involving high-density volumes.
[Research within the reach of Osakidetza professionals: Primary Health Care Research Program].
Grandes, Gonzalo; Arce, Verónica; Arietaleanizbeaskoa, María Soledad
2014-04-01
To provide information about the process and results of the Primary Health Care Research Program 2010-2011 organised by the Primary Care Research Unit of Bizkaia. Descriptive study. Osakidetza primary care. The 107 health professionals who applied for the program, from a total of 4,338 general practitioners, nurses and administrative staff who were informed about it. Application level, research topics classification, program evaluation by participants, project funding and program costs. Percentage who applied: 2.47%; 95% CI 2.41-2.88%. Of these, 28 were selected and 19 completed the program. The research topics were mostly related to the more common chronic diseases (32%), and to prevention and health promotion (18%). Over 90% of participants assessed the quality of the program as good or excellent, and half of them considered it difficult or very difficult. Of the 18 new projects generated, 12 received funding, with 16 grants: 10 from the Health Department of the Basque Government, 4 from the Carlos III Institute of Health of the Ministry of Health of Spain, and 2 from Kronikgune. A total of €500,000 was obtained for these projects. The program cost €198,327. This experience can be used by others interested in the promotion of research in primary care, as the program achieved its objectives and was useful and productive. Copyright © 2013 Elsevier España, S.L. All rights reserved.
Program package for multicanonical simulations of U(1) lattice gauge theory-Second version
NASA Astrophysics Data System (ADS)
Bazavov, Alexei; Berg, Bernd A.
2013-03-01
A new version STMCMUCA_V1_1 of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers. New version program summaryProgram title: STMC_U1MUCA_v1_1 Catalogue identifier: AEET_v1_1 Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language: Fortran 77 compatible with Fortran 90 and 95 Computers: Any capable of compiling and executing Fortran code Operating systems: Any capable of compiling and executing Fortran code RAM: 10 MB and up depending on lattice size used No. of lines in distributed program, including test data, etc.: 15059 No. of bytes in distributed program, including test data, etc.: 215733 Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems Classification: 11.5 Catalogue identifier of previous version: AEET_v1_0 Journal Reference of previous version: Computer Physics Communications 180 (2009) 2339-2347 Does the new version supersede the previous version?: Yes Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors. Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars. Reasons for the new version: The previous version was developed for the g77 compiler Fortran 77 version. Compiler errors were encountered with Fortran 90 and Fortran 95 compilers (specified below). 
Summary of revisions: epsilon=one/10**10 is replaced by epsilon/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl_backup.f, u1wlread_backup.f of the folder Libs/U1_par. For the tested compilers, script files are added in the folder ExampleRuns, and readme.txt files are now provided in all subfolders of ExampleRuns. The gnuplot driver files produced by the routine hist_gnu.f of Libs/Fortran are adapted to the syntax required by gnuplot version 4.0 and higher. Restrictions: Due to the use of explicit real*8 initialization, conversion to real*4 will require extra changes besides replacing the implicit.sta file by its real*4 version. Unusual features: The programs have to be compiled using script files like those contained in the folder ExampleRuns, as explained in the original paper. Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
High-performance parallel computing in the classroom using the public goods game as an example
NASA Astrophysics Data System (ADS)
Perc, Matjaž
2017-07-01
The use of computers in statistical physics is common because the sheer number of equations that describe the behaviour of an entire system particle by particle often makes it impossible to solve them exactly. Monte Carlo methods form a particularly important class of numerical methods for solving problems in statistical physics. Although these methods are simple in principle, their proper use requires a good command of statistical mechanics, as well as considerable computational resources. The aim of this paper is to demonstrate how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach. As an example, we use the public goods game on a square lattice where two strategies compete for common resources in a social dilemma situation. We show that the second-order phase transition to an absorbing phase in the system belongs to the directed percolation universality class, and we compare the time needed to arrive at this result by means of the main processor and by means of a suitable graphics card. Parallel computing on graphics processing units has been developed actively during the last decade, to the point where today the learning curve for entry is anything but steep for those familiar with programming. The subject is thus ripe for inclusion in graduate and advanced undergraduate curricula, and we hope that this paper will facilitate this process in the realm of physics education. To that end, we provide a documented source code for an easy reproduction of presented results and for further development of Monte Carlo simulations of similar systems.
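A serial (CPU-only) Monte Carlo sketch of the spatial public goods game described above, with random sequential strategy imitation via the Fermi rule at noise K, could look like the following; the graphics-card version parallelizes exactly this kind of update (parameter values here are illustrative, not the paper's):

```python
import math
import random

def run_pgg(L=20, r=4.0, K=0.5, steps=2000, seed=0):
    """Spatial public goods game on an L x L square lattice with periodic
    boundaries.  Cooperators (1) contribute 1 to each of the five groups they
    belong to; the pot is multiplied by r and shared among the five group
    members.  Strategies spread by pairwise Fermi imitation at noise K.
    Returns the final fraction of cooperators."""
    rng = random.Random(seed)
    s = [[rng.randrange(2) for _ in range(L)] for _ in range(L)]

    def neighbors(x, y):
        return [((x + 1) % L, y), ((x - 1) % L, y),
                (x, (y + 1) % L), (x, (y - 1) % L)]

    def group_payoff(gx, gy, player):
        """Payoff the given player earns from the group centred at (gx, gy)."""
        members = [(gx, gy)] + neighbors(gx, gy)
        n_c = sum(s[i][j] for i, j in members)
        return r * n_c / len(members) - s[player[0]][player[1]]

    def payoff(x, y):
        # a site belongs to the group it centres and its 4 neighbours' groups
        return sum(group_payoff(gx, gy, (x, y))
                   for gx, gy in [(x, y)] + neighbors(x, y))

    for _ in range(steps):
        x, y = rng.randrange(L), rng.randrange(L)
        nx, ny = rng.choice(neighbors(x, y))
        if s[x][y] == s[nx][ny]:
            continue
        # Fermi rule: imitate the neighbour with probability rising in its payoff
        p = 1.0 / (1.0 + math.exp((payoff(x, y) - payoff(nx, ny)) / K))
        if rng.random() < p:
            s[x][y] = s[nx][ny]
    return sum(s[x][y] for x in range(L) for y in range(L)) / (L * L)
```

Sweeping r downward drives the cooperator density toward the absorbing all-defector phase, the transition whose directed-percolation scaling the paper studies.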
Constraints on Biogenic Emplacement of Crystalline Calcium Carbonate and Dolomite
NASA Astrophysics Data System (ADS)
Colas, B.; Clark, S. M.; Jacob, D. E.
2015-12-01
Amorphous calcium carbonate (ACC) is a biogenic precursor of the calcium carbonates forming the shells and skeletons of marine organisms, which are key components of the whole marine environment. Understanding carbonate formation is an essential prerequisite to quantifying the effect climate change and pollution have on marine populations. Water is a critical component of the structure of ACC and the key component controlling the stability of the amorphous state. Addition of small amounts of magnesium (1-5% of the calcium content) is known to promote the stability of ACC, presumably through stabilization of the hydrogen bonding network. Understanding the hydrogen bonding network in ACC is therefore fundamental to understanding its stability. Our approach is to use Monte Carlo simulations constrained by X-ray and neutron scattering data to determine hydrogen bonding networks in ACC as a function of magnesium doping. We have already successfully developed a synthesis protocol to make ACC, collected X-ray data suitable for determining Ca, Mg and O correlations, and collected neutron data giving information on the hydrogen/deuterium positions (as the interaction of X-rays with hydrogen is too weak for us to constrain hydrogen atom positions with X-rays alone). The X-ray and neutron data are used to constrain reverse Monte Carlo modelling of the ACC structure using the Empirical Potential Structure Refinement program, in order to yield a complete structural model for ACC including water molecule positions. We will present details of our sample synthesis and characterization methods, X-ray and neutron scattering data, and reverse Monte Carlo simulation results, together with a discussion of the role of hydrogen bonding in ACC stability.
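The core of a reverse Monte Carlo refinement is a Metropolis acceptance step on the misfit between model and data. A generic sketch (not the Empirical Potential Structure Refinement program itself), with a toy one-dimensional demonstration in place of real scattering data, is:

```python
import math
import random

def rmc_step(config, chi2, propose, compute_chi2, sigma=1.0, rng=random):
    """One reverse Monte Carlo move: perturb the configuration, recompute the
    chi-squared misfit to the measured data, and accept with the Metropolis
    criterion exp(-delta_chi2 / (2 sigma^2)); otherwise keep the old state."""
    trial = propose(config)
    new_chi2 = compute_chi2(trial)
    delta = (new_chi2 - chi2) / (sigma * sigma)
    if delta <= 0.0 or rng.random() < math.exp(-0.5 * delta):
        return trial, new_chi2, True
    return config, chi2, False

def demo():
    """Toy demonstration: shift points on a line until their mean matches a
    'measured' value, standing in for fitting a structure to g(r) data."""
    rng = random.Random(0)
    target = 3.0
    config = [rng.uniform(0.0, 10.0) for _ in range(50)]

    def compute_chi2(c):
        mean = sum(c) / len(c)
        return 1000.0 * (mean - target) ** 2

    def propose(c):
        trial = list(c)
        i = rng.randrange(len(trial))
        trial[i] = min(10.0, max(0.0, trial[i] + rng.gauss(0.0, 0.5)))
        return trial

    chi2_init = compute_chi2(config)
    chi2 = chi2_init
    for _ in range(2000):
        config, chi2, _ = rmc_step(config, chi2, propose, compute_chi2, rng=rng)
    return chi2_init, chi2
```

In a real refinement the configuration is a box of atoms, the misfit is summed over X-ray and neutron structure factors, and moves respect physical constraints such as minimum interatomic distances.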
MADANALYSIS 5, a user-friendly framework for collider phenomenology
NASA Astrophysics Data System (ADS)
Conte, Eric; Fuks, Benjamin; Serret, Guillaume
2013-01-01
We present MADANALYSIS 5, a new framework for phenomenological investigations at particle colliders. Based on a C++ kernel, this program allows us to efficiently perform, in a straightforward and user-friendly fashion, sophisticated physics analyses of event files such as those generated by a large class of Monte Carlo event generators. MADANALYSIS 5 comes with two modes of running. The first one, easier to handle, uses the strengths of a powerful PYTHON interface in order to implement physics analyses by means of a set of intuitive commands. The second one requires one to implement the analyses in the C++ programming language, directly within the core of the analysis framework. This opens unlimited possibilities concerning the level of complexity which can be reached, being only limited by the programming skills and the originality of the user. Program summaryProgram title: MadAnalysis 5 Catalogue identifier: AENO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Permission to use, copy, modify and distribute this program is granted under the terms of the GNU General Public License. No. of lines in distributed program, including test data, etc.: 31087 No. of bytes in distributed program, including test data, etc.: 399105 Distribution format: tar.gz Programming language: PYTHON, C++. Computer: All platforms on which Python version 2.7, Root version 5.27 and the g++ compiler are available. Compatibility with newer versions of these programs is also ensured. However, the Python version must be below version 3.0. Operating system: Unix, Linux and Mac OS operating systems on which the above-mentioned versions of Python and Root, as well as g++, are available. Classification: 11.1. 
External routines: ROOT (http://root.cern.ch/drupal/) Nature of problem: Implementing sophisticated phenomenological analyses in high-energy physics through a flexible, efficient and straightforward fashion, starting from event files such as those produced by Monte Carlo event generators. The event files can have been matched or not to parton-showering and can have been processed or not by a (fast) simulation of a detector. According to the sophistication level of the event files (parton-level, hadron-level, reconstructed-level), one must note that several input formats are possible. Solution method: We implement an interface allowing the production of predefined as well as user-defined histograms for a large class of kinematical distributions after applying a set of event selection cuts specified by the user. This therefore allows us to devise robust and novel search strategies for collider experiments, such as those currently running at the Large Hadron Collider at CERN, in a very efficient way. Restrictions: Unsupported event file format. Unusual features: The code is fully based on object representations for events, particles, reconstructed objects and cuts, which facilitates the implementation of an analysis. Running time: It depends on the purposes of the user and on the number of events to process. It varies from a few seconds to the order of the minute for several millions of events.
Monte Carlo Simulation of Visible Light Diffuse Reflection in Neonatal Skin
NASA Astrophysics Data System (ADS)
Atencio, J. A. Delgado; Rodríguez, E. E.; Rodríguez, A. Cornejo; Rivas-Silva, J. F.
2008-04-01
Neonatal jaundice is a medical condition that occurs commonly in newborns as a result of an imbalance between the production and the elimination of bilirubin. Around 50% of term newborns and somewhat more than 60% of near-term newborns become jaundiced in the first week of life. This excess of bilirubin in the blood shows up in the skin, the sclera of the eyes and the mucosa of the mouth as a characteristic yellow coloration. In this work we perform several numerical simulations of the spectral diffuse reflection of the skin of newborns with different values of the biological parameters (bilirubin content, degree of pigmentation and blood content) that characterize it. These simulations allow us to evaluate the influence of these parameters on the experimental determination of bilirubin by noninvasive optical methods. The simulations are made in the spectral range of 400-700 nm using the Monte Carlo code MCML and two programs developed in LabVIEW by the authors. We simulated the diffuse reflection spectrum of neonatal skin for concentrations of bilirubin in skin covering an ample range: from physiological to harmful values. We considered the influence of factors such as degree of pigmentation and blood content.
Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach of photon migration in complex turbid media, such as biological tissues we present a peer-to-peer (P2P) Monte Carlo (MC) code. The object-oriented programming is used for generalization of MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight, ASP.NET. The emerging P2P network utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs) is applied for acceleration and to overcome the limitations, imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for semi-infinite scattering medium with known analytical results, results of adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests in a range of 4 to 35 s was achieved using single-precision computing, and the double-precision computing for floating-point arithmetic operations provides higher accuracy.
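The photon-migration Monte Carlo method validated above can be illustrated by a minimal serial sketch for total diffuse reflectance from a semi-infinite medium with isotropic scattering (a didactic simplification: refractive-index mismatch, anisotropy and variance-reduction techniques such as Russian roulette are omitted):

```python
import math
import random

def diffuse_reflectance(mu_s=10.0, mu_a=0.1, n_photons=5000, seed=0):
    """Monte Carlo estimate of the total diffuse reflectance of a semi-infinite
    turbid medium with isotropic scattering.  mu_s and mu_a are the scattering
    and absorption coefficients (per mm); photon packets are launched normally
    into the medium at z = 0 and tracked in depth only."""
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    reflected = 0.0
    for _ in range(n_photons):
        z, uz, weight = 0.0, 1.0, 1.0
        while True:
            # free path sampled from the Beer-Lambert (exponential) distribution
            step = -math.log(1.0 - rng.random()) / mu_t
            z += uz * step
            if z < 0.0:
                reflected += weight  # packet escaped back through the surface
                break
            weight *= albedo         # deposit the absorbed fraction per event
            if weight < 1e-3:
                break                # packet extinguished
            uz = 2.0 * rng.random() - 1.0  # isotropic scattering in cos(theta)
    return reflected / n_photons
```

Increasing absorption lowers the single-scattering albedo and hence the reflectance, the basic contrast mechanism exploited in tissue optics; the GPU codes described above parallelize this per-photon loop.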
Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade
2014-01-01
This article presents a way to obtain dose estimates for patients submitted to radiotherapy based on the analysis of regions of interest in nuclear medicine images. A software tool called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs, generating graphical and numerical results. The nuclear medicine images utilized in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled to the EGSnrc Monte Carlo code. The software was developed with the Microsoft Visual Studio 2010 Service Pack and the Windows Presentation Foundation project template for the C# programming language. With these tools, the authors obtained the file for optimization of Monte Carlo simulations using the EGSnrc; organization and compaction of dosimetry results with all radioactive sources; selection of regions of interest; evaluation of grayscale intensity in regions of interest; the file of weighted sources; and, finally, all the charts and numerical results. The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity.
Nuclear analysis of structural damage and nuclear heating on enhanced K-DEMO divertor model
NASA Astrophysics Data System (ADS)
Park, J.; Im, K.; Kwon, S.; Kim, J.; Kim, D.; Woo, M.; Shin, C.
2017-12-01
This paper addresses nuclear analysis of the Korean fusion demonstration reactor (K-DEMO) divertor to estimate the overall trend of nuclear heating values and displacement damages. The K-DEMO divertor model was created and converted by CAD (Pro-Engineer™) and Monte Carlo automatic modeling programs as a 22.5° sector of the tokamak. The Monte Carlo neutron-photon transport and ADVANTG codes were used in this calculation with the FENDL-2.1 nuclear data library. The calculation results indicate that the highest values appear on the upper outboard target (OT) area, which means the OT is exposed to the highest radiation conditions among the three plasma-facing parts (inboard, central and outboard) of the divertor. In particular, the lower part of the OT area shows much lower nuclear heating values and displacement damages than the other parts. These are important results contributing to thermal-hydraulic and thermo-mechanical analyses of the divertor, and it is expected that copper alloy may be partially used as a heat sink at the lower part of the OT, instead of reduced-activation ferritic-martensitic steel, owing to copper alloy's high thermal conductivity.
Pool, René; Heringa, Jaap; Hoefling, Martin; Schulz, Roland; Smith, Jeremy C; Feenstra, K Anton
2012-05-05
We report on a python interface to the GROMACS molecular simulation package, GromPy (available at https://github.com/GromPy). This application programming interface (API) uses the ctypes python module that allows function calls to shared libraries, for example, written in C. To the best of our knowledge, this is the first reported interface to the GROMACS library that uses direct library calls. GromPy can be used for extending the current GROMACS simulation and analysis modes. In this work, we demonstrate that the interface enables hybrid Monte-Carlo/molecular dynamics (MD) simulations in the grand-canonical ensemble, a simulation mode that is currently not implemented in GROMACS. For this application, the interplay between GromPy and GROMACS requires only minor modifications of the GROMACS source code, not affecting the operation, efficiency, and performance of the GROMACS applications. We validate the grand-canonical application against MD in the canonical ensemble by comparison of equations of state. The results of the grand-canonical simulations are in complete agreement with MD in the canonical ensemble. The python overhead of the grand-canonical scheme is only minimal. Copyright © 2012 Wiley Periodicals, Inc.
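The ctypes mechanism that GromPy relies on can be illustrated with a minimal, self-contained sketch that loads the standard C math library in place of the GROMACS shared library (the GROMACS call signatures themselves are not reproduced here):

```python
import ctypes
import ctypes.util

# Locate and load a shared C library -- here the C math library, standing in
# for libgromacs in the GromPy case.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# ctypes assumes int arguments and return values by default, so the C
# signature double sqrt(double) must be declared explicitly -- an interface
# like GromPy has to do the same for every library call it wraps.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

root = libm.sqrt(2.0)  # direct call into the shared library
```

The same pattern (load library, declare argtypes/restype, call) extends to passing structs and pointers, which is what lets a Python driver steer a C simulation loop without modifying the library.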
Maria Jose, Gonzalez Torres; Jürgen, Henniger
2018-01-01
In order to expand the Monte Carlo transport program AMOS to particle therapy applications, an ion module is being developed in the radiation physics group (ASP) at TU Dresden. This module simulates the three main interactions of ions in matter for the therapy energy range: elastic scattering, inelastic collisions and nuclear reactions. The simulation of elastic scattering is based on the Binary Collision Approximation and that of inelastic collisions on the Bethe-Bloch theory. The nuclear reactions, which are the focus of the module, are implemented according to a probability-based model developed in the group. The model uses probability density functions to sample, given the initial energy of the projectile particle, both the occurrence of a nuclear reaction and the energy at which it will take place. The particle is transported until the reaction energy is reached, and then the nuclear reaction is simulated. This approach allows a fast evaluation of the nuclear reactions. The theory and application of the proposed model will be addressed in this presentation. The results of the simulation of a proton beam colliding with tissue will also be presented. Copyright © 2017.
Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M
2012-08-01
This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
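The virtual-data-set procedure described above is not tied to Excel; the following hypothetical sketch fits a straight line, builds many 'virtual' data sets from the fitted model plus resampled noise, and reads a 95% confidence interval for the slope off the percentiles of the refitted estimates (the data and noise level are invented for illustration):

```python
import random
import statistics

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

rng = random.Random(42)
xs = [i * 0.5 for i in range(20)]
ys = [2.0 * x + 1.0 + rng.gauss(0, 0.3) for x in xs]  # noisy data, true slope 2

a0, b0 = fit_line(xs, ys)
resid_sd = statistics.stdev(y - (a0 * x + b0) for x, y in zip(xs, ys))

# Monte Carlo step: refit many 'virtual' data sets generated from the fitted
# model plus Gaussian noise at the residual level, then take percentiles.
slopes = []
for _ in range(1000):
    ys_virtual = [a0 * x + b0 + rng.gauss(0, resid_sd) for x in xs]
    slopes.append(fit_line(xs, ys_virtual)[0])
slopes.sort()
ci = (slopes[25], slopes[974])  # ~95% confidence interval for the slope
```

SOLVER plays the role of `fit_line` in the spreadsheet version; the parametric resampling and percentile step are the same.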
NASA Technical Reports Server (NTRS)
Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.
2014-01-01
This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty three variables which could impact the performance of the motors during the ignition transient and thirty eight variables which could impact the performance of the motors during steady state operation of the motor were identified and treated as statistical variables for the analyses. The effects of motor to motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.
Differential pencil beam dose computation model for photons.
Mohan, R; Chui, C; Lidofsky, L
1986-01-01
Differential pencil beam (DPB) is defined as the dose distribution relative to the position of the first collision, per unit collision density, for a monoenergetic pencil beam of photons in an infinite homogeneous medium of unit density. We have generated DPB dose distribution tables for a number of photon energies in water using the Monte Carlo method. The three-dimensional (3D) nature of the transport of photons and electrons is automatically incorporated in DPB dose distributions. Dose is computed by evaluating 3D integrals of DPB dose. The DPB dose computation model has been applied to calculate dose distributions for 60Co and accelerator beams. Calculations for the latter are performed using energy spectra generated with the Monte Carlo program. To predict dose distributions near the beam boundaries defined by the collimation system as well as blocks, we utilize the angular distribution of incident photons. Inhomogeneities are taken into account by attenuating the primary photon fluence exponentially utilizing the average total linear attenuation coefficient of intervening tissue, by multiplying photon fluence by the linear attenuation coefficient to yield the number of collisions in the scattering volume, and by scaling the path between the scattering volume element and the computation point by an effective density.
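A one-dimensional caricature of the DPB superposition can make the structure of the 3D integral concrete: attenuate the primary fluence, convert it to a first-collision density, and superpose a dose-spread kernel over all collision sites. The kernel and attenuation coefficient below are invented for illustration, not the paper's Monte Carlo tables.

```python
import math

mu = 0.05          # linear attenuation coefficient (1/cm), illustrative value
dz = 0.5           # depth-bin width (cm)
depths = [i * dz for i in range(60)]

# Primary photon fluence and first-collision density along the beam axis
fluence = [math.exp(-mu * z) for z in depths]
collisions = [mu * f * dz for f in fluence]  # collisions per bin

def dpb_kernel(r):
    """Toy stand-in for a DPB table: dose per unit collision density
    deposited at distance r from the collision site."""
    return math.exp(-1.5 * abs(r))

# Dose = superposition (here a 1-D convolution) of the kernel over all
# collision sites; in the paper this is a full 3-D integral.
dose = [sum(c * dpb_kernel(z - zp) for zp, c in zip(depths, collisions))
        for z in depths]
```

Even this caricature reproduces the qualitative build-up region near the surface, because a shallow point receives kernel contributions from only one side.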
Solid propellant rocket motor internal ballistics performance variation analysis, phase 3
NASA Technical Reports Server (NTRS)
Sforzini, R. H.; Foster, W. A., Jr.; Murph, J. E.; Adams, G. W., Jr.
1977-01-01
Results of research aimed at improving the predictability of off-nominal internal ballistics performance of solid propellant rocket motors (SRMs), including thrust imbalance between two SRMs firing in parallel, are reported. The potential effects of nozzle throat erosion on internal ballistic performance were studied and a propellant burning rate law postulated. The propellant burning rate model, when coupled with the grain deformation model, permits an excellent match between theoretical results and test data for the Titan IIIC, TU455.02, and the first Space Shuttle SRM (DM-1). Analysis of star grain deformation using an experimental model and a finite element model shows the star grain deformation effects for the Space Shuttle to be small in comparison with those of the circular perforated grain. An alternative technique was developed for predicting thrust imbalance without recourse to the Monte Carlo computer program. A scaling relationship used to relate theoretical results to test results may be applied to the alternative technique of predicting thrust imbalance or to the Monte Carlo evaluation. Extended investigation into the effect of strain rate on propellant burning rate leads to the conclusion that the thermoelastic effect is generally negligible for both steadily increasing pressure loads and oscillatory loads.
An Approach in Radiation Therapy Treatment Planning: A Fast, GPU-Based Monte Carlo Method.
Karbalaee, Mojtaba; Shahbazi-Gahrouei, Daryoush; Tavakoli, Mohammad B
2017-01-01
An accurate and fast radiation dose calculation is essential for successful radiotherapy. The aim of this study was to implement a new graphics processing unit (GPU)-based radiation therapy treatment planning system for accurate and fast dose calculation in radiotherapy centers. A program was written to run in parallel on a GPU, and the code was validated against EGSnrc/DOSXYZnrc. Moreover, a semi-automatic, rotary, asymmetric phantom was designed and produced using bone, lung, and soft-tissue equivalent materials. All measurements were performed using a Mapcheck dosimeter. The accuracy of the code was validated using the experimental data obtained from the anthropomorphic phantom as the gold standard. The findings showed that, compared with DOSXYZnrc in the virtual phantom, most of the voxels (>95%) satisfied a <3% dose difference or a 3 mm distance-to-agreement (DTA) criterion. Moreover, in the anthropomorphic phantom, a <5% dose difference or 5 mm DTA was observed relative to the Mapcheck dose measurements. The fast calculation speed and high accuracy of the GPU-based Monte Carlo method may make it useful in routine radiation therapy centers as the core component of a treatment planning verification system.
Neon ion beam induced pattern formation on amorphous carbon surfaces
NASA Astrophysics Data System (ADS)
Bobes, Omar; Hofsäss, Hans; Zhang, Kun
2018-02-01
We investigate ripple pattern formation on amorphous carbon surfaces at room temperature during low-energy Ne ion irradiation as a function of the ion incidence angle. Monte Carlo simulations of the curvature coefficients, applied to the Bradley-Harper and Carter-Vishnyakov models and including the recent extensions by Harrison-Bradley and Hofsäss, predict that pattern formation on amorphous carbon thin films should be possible for low-energy Ne ions from 250 eV up to 1500 eV. Moreover, the simulations are able to explain the absence of pattern formation in certain cases. Our experimental results are compared with predictions of the current linear theoretical models, applying the crater function formalism as well as Monte Carlo simulations to calculate curvature coefficients with the SDTrimSP program. The calculations indicate that no patterns should be generated up to a 45° incidence angle if the dynamic behavior of the thickness of the ion-irradiated layer introduced by Hofsäss is taken into account, while pattern formation is most pronounced from 50° onward for ion energies between 250 eV and 1500 eV, in good agreement with our experimental data.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-30
... DEPARTMENT OF STATE [Public Notice 8445] Culturally Significant Objects Imported for Exhibition Determinations: ``Venetian Glass by Carlo Scarpa: The Venini Company, 1932-1947'' SUMMARY: Notice is hereby given... objects to be included in the exhibition ``Venetian Glass by Carlo Scarpa: The Venini Company, 1932-1947...
A Primer in Monte Carlo Integration Using Mathcad
ERIC Educational Resources Information Center
Hoyer, Chad E.; Kegerreis, Jeb S.
2013-01-01
The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
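The essence of such a primer fits in a few lines of code; the sketch below estimates the integral of x² on [0, 1] (exact value 1/3) as the interval length times the mean of the integrand at uniformly sampled points:

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=7):
    """Estimate the integral of f on [a, b] as (b - a) times the sample
    mean of f at n uniformly distributed points -- the core idea of
    Monte Carlo integration."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

est = mc_integrate(lambda x: x * x, 0.0, 1.0)  # exact value is 1/3
```

The statistical error shrinks as 1/√n regardless of dimension, which is the pedagogical point such exercises usually emphasize.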
An Intentional Laboratory: The San Carlos Charter Learning Center.
ERIC Educational Resources Information Center
Darwish, Elise
2000-01-01
Describes the San Carlos Charter Learning Center, a K-8 school chartered by the San Carlos, California, school district to be a research and development site. It has successfully shared practices in multi-age groupings, interdisciplinary instruction, parents as teachers, and staff evaluation. The article expands on the school's challenges and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karthikeyan, N., E-mail: karthin10@gmail.com; Sivakumar, K.; Pachamuthu, M. P.
We focus on the application of powder diffraction data to ab initio crystal structure determination of a thiophene-derived 1,4-DHP prepared by a cyclocondensation method using a solid catalyst. The crystal structure of the compound has been solved by a direct-space approach based on a Monte Carlo search in parallel-tempering mode using the FOX program. Initial atomic coordinates were derived using the Gaussian 09W quantum chemistry software in a semi-empirical approach, and Rietveld refinement was carried out using the GSAS program. The crystal structure of the compound is stabilized by one N-H…O and three C-H…O hydrogen bonds. A PIXEL lattice energy calculation was carried out to understand the physical nature of the intermolecular interactions in the crystal packing, in which the total lattice energy is partitioned into Coulombic, polarization, dispersion, and repulsion energies.
CheckMATE 2: From the model to the limit
NASA Astrophysics Data System (ADS)
Dercks, Daniel; Desai, Nishita; Kim, Jong Soo; Rolbiecki, Krzysztof; Tattersall, Jamie; Weber, Torsten
2017-12-01
We present the latest developments to the CheckMATE program that allows models of new physics to be easily tested against the recent LHC data. To achieve this goal, the core of CheckMATE now contains over 60 LHC analyses of which 12 are from the 13 TeV run. The main new feature is that CheckMATE 2 now integrates the Monte Carlo event generation via MadGraph5_aMC@NLO and Pythia 8. This allows users to go directly from a SLHA file or UFO model to the result of whether a model is allowed or not. In addition, the integration of the event generation leads to a significant increase in the speed of the program. Many other improvements have also been made, including the possibility to now combine signal regions to give a total likelihood for a model.
Effects of the specular Orbiter forward radiators on a typical Spacelab payload thermal environment
NASA Technical Reports Server (NTRS)
Turner, L. D.; Humphries, W. R.; Littles, J. W.
1981-01-01
Orbiter radiators, having a specular reflection, must be considered when determining the design environment for payloads that can view the forward deployed radiators. Unlike most surfaces on the Orbiter, which reflect energy diffusely, the radiators are covered with a highly specular silverized Teflon material with high emissivity and have a concave contour, producing a local concentration of reflected energy in the direction of the angle of incidence. The combined effects of radiator specularity and geometry were analyzed using the Thermal Radiation Analysis System (TRASYS II), a specialized ray-trace program, and a generalized Monte-Carlo-based thermal radiation program. Data given for a 0 deg payload inclination angle at orbital noon at 3.454 m indicate that the maximum total flux and average flux can increase by 173% and 63%, respectively, when compared to diffuse radiators.
An unbiased Hessian representation for Monte Carlo PDFs.
Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan
We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) that has been transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) Hessian representations of the NNPDF3.0 set and the MC-H PDF set.
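A highly simplified sketch of the underlying replicas-to-eigenvectors idea (using a plain SVD of the replica deviations rather than the paper's replica basis and genetic algorithm, and toy data in place of real PDFs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'Monte Carlo PDF set': 100 replicas of a function sampled at 20
# x-points, generated from a 3-dimensional underlying parameter space.
basis = rng.normal(size=(3, 20))
replicas = 1.0 + rng.normal(size=(100, 3)) @ basis * 0.05

central = replicas.mean(axis=0)
dev = (replicas - central) / np.sqrt(len(replicas) - 1)

# SVD of the deviation matrix yields orthogonal directions; the leading
# ones define Hessian-style eigenvector members that reproduce the MC
# covariance (here the data has exact rank 3, so 3 members suffice).
u, s, vt = np.linalg.svd(dev, full_matrices=False)
n_eig = 3
eig_members = central + s[:n_eig, None] * vt[:n_eig]

# 1-sigma errors from the Hessian members vs. directly from the replicas
hess_err = np.sqrt(((eig_members - central) ** 2).sum(axis=0))
mc_err = replicas.std(axis=0, ddof=1)
```

The real mc2hessian method differs in that the basis is built from actual replicas and optimized by a genetic algorithm, but the covariance-matching criterion is the same.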
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that in different scenarios the region of highest intensity, and thus the greatest heating, can shift from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
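One hedged way to realize such a focusing launch geometry (an assumption for illustration, not necessarily the authors' exact sampling rule) is to start each ray from a Gaussian spot at the surface and aim it at a point drawn from a Gaussian of the waist size in the focal plane, so the ray bundle reproduces both the entrance width and a finite focal spot:

```python
import math
import random

def launch_gaussian_ray(beam_radius, waist_radius, focal_depth, rng):
    """Sample one ray of a focused beam: start from a Gaussian spot at the
    surface (z = 0) and aim at a point drawn from a Gaussian of the waist
    size in the focal plane at z = focal_depth.
    Radii are 1/e^2 intensity radii, so the per-axis sigma is radius/2."""
    x0 = rng.gauss(0, beam_radius / 2)
    y0 = rng.gauss(0, beam_radius / 2)
    xf = rng.gauss(0, waist_radius / 2)
    yf = rng.gauss(0, waist_radius / 2)
    dx, dy, dz = xf - x0, yf - y0, focal_depth
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (x0, y0, 0.0), (dx / norm, dy / norm, dz / norm)

rng = random.Random(3)
# In a clear medium the rays should cross the focal plane in a spot of
# the requested waist size rather than a geometric point.
radii = []
for _ in range(20000):
    (x0, y0, _), (ux, uy, uz) = launch_gaussian_ray(1.0, 0.1, 5.0, rng)
    t = 5.0 / uz
    radii.append(math.hypot(x0 + t * ux, y0 + t * uy))
```

In a scattering medium each ray would then be handed to the usual photon-transport loop unchanged, which is what makes this kind of fix drop-in for existing codes.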
Numerical integration of detector response functions via Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
NOTE: Monte Carlo evaluation of kerma in an HDR brachytherapy bunker
NASA Astrophysics Data System (ADS)
Pérez-Calatayud, J.; Granero, D.; Ballester, F.; Casal, E.; Crispin, V.; Puchades, V.; León, A.; Verdú, G.
2004-12-01
In recent years, the use of high dose rate (HDR) after-loader machines has greatly increased due to the shift from traditional Cs-137/Ir-192 low dose rate (LDR) brachytherapy to HDR brachytherapy. The method used to calculate the required concrete and, where appropriate, lead shielding in the door is based on analytical methods provided by documents published by the ICRP, the IAEA and the NCRP. The purpose of this study is to perform a more realistic kerma evaluation at the entrance maze door of an HDR bunker using the Monte Carlo code GEANT4. The Monte Carlo results were validated experimentally. The spectrum at the maze entrance door, obtained with Monte Carlo, has an average energy of about 110 keV, maintaining a similar value along the length of the maze. Comparison of the analytical results with the Monte Carlo ones shows that those obtained using the albedo coefficient from the ICRP document most closely match the Monte Carlo values, although the maximum value given by the MC calculations is 30% greater.
Numerical integration of detector response functions via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.
2017-09-01
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
Numerical integration of detector response functions via Monte Carlo simulations
Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.; ...
2017-06-13
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Roberti, Laura
1998-01-01
The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate have in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods, have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treat polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.
Monte Carlo simulations in X-ray imaging
NASA Astrophysics Data System (ADS)
Giersch, Jürgen; Durst, Jürgen
2008-06-01
Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
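The combination of fault-tree enumeration with conditional Monte Carlo can be sketched with a toy model; the component names, failure probabilities, and conditional outcome model below are all hypothetical stand-ins, not the study's actual values or logic:

```python
import random
from itertools import product

# Hypothetical per-encounter failure probabilities for three protection
# layers (loosely inspired by the abstract's sensitivity list).
p_fail = {"transponder": 1e-3, "visual": 5e-2, "detection": 1e-4}

def collision_given(state, rng):
    """Toy conditional MC model: an encounter can end in collision only if
    every protection layer in 'state' has failed, with a residual chance
    that the encounter geometry resolves the conflict anyway."""
    if not all(state.values()):
        return 0.0
    return 1.0 if rng.random() < 0.3 else 0.0

rng = random.Random(11)
risk = 0.0
# Fault-tree part: enumerate the component-failure combinations and weight
# each branch by its exact probability ...
for combo in product([False, True], repeat=3):
    state = dict(zip(p_fail, combo))
    weight = 1.0
    for name, failed in state.items():
        weight *= p_fail[name] if failed else 1.0 - p_fail[name]
    # ... conditional MC part: estimate the outcome for this branch only.
    n = 2000
    cond = sum(collision_given(state, rng) for _ in range(n)) / n
    risk += weight * cond
# Rare branches are sampled exactly as often as common ones, which is the
# variance-reduction advantage over naive Monte Carlo.
```

A naive Monte Carlo estimator would need of order 10⁹ samples to see even one triple failure; the stratified version spends its samples on the conditional model instead.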
Discrete Diffusion Monte Carlo for Electron Thermal Transport
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory
2014-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid Implicit Monte Carlo/DDMC (IMC-DDMC) method. The hybrid method will combine the efficiency of a diffusion method in short-mean-free-path regions with the accuracy of a transport method in long-mean-free-path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratories, Albuquerque.
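The discrete-diffusion idea can be sketched generically (a plain symmetric random walk on a 1-D grid of cells, not the iSNB-DDMC scheme itself): particles hop between cells until absorbed at a boundary, and ensemble averages estimate the diffusion solution.

```python
import random

random.seed(1)

# Particles start in a cell of a 1-D grid and hop to a random neighbor
# each step (isotropic discrete diffusion); they are absorbed on
# reaching either boundary cell.
def walk_to_boundary(n_cells, start):
    pos = start
    while 0 < pos < n_cells - 1:
        pos += random.choice((-1, 1))
    return pos

n_cells = 11
n_particles = 5000
right_exits = sum(walk_to_boundary(n_cells, n_cells // 2) == n_cells - 1
                  for _ in range(n_particles))
p_right = right_exits / n_particles
print(p_right)  # symmetric start => close to 0.5
```

Because each particle history is independent, this kind of loop parallelizes trivially, which is the "massively parallelized" property the abstract refers to.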
Cell-veto Monte Carlo algorithm for long-range systems.
Kapfer, Sebastian C; Krauth, Werner
2016-09-01
We present a rigorous, efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take periodic boundary conditions into account. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.
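The factorized Metropolis filter underlying the cell-veto scheme can be sketched as follows. The pair potential and all numbers are invented for illustration, and this naive version still loops over all pairs; the cell-veto construction is precisely what replaces that loop with a fixed number of operations per move.

```python
import math
import random

random.seed(7)
BETA = 1.0

def pair_energy(x_i, x_j):
    # Assumed toy pair potential: inverse-power-law repulsion in 1-D.
    return 1.0 / abs(x_i - x_j)

def factorized_metropolis_accept(positions, i, new_x):
    # The move is accepted only if EVERY pair factor accepts
    # independently; a single factor can veto the whole move.
    for j, x_j in enumerate(positions):
        if j == i:
            continue
        dE = pair_energy(new_x, x_j) - pair_energy(positions[i], x_j)
        if random.random() >= min(1.0, math.exp(-BETA * dE)):
            return False   # vetoed by the factor (i, j)
    return True

positions = [0.0, 1.0, 2.5, 4.0]
new_x = positions[1] + random.uniform(-0.3, 0.3)
accepted = factorized_metropolis_accept(positions, 1, new_x)
print(accepted)
```

Since each factor decides independently, one only needs to sample *which* particle (if any) vetoes, which is what the cell-veto bookkeeping makes an O(1) operation.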
Nuclide Depletion Capabilities in the Shift Monte Carlo Code
Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...
2017-12-21
A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.
Ground state of excitonic molecules by the Green's-function Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M.A.; Vashishta, P.; Kalia, R.K.
1983-12-26
The ground-state energy of excitonic molecules is evaluated as a function of the ratio of electron and hole masses, sigma, with use of the Green's-function Monte Carlo method. For all sigma, the Green's-function Monte Carlo energies are significantly lower than the variational estimates and in favorable agreement with experiments. In excitonic Rydbergs, the binding energy of the positronium molecule (sigma = 1) is predicted to be -0.06 and, for sigma << 1, the Green's-function Monte Carlo energies agree with the "exact" limiting behavior, E = -2.346 + 0.764 sigma.
NASA Astrophysics Data System (ADS)
Alexander, Andrew William
Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP) provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. Energy- and intensity-modulated electron radiotherapy (MERT) is a developing treatment modality with the fundamental capability to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform-independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose-volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP.
The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.
HepML, an XML-based format for describing simulated data in high energy physics
NASA Astrophysics Data System (ADS)
Belov, S.; Dudko, L.; Kekelidze, D.; Sherstnev, A.
2010-10-01
In this paper we describe the HepML format and a corresponding C++ library developed for keeping a complete description of parton-level events in a unified and flexible form. HepML tags contain enough information to understand what kind of physics the simulated events describe and how the events have been prepared. A HepML block can be included into event files in the LHEF format. The structure of the HepML block is described by means of several XML Schemas. The Schemas define necessary information for the HepML block and how this information should be located within the block. The library libhepml is a C++ library intended for parsing and serialization of HepML tags, and representing the HepML block in computer memory. The library is an API for external software. For example, Matrix Element Monte Carlo event generators can use the library for preparing and writing a header of an LHEF file in the form of HepML tags. In turn, Showering and Hadronization event generators can parse the HepML header and get the information in the form of C++ classes. libhepml can be used in C++, C, and Fortran programs. All necessary parts of HepML have been prepared and we present the project to the HEP community. Program summary Program title: libhepml Catalogue identifier: AEGL_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPLv3 No. of lines in distributed program, including test data, etc.: 138 866 No.
of bytes in distributed program, including test data, etc.: 613 122 Distribution format: tar.gz Programming language: C++, C Computer: PCs and workstations Operating system: Scientific Linux CERN 4/5, Ubuntu 9.10 RAM: 1 073 741 824 bytes (1 GB) Classification: 6.2, 11.1, 11.2 External routines: Xerces XML library ( http://xerces.apache.org/xerces-c/), Expat XML Parser ( http://expat.sourceforge.net/) Nature of problem: Monte Carlo simulation in high energy physics is divided into several stages. Various programs exist for these stages. In this article we are interested in interfacing different Monte Carlo event generators via data files, in particular, Matrix Element (ME) generators and Showering and Hadronization (SH) generators. There is a widely accepted format for data files for such interfaces - Les Houches Event Format (LHEF). Although information kept in an LHEF file is enough for proper working of SH generators, it is insufficient for understanding how events in the LHEF file have been prepared and which physical model has been applied. In this paper we propose an extension of the format for keeping additional information available in generators. We propose to add a new information block, marked up with XML tags, to the LHEF file. This block describes events in the file in more detail. In particular, it stores information about a physical model, kinematical cuts, generator, etc. This helps to make LHEF files self-documented. Certainly, HepML can be applied in a more general context, not only in LHEF files. Solution method: In order to overcome drawbacks of the original LHEF accord we propose to add a new information block of HepML tags. HepML is an XML-based markup language. We designed several XML Schemas for all tags in the language. Any HepML document should follow rules of the Schemas. The language is equipped with a library for operation with HepML tags and documents.
This C++ library, called libhepml, consists of classes for HepML objects, which represent a HepML document in computer memory, parsing classes, serialization classes, and some auxiliary classes. Restrictions: The software is adapted for solving the problems described in the article. There are no additional restrictions. Running time: Tests have been done on a computer with an Intel(R) Core(TM)2 Solo, 1.4 GHz. Parsing of a HepML file: 6 ms (HepML file size: 12.5 KB). Writing of a HepML block to file: 14 ms (file size: 12.5 KB). Merging of two HepML blocks and writing to file: 18 ms (file size: 25.0 KB).
Computing Interactions Of Free-Space Radiation With Matter
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.; Townsend, L. W.; Badavi, F. F.; Tripathi, R. K.; Silberberg, R.; Tsao, C. H.; Badwar, G. D.
1995-01-01
High Charge and Energy Transport (HZETRN) computer program computationally efficient, user-friendly package of software addressing problem of transport of, and shielding against, radiation in free space. Designed as "black box" for design engineers not concerned with physics of underlying atomic and nuclear radiation processes in free-space environment, but rather primarily interested in obtaining fast and accurate dosimetric information for design and construction of modules and devices for use in free space. Computational efficiency achieved by unique algorithm based on deterministic approach to solution of Boltzmann equation rather than computationally intensive statistical Monte Carlo method. Written in FORTRAN.
Accelerating Pseudo-Random Number Generator for MCNP on GPU
NASA Astrophysics Data System (ADS)
Gong, Chunye; Liu, Jie; Chi, Lihua; Hu, Qingfeng; Deng, Li; Gong, Zhenghu
2010-09-01
Pseudo-random number generators (PRNGs) are intensively used in many stochastic algorithms in particle simulations, artificial neural networks and other scientific computation. The PRNG in the Monte Carlo N-Particle Transport Code (MCNP) requires a long period, high quality, flexible jump-ahead and high speed. In this paper, we implement such a PRNG for MCNP on NVIDIA's GTX200 Graphics Processing Units (GPUs) using the CUDA programming model. Results show that speedups of 3.80 to 8.10 are achieved compared with 4- to 6-core CPUs, and more than 679.18 million double-precision random numbers can be generated per second on the GPU.
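The "flexible jump" requirement refers to jump-ahead: advancing a linear congruential generator by n steps in O(log n) work so that each GPU thread can start at a disjoint point of the stream. A minimal sketch of the standard trick, composing the affine map x → (a·x + c) mod m by repeated squaring (the constants are illustrative, not necessarily MCNP's actual parameters):

```python
def lcg_step(a, c, m, x):
    # One ordinary LCG step.
    return (a * x + c) % m

def lcg_jump(a, c, m, x, n):
    """Advance an LCG by n steps in O(log n) multiplications."""
    A, C = 1, 0                                 # accumulated map: identity
    while n:
        if n & 1:                               # fold current base map in
            A, C = (a * A) % m, (a * C + c) % m
        a, c = (a * a) % m, (a * c + c) % m     # square the base map
        n >>= 1
    return (A * x + C) % m

# Illustrative 63-bit LCG constants (an assumption, not MCNP's own spec).
a, c, m = 2806196910506780709, 1, 2**63
x = 1
# Jump 10**6 steps in ~20 multiplications instead of 10**6 steps.
fast = lcg_jump(a, c, m, x, 10**6)
print(fast)
```

On a GPU, thread k would call `lcg_jump` once with n = k·stride and then generate its substream with ordinary `lcg_step` calls.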
Hyper-X Stage Separation: Simulation Development and Results
NASA Technical Reports Server (NTRS)
Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.
2001-01-01
This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14 degree of freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.
Project MINERVA's Follow-up on Wide-Field, Small Telescope Photometry to Identify Exoplanets
NASA Astrophysics Data System (ADS)
Houghton, Audrey; Henderson, Morgan; Johnson, Samson; Sergi, Anthony; Eastman, Jason D.; Beatty, Thomas G.; McCrady, Nate
2017-01-01
MINERVA is an array of four 0.7-m telescopes equipped for high precision photometry and spectroscopy dedicated to exoplanet observations. During the first 18 months of science operations, MINERVA engaged in a program of photometric follow-up of potential transiting exoplanet targets identified by the Kilodegree Extremely Little Telescope (KELT). Robotically-obtained observations are passed through our data reduction pipeline and we extract light curves via differential photometry. We seek transit signals via a Markov chain Monte Carlo fit using BATMAN. We discuss results for over 100 target stars analyzed to date.
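A Markov chain Monte Carlo transit fit of the kind described can be sketched in miniature. The box-shaped transit model and every number below are toy stand-ins for the BATMAN light-curve model and real KELT photometry; only the Metropolis sampling logic is the point.

```python
import math
import random

random.seed(3)

# Synthetic light curve: a box-shaped transit of depth 0.01 plus noise.
def model(t, depth):
    return 1.0 - depth if 0.4 < t < 0.6 else 1.0

times = [i / 200 for i in range(200)]
true_depth, sigma = 0.01, 0.002
flux = [model(t, true_depth) + random.gauss(0, sigma) for t in times]

def log_like(depth):
    return -0.5 * sum((f - model(t, depth)) ** 2
                      for t, f in zip(times, flux)) / sigma**2

# Metropolis sampler over the single parameter `depth`.
depth, samples = 0.005, []
ll = log_like(depth)
for _ in range(4000):
    prop = depth + random.gauss(0, 0.001)
    ll_prop = log_like(prop)
    if math.log(random.random()) < ll_prop - ll:   # accept/reject
        depth, ll = prop, ll_prop
    samples.append(depth)

est = sum(samples[1000:]) / len(samples[1000:])    # discard burn-in
print(est)  # posterior mean near the injected depth of 0.01
```

A production fit would sample period, mid-transit time, impact parameter and limb darkening jointly, but the accept/reject core is the same.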
Accuracy of remotely sensed data: Sampling and analysis procedures
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Oderwald, R. G.; Mead, R. A.
1982-01-01
A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given. A listing of the computer program written to implement these techniques is given. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. The results of matrices from the mapping effort of the San Juan National Forest is given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.
LES, DNS and RANS for the analysis of high-speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Adumitroaie, V.; Colucci, P. J.; Taulbee, D. B.; Givi, P.
1995-01-01
The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES), direct numerical simulation (DNS), and Reynolds averaged Navier Stokes (RANS) methods for the computational analysis of high-speed reacting turbulent flows. In the second phase of this work, covering the period 1 Aug. 1994 - 31 Jul. 1995, we have focused our efforts on two programs: (1) developments of explicit algebraic moment closures for statistical descriptions of compressible reacting flows and (2) development of Monte Carlo numerical methods for LES of chemically reacting flows.
A Survey and Analysis of Frameworks and Framework Issues for Information Fusion Applications
NASA Astrophysics Data System (ADS)
Llinas, James
This paper was stimulated by the proposed project for the Santander Bank-sponsored "Chairs of Excellence" program in Spain, of which the author is a recipient. That project involves research on characterizing a robust, problem-domain-agnostic framework in which Information Fusion (IF) processes of all descriptions, including artificial intelligence processes and techniques, could be developed. The paper describes the IF process and its requirements, a literature survey on IF frameworks, and a new proposed framework that will be implemented and evaluated at Universidad Carlos III de Madrid, Colmenarejo Campus.
GEANT4 Simulation of Hadronic Interactions at 8-GeV/C to 10-GeV/C: Response to the HARP-CDP Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uzhinsky, V. (JINR, Dubna; CERN); Apostolakis, J.
2011-11-21
The results of the HARP-CDP group on the comparison of GEANT4 Monte Carlo predictions versus experimental data are discussed. It is shown that the problems observed by the group are caused by an incorrect implementation of old features at the programming level, and by a lack of the nucleon Fermi motion in the simulation of quasielastic scattering. These drawbacks are not due to the physical models used. They do not manifest themselves in the most important applications of the GEANT4 toolkit.
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amman, J. F.; Archbold, G. C.; Bellido, J. A.; Belov, K.; Belz, J. W.; Bergman, D. R.; Cao, Z.; Clay, R. W.; Cooper, M. D.; Dai, H.; Dawson, B. R.; Everett, A. A.; Girard, J. H. V.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hüntemeyer, P.; Jones, B. F.; Jui, C. C. H.; Kieda, D. B.; Kim, K.; Kirn, M. A.; Loh, E. C.; Manago, N.; Marek, L. J.; Martens, K.; Martin, G.; Matthews, J. A. J.; Matthews, J. N.; Meyer, J. R.; Moore, S. A.; Morrison, P.; Moosman, A. N.; Mumford, J. R.; Munro, M. W.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Sarracino, J. S.; Schnetzer, S.; Shen, P.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, S. B.; Thompson, T. N.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; VanderVeen, T. D.; Zech, A.; Zhang, X.
2005-03-01
We have measured the spectrum of UHE cosmic rays using the Flash ADC (FADC) detector (called HiRes-II) of the High Resolution Fly's Eye experiment running in monocular mode. We describe in detail the data analysis, development of the Monte Carlo simulation program, and results. We also describe the results of the HiRes-I detector. We present our measured spectra and compare them with a model incorporating galactic and extragalactic cosmic rays. Our combined spectra provide strong evidence for the existence of the spectral feature known as the "ankle."
NASA Astrophysics Data System (ADS)
Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.
2008-10-01
The β-microprobe is a simple and versatile technique complementary to small-animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energetic metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of the 511 keV gamma-ray background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method of evaluating several parameters which are supposed to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previous β-microprobe reports. In the context of realistic experiments (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG and H215O blood flow measurements in the somatosensory cortex), we have calculated the detection efficiencies and the corresponding contribution of 511 keV gammas from accumulation in peripheral organs. We confirmed that the 511 keV gamma background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows us to quantify the different sources of beta signal.
Finally, since stereotaxic accuracy is crucial for quantification in most microprobe studies, the influence of stereotaxic positioning error was studied for several realistic experiments in favorable and unfavorable experimental situations (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; binding of 18F-MPPF to 5HT1A receptors in the dorsal raphe nucleus).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, D; Jung, J; Suh, T
2014-06-01
Purpose: To confirm the feasibility of acquiring a three-dimensional single photon emission computed tomography (SPECT) image from boron neutron capture therapy (BNCT) using Monte Carlo simulation. Methods: The pixelated SPECT detector, collimator and phantom were simulated using the Monte Carlo N-Particle eXtended (MCNPX) simulation tool. A thermal neutron source (<1 eV) was used to react with the boron uptake region (BUR) in the phantom. Each geometry had a spherical pattern, and three different BURs (A, B and C regions, density: 2.08 g/cm3) were located in the middle of the brain phantom. The data from 128 projections for each sorting process were used to achieve image reconstruction. The ordered-subset expectation maximization (OSEM) reconstruction algorithm was used to obtain a tomographic image with eight subsets and five iterations. Receiver operating characteristic (ROC) curve analysis was used to evaluate the geometric accuracy of the reconstructed image. Results: The OSEM image was compared with the original phantom pattern image. The area under the curve (AUC) was calculated as the gross area under each ROC curve. The three calculated AUC values were 0.738 (A region), 0.623 (B region), and 0.817 (C region). The differences between the center-to-center distances of two boron regions and the distances between maximum-count points were 0.3 cm, 1.6 cm and 1.4 cm. Conclusion: The possibility of extracting a 3D BNCT SPECT image was confirmed using the Monte Carlo simulation and the OSEM algorithm. The prospects for obtaining an actual BNCT SPECT image were estimated from the quality of the simulated image and the simulation conditions. When multiple tumor regions are to be treated using BNCT, this approach could provide BNCT facilities with a reasonable model for determining how many useful images can be obtained from SPECT.
This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, Information and Communication Technologies (ICT) and Future Planning (MSIP) (Grant No.200900420) and the Radiation Technology Research and Development program (Grant No.2013043498), Republic of Korea.
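With a single subset, the OSEM update used in the reconstruction above reduces to the MLEM iteration x ← x · Aᵀ(y / Ax) / Aᵀ1. A minimal sketch on an invented 2-pixel, 2-detector system (not the 128-projection geometry of the study):

```python
# Tiny system matrix: A[d][p] = probability of detecting in bin d an
# event emitted from pixel p (numbers invented for illustration).
A = [[0.8, 0.2],
     [0.2, 0.8]]
true_x = [4.0, 1.0]
# Noiseless projection data y = A @ true_x.
y = [sum(A[d][p] * true_x[p] for p in range(2)) for d in range(2)]

x = [1.0, 1.0]                      # uniform initial image
for _ in range(100):
    # Forward projection of the current image estimate.
    fwd = [sum(A[d][p] * x[p] for p in range(2)) for d in range(2)]
    # Multiplicative MLEM update, pixel by pixel.
    for p in range(2):
        sens = sum(A[d][p] for d in range(2))          # sensitivity A^T 1
        x[p] *= sum(A[d][p] * y[d] / fwd[d] for d in range(2)) / sens
print(x)  # converges toward [4.0, 1.0]
```

OSEM accelerates this by cycling the same multiplicative update over subsets of the projections (eight subsets, five iterations in the abstract above).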
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
NASA Astrophysics Data System (ADS)
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
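The core idea of null-space Monte Carlo is that random parameter perturbations projected onto the null space of the model Jacobian leave the calibrated fit unchanged. A toy sketch with one observation and two parameters (everything below is invented, far smaller than a real PEST problem):

```python
import math
import random

random.seed(5)

# Toy model: one observation o = p1 + p2, so the Jacobian is J = [1, 1]
# and its null space is spanned by v = (1, -1)/sqrt(2).
calibrated = (2.0, 3.0)            # some calibrated parameter vector
v = (1 / math.sqrt(2), -1 / math.sqrt(2))

def null_space_realization():
    # Draw a random perturbation and project it onto span{v}, so the
    # simulated observation is preserved exactly.
    d1, d2 = random.gauss(0, 1), random.gauss(0, 1)
    alpha = d1 * v[0] + d2 * v[1]
    return (calibrated[0] + alpha * v[0], calibrated[1] + alpha * v[1])

for _ in range(100):
    p1, p2 = null_space_realization()
    assert abs((p1 + p2) - 5.0) < 1e-9   # observation unchanged
print("all realizations reproduce the calibrated observation")
```

In the nonlinear case each projected realization is then re-calibrated with a few extra model runs, which is the bookkeeping pyNSMC automates.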
ERIC Educational Resources Information Center
Mao, Xiuzhen; Xin, Tao
2013-01-01
The Monte Carlo approach, which has previously been implemented in traditional computerized adaptive testing (CAT), is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…
Modifying the Monte Carlo Quiz to Increase Student Motivation, Participation, and Content Retention
ERIC Educational Resources Information Center
Simonson, Shawn R.
2017-01-01
Fernald developed the Monte Carlo Quiz format to enhance retention, encourage students to prepare for class, read with intention, and organize information in psychology classes. This author modified the Monte Carlo Quiz, combined it with the Minute Paper, and applied it to various courses. Students write quiz questions as part of the Minute Paper…
USSR Report, International Affairs
1986-06-23
Carlos Carvajal, Pablo Ramos Sanchez Interview (LATINSKAYA AMERIKA, No 12, Dec 85). Interview with Carlos Carvajal, chairman of the Bolivian Committee for Peace and Democracy, and Pablo Ramos Sanchez, rector of San Andres University, by unnamed LATINSKAYA AMERIKA journalists, date and place not specified. Also listed: work of the Cuban Center for American Studies described.
The Monte Carlo Method. Popular Lectures in Mathematics.
ERIC Educational Resources Information Center
Sobol', I. M.
The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
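The booklet's flavor of introductory example can be given in a few lines: estimate π by sampling random points in the unit square and counting the fraction that lands inside the quarter circle x² + y² ≤ 1 (a standard illustration, not taken from the booklet itself).

```python
import random

random.seed(0)

# Monte Carlo estimate of pi: the quarter circle has area pi/4, so the
# hit fraction times 4 approximates pi.
n = 100_000
inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
             for _ in range(n))
pi_estimate = 4 * inside / n
print(pi_estimate)  # close to 3.14159
```

The statistical error shrinks like 1/√n, which is the characteristic (slow but dimension-independent) convergence of the Monte Carlo method.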
4. Photographic copy of map. San Carlos Irrigation Project, Gila ...
4. Photographic copy of map. San Carlos Irrigation Project, Gila River Indian Reservation, Pinal County, Arizona. Department of the Interior. Office of Indian Affairs. 1940. (Source: SCIP Office, Coolidge, AZ) Photograph is an 8"x10" enlargement from a 4"x5" negative. - San Carlos Irrigation Project, Lands North & South of Gila River, Coolidge, Pinal County, AZ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna
2014-09-15
Purpose: The authors' objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with {sup 99m}Tc-hydrazinonicotinamide-Tyr{sup 3}-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate {sup 99m}Tc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for {sup 131}I, {sup 177}Lu, and {sup 90}Y assuming the same biological half-lives as the {sup 99m}Tc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for {sup 99m}Tc, {sup 131}I, {sup 177}Lu, and {sup 90}Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%.
Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose-volume histograms, with the minimum dose covering 90% of the volume (D90) agreeing within ±3% on average. Conclusions: Several aspects of OLINDA/EXM dose calculation were compared with patient-specific dose estimates obtained using Monte Carlo. Differences in patient anatomy led to large differences in cross-organ doses. However, total organ doses were still in good agreement since most of the deposited dose is due to self-irradiation. Comparison of voxelized doses calculated by Monte Carlo and the voxel S value technique showed that the 3D dose distributions produced by the respective methods are nearly identical.
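The organ-level formalism being compared above computes the absorbed dose to a target region as a sum over source regions of the time-integrated activity coefficient (TIAC) times the S value S(target ← source). A sketch with made-up numbers (the TIACs and S values below are hypothetical, not from the study):

```python
# Hypothetical TIACs (h) and S values (mGy per MBq·h) for illustration.
tiac = {"liver": 1.2, "spleen": 0.4, "kidneys": 0.6}
s_value = {
    ("liver", "liver"):   3.0e-2,   # self-irradiation term
    ("liver", "spleen"):  1.0e-3,   # cross-organ terms
    ("liver", "kidneys"): 8.0e-4,
}

# D(target) = sum over sources of S(target <- source) * TIAC(source)
dose_liver = sum(s_value[("liver", src)] * tiac[src] for src in tiac)
print(dose_liver)
```

Note how the self-irradiation term dominates the sum, which is why the abstract finds total organ doses in good agreement even when cross-organ S values differ substantially between methods.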
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves computing a considerable number of derivatives, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
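The error-propagation application described above can be sketched in one dimension: sample the input distribution, push each sample through the nonlinear transform, and estimate expectation and variance from the transformed sample, with no linearization or derivatives. The transform and parameters are chosen only so the result can be checked against a known closed form.

```python
import math
import random

random.seed(11)

# Input: X ~ N(mu, sigma^2); nonlinear transform Y = exp(X).
mu, sigma = 2.0, 0.1
samples = [random.gauss(mu, sigma) for _ in range(50_000)]
transformed = [math.exp(x) for x in samples]

# Monte Carlo estimates of E[Y] and Var[Y] from the transformed sample.
mean = sum(transformed) / len(transformed)
var = sum((t - mean) ** 2 for t in transformed) / (len(transformed) - 1)

# Lognormal reference for checking: E[e^X] = exp(mu + sigma^2 / 2).
print(mean, math.exp(mu + sigma**2 / 2))
```

A first-order (linearized) propagation would instead give exp(mu) with variance (exp(mu)·sigma)², which is exactly the approximation error the Monte Carlo route avoids.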
NASA Astrophysics Data System (ADS)
Prabhu Verleker, Akshay; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M.
2015-03-01
The purpose of this study is to develop an alternate empirical approach to estimate near-infrared (NIR) photon propagation and quantify optically induced drug release in brain metastasis, without relying on computationally expensive Monte Carlo techniques (the gold standard). Targeted drug delivery with optically induced drug release is a noninvasive means to treat cancers and metastasis. This study is part of a larger project to treat brain metastasis by delivering lapatinib-drug nanocomplexes and activating NIR-induced drug release. The empirical model was developed using a weighted approach to estimate photon scattering in tissues and calibrated using a GPU-based 3D Monte Carlo code. The empirical model was developed and tested against Monte Carlo in optical brain phantoms for pencil beams (width 1 mm) and broad beams (width 10 mm). The empirical algorithm was tested against the Monte Carlo for different albedos, along with the diffusion equation, in simulated brain phantoms resembling white matter (μs' = 8.25 mm^-1, μa = 0.005 mm^-1) and gray matter (μs' = 2.45 mm^-1, μa = 0.035 mm^-1) at a wavelength of 800 nm. The goodness of fit between the two models was determined using the coefficient of determination (R-squared analysis). Preliminary results show the empirical algorithm matches Monte Carlo simulated fluence over a wide range of albedo (0.7 to 0.99), while the diffusion equation fails for lower albedos. The photon fluence generated by the empirical code matched the Monte Carlo in homogeneous phantoms (R^2 = 0.99). While the GPU-based Monte Carlo achieved 300x acceleration compared to earlier CPU-based models, the empirical code is 700x faster than the Monte Carlo for a typical super-Gaussian laser beam.
COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. R. MARTIN; F. B. BROWN
2001-03-01
Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo (IMC) method developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an "exact" method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite-medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.
Use of Fluka to Create Dose Calculations
NASA Technical Reports Server (NTRS)
Lee, Kerry T.; Barzilla, Janet; Townsend, Lawrence; Brittingham, John
2012-01-01
Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to perform calculations with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm^2. Heavy charged-ion radiation, including ions from Z=1 to Z=26 with energies from 0.1 to 10 GeV/nucleon, was simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared against well-known results and against the results of other deterministic and Monte Carlo codes. Results will be presented.
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple-precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature K_c = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
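The Wolff cluster-flipping algorithm used in this study can be illustrated in miniature: grow a cluster of aligned spins from a random seed, adding each aligned neighbor with probability 1 - exp(-2K), then flip the whole cluster. The sketch below uses a deliberately tiny 4x4x4 lattice and arbitrary seeds and update counts, nothing like the paper's production settings.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 4                       # tiny cubic lattice for illustration only
K = 0.2216546               # coupling near the critical value quoted above
spins = rng.choice([-1, 1], size=(L, L, L))
p_add = 1.0 - np.exp(-2.0 * K)   # Wolff bond-activation probability

def wolff_step(s):
    """One Wolff update: grow a cluster of aligned spins and flip it."""
    seed = tuple(rng.integers(0, L, size=3))
    target = s[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        x, y, z = stack.pop()
        # Visit the six nearest neighbors with periodic boundaries.
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            n = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
            if n not in cluster and s[n] == target and rng.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for site in cluster:
        s[site] = -target
    return len(cluster)

sizes = [wolff_step(spins) for _ in range(100)]
```

Near criticality the cluster sizes fluctuate over the whole range from a single site up to the entire lattice, which is what makes the algorithm effective against critical slowing down.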
NASA Astrophysics Data System (ADS)
Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.
2012-06-01
A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed with a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) specifically developed for inverse analysis of experimental data. This model agrees closely with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.
Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy
NASA Astrophysics Data System (ADS)
Sharma, Sanjib
2017-08-01
Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.
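The workhorse of the Bayesian analyses this review surveys is the random-walk Metropolis sampler: propose a perturbed parameter value, then accept it with probability min(1, posterior ratio). A minimal sketch, assuming a Gaussian likelihood with unit variance, a flat prior, and made-up data:

```python
import math
import random

random.seed(0)

# Hypothetical observations, assumed drawn from N(theta, 1); infer theta.
data = [1.2, 0.8, 1.5, 0.9, 1.1]

def log_posterior(theta):
    # Flat prior, so the log posterior is just the Gaussian log likelihood
    # (up to an additive constant that cancels in the acceptance ratio).
    return -0.5 * sum((x - theta) ** 2 for x in data)

# Random-walk Metropolis: propose theta' = theta + N(0, 0.5^2).
theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + random.gauss(0.0, 0.5)
    delta = log_posterior(prop) - log_posterior(theta)
    if delta >= 0 or random.random() < math.exp(delta):
        theta = prop           # accept the move
    chain.append(theta)        # on rejection, the old value is re-recorded

# Discard burn-in, then summarize the posterior.
burned = chain[5_000:]
posterior_mean = sum(burned) / len(burned)
```

With a flat prior the posterior mean should approach the sample mean of the data (1.1 here); the chain, minus burn-in, is a set of draws from the posterior.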
A modified Monte Carlo model for the ionospheric heating rates
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1972-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced apart at distances corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure, their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs, the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
NASA Astrophysics Data System (ADS)
Kim, Jeongnim; Baczewski, Andrew D.; Beaudet, Todd D.; Benali, Anouar; Chandler Bennett, M.; Berrill, Mark A.; Blunt, Nick S.; Josué Landinez Borda, Edgar; Casula, Michele; Ceperley, David M.; Chiesa, Simone; Clark, Bryan K.; Clay, Raymond C., III; Delaney, Kris T.; Dewing, Mark; Esler, Kenneth P.; Hao, Hongxia; Heinonen, Olle; Kent, Paul R. C.; Krogel, Jaron T.; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M. Graham; Luo, Ye; Malone, Fionn D.; Martin, Richard M.; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A.; Mitas, Lubos; Morales, Miguel A.; Neuscamman, Eric; Parker, William D.; Pineda Flores, Sergio D.; Romero, Nichols A.; Rubenstein, Brenda M.; Shea, Jacqueline A. R.; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F.; Townsend, Joshua P.; Tubman, Norm M.; Van Der Goetz, Brett; Vincent, Jordan E.; ChangMo Yang, D.; Yang, Yubo; Zhang, Shuai; Zhao, Luning
2018-05-01
QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater–Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.
Analysis of Naval Ammunition Stock Positioning
2015-12-01
…model takes once the Monte Carlo simulation determines the assigned probabilities for site-to-site locations. Column two shows how the simulation… …stockpiles and positioning them at coastal Navy facilities. A Monte Carlo simulation model was developed to simulate expected cost and delivery… Subject terms: supply chain management, Monte Carlo simulation, risk, delivery performance, stock positioning.
ERIC Educational Resources Information Center
Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael
2017-01-01
The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…
Thomas B. Lynch; Rodney E. Will; Rider Reynolds
2013-01-01
Preliminary results are given for development of an eastern redcedar (Juniperus virginiana) cubic-volume equation based on measurements of redcedar sample tree stem volume using dendrometry with Monte Carlo integration. Monte Carlo integration techniques can be used to provide unbiased estimates of stem cubic-foot volume based on upper stem diameter...
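The idea behind Monte Carlo integration for stem volume is that volume is the integral of cross-sectional area over height, so area measurements at randomly chosen heights yield an unbiased volume estimate. The sketch below uses a hypothetical conical taper function, not the redcedar taper model of the study, so the exact answer is known for comparison.

```python
import math
import random

random.seed(42)

# Hypothetical taper: stem diameter (ft) as a function of height (ft).
H = 40.0                                  # total tree height
def diameter(h):
    return 1.0 * (1.0 - h / H)            # simple cone, 1 ft basal diameter

def cross_section(h):
    r = diameter(h) / 2.0
    return math.pi * r * r                # cross-sectional area at height h

# Monte Carlo integration: volume = H * E[A(U)], with U uniform on (0, H).
# Each upper-stem measurement at a random height is an unbiased sample.
n = 100_000
estimate = H * sum(cross_section(random.uniform(0.0, H))
                   for _ in range(n)) / n

exact = math.pi * 0.5 ** 2 * H / 3.0      # closed-form cone volume
```

In practice each "function evaluation" is a dendrometer measurement of upper-stem diameter rather than an analytic taper, which is exactly why the unbiasedness of the estimator matters.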
[Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].
Furuta, Takuya
2017-01-01
Gel dosimeters are three-dimensional imaging tools for the dose distributions induced by radiation. They can be used to check the accuracy of Monte Carlo simulations in particle therapy; one such application is reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated with a carbon beam. The dose distribution recorded in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image, and the accuracy of the particle transport in the simulation was checked by comparing the simulated and experimental dose distributions in the gel dosimeter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Deterministic theory of Monte Carlo variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueki, T.; Larsen, E.W.
1996-12-31
The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias
2015-01-01
A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as that of the escape rate in quantum Monte Carlo for a class of problems; the Google result might thus be explained within our framework. We also find that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.
Recommender engine for continuous-time quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard that initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry.
Bostani, Maryam; Mueller, Jonathon W; McMillan, Kyle; Cody, Dianna D; Cagnon, Chris H; DeMarco, John J; McNitt-Gray, Michael F
2015-02-01
The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. The calculated mean percent difference between TLD measurements and Monte Carlo simulations was -4.9% with standard deviation of 8.7% and a range of -22.7% to 5.7%. The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
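The baseline rank-1 Sherman-Morrison update mentioned in this abstract can be sketched as follows: replacing one row of a matrix is a rank-1 perturbation, so the inverse can be corrected in O(n^2) instead of re-inverted in O(n^3). The matrix, row index, and perturbation below are arbitrary test values, and the rank-k blocked variant the abstract proposes is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
A_inv = np.linalg.inv(A)

def replace_row(A, A_inv, i, new_row):
    """Replace row i of A and patch A^{-1} with the Sherman-Morrison formula:
    (A + e_i v^T)^{-1} = A^{-1} - (A^{-1} e_i)(v^T A^{-1}) / (1 + v^T A^{-1} e_i),
    an O(n^2) update instead of an O(n^3) re-inversion."""
    v = new_row - A[i]          # A_new = A + e_i v^T
    Ainv_ei = A_inv[:, i]       # A^{-1} e_i is column i of the inverse
    vAinv = v @ A_inv           # the row vector v^T A^{-1}
    denom = 1.0 + vAinv[i]      # 1 + v^T A^{-1} e_i (the determinant ratio)
    return A_inv - np.outer(Ainv_ei, vAinv) / denom

# One accepted "Monte Carlo move": a small change to one orbital row.
new_row = A[3] + 0.1 * rng.standard_normal(n)
A_inv_new = replace_row(A, A_inv, 3, new_row)

A_new = A.copy()
A_new[3] = new_row              # explicit re-inversion, for checking only
```

The rank-k scheme in the abstract batches k such accepted rows and applies them en bloc, trading many rank-1 vector operations for one matrix-matrix product with better arithmetic intensity.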
Physics Computing '92: Proceedings of the 4th International Conference
NASA Astrophysics Data System (ADS)
de Groot, Robert A.; Nadrchal, Jaroslav
1993-04-01
The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis * 
Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations * Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on 
Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the ∫4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic 
Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs. 
FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, the Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the XiO planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
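The gamma analysis used for these comparisons combines a dose-difference criterion with a distance-to-agreement criterion: a reference point passes if some evaluated point lies within the combined tolerance ellipse. A minimal 1-D sketch with the 3%/3 mm criteria quoted above and synthetic Gaussian dose profiles (the profiles and grid are invented for illustration):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dose_tol=0.03, dist_tol=3.0):
    """1-D gamma analysis: for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over all evaluated points;
    the point passes when the minimum gamma is <= 1."""
    passed = 0
    d_max = dose_ref.max()                       # global dose normalization
    for i, d_ref in enumerate(dose_ref):
        dd = (dose_eval - d_ref) / (dose_tol * d_max)
        dist = (positions - positions[i]) / dist_tol
        gamma = np.sqrt(dd ** 2 + dist ** 2).min()
        if gamma <= 1.0:
            passed += 1
    return 100.0 * passed / len(dose_ref)

positions = np.linspace(0.0, 50.0, 101)                   # mm, 0.5 mm grid
dose_ref = np.exp(-((positions - 25.0) / 10.0) ** 2)      # "measured" profile
dose_eval = np.exp(-((positions - 25.5) / 10.0) ** 2)     # plan, shifted 0.5 mm
rate = gamma_pass_rate(dose_ref, dose_eval, positions)
```

A 0.5 mm shift is well inside the 3 mm distance-to-agreement tolerance, so this example passes essentially everywhere; clinical implementations work on 2-D film planes or 3-D dose grids but use the same metric.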
Diagnosing Undersampling Biases in Monte Carlo Eigenvalue and Flux Tally Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M.; Rearden, Bradley T.; Marshall, William J.
2017-02-08
Here, this study focuses on understanding the phenomenon in Monte Carlo simulations known as undersampling, in which Monte Carlo tally estimates may not encounter a sufficient number of particles during each generation to obtain unbiased tally estimates. Steady-state Monte Carlo simulations were performed using the KENO Monte Carlo tools within the SCALE code system for models of several burnup credit applications with varying degrees of spatial and isotopic complexity, and the incidence and impact of undersampling on eigenvalue and flux estimates were examined. Using an inadequate number of particle histories in each generation was found to produce a maximum bias of ~100 pcm in eigenvalue estimates and biases that exceeded 10% in fuel pin flux tally estimates. Having quantified the potential magnitude of undersampling biases in eigenvalue and flux tally estimates in these systems, this study then investigated whether Markov Chain Monte Carlo convergence metrics could be integrated into Monte Carlo simulations to predict the onset and magnitude of undersampling biases. Five potential metrics for identifying undersampling biases were implemented in the SCALE code system and evaluated for their ability to predict undersampling biases by comparing the test metric scores with the observed undersampling biases. Finally, of the five convergence metrics that were investigated, three (the Heidelberger-Welch relative half-width, the Gelman-Rubin $\hat{R}_c$ diagnostic, and tally entropy) showed the potential to accurately predict the behavior of undersampling biases in the responses examined.
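Of the metrics listed, the Gelman-Rubin diagnostic has a compact standard form; the sketch below is the generic textbook computation over several independent chains, not the SCALE implementation (the function and variable names are assumptions):

```python
from statistics import mean, variance

def gelman_rubin(chains):
    """Gelman-Rubin R-hat for m independent chains of equal length n:
    compares between-chain and within-chain variance; values near 1
    indicate convergence."""
    n = len(chains[0])
    chain_means = [mean(c) for c in chains]
    b = n * variance(chain_means)             # between-chain variance
    w = mean(variance(c) for c in chains)     # within-chain variance
    var_plus = (n - 1) / n * w + b / n        # pooled variance estimate
    return (var_plus / w) ** 0.5
```

Well-mixed chains give R-hat close to 1, while chains stuck in different regions give R-hat well above 1, which is the signature used to flag biased, undersampled tallies.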
Imaging skin pathologies with polarized light: Empirical and theoretical studies
NASA Astrophysics Data System (ADS)
Ramella-Roman, Jessica C.
The use of polarized light imaging can facilitate the determination of skin cancer borders before a Mohs surgery procedure. Linearly polarized light that illuminates the skin is backscattered by superficial layers, where cancer often arises, and is randomized by the collagen fibers. The superficially backscattered light can be distinguished from the diffusely reflected light using a detector analyzer that is sequentially oriented parallel and perpendicular to the source polarization. A polarized image, pol = (I_parallel - I_perpendicular) / (I_parallel + I_perpendicular), is generated. This image has a higher contrast to the superficial skin layers than simple total reflectance images. Pilot clinical trials were conducted with a small hand-held device to accumulate a library of lesions and establish the efficacy of polarized light imaging in vivo. It was found that melanoma, as well as basal cell and sclerosing carcinomas, exhibits high contrast in polarized light imaging. Mechanisms of polarized light scattering from different tissues and tissue phantoms were studied in vitro. Parameters such as depth of depolarization (DOD), retardance, and birefringence were studied theoretically and experimentally. Polarized light traveling through different tissues (skin, muscle, and liver) depolarized after a few hundred microns. Highly birefringent materials such as skin (DOD = 300 μm at 696 nm) and muscle (DOD = 370 μm at 696 nm) depolarized light faster than less birefringent materials such as liver (DOD = 700 μm at 696 nm). Light depolarization can also be attributed to scattering. Three Monte Carlo programs for modeling polarized light transfer into scattering media were implemented to evaluate these mechanisms. Simulations conducted with the Monte Carlo programs showed that small-diameter spheres have different mechanisms of depolarization than larger ones. The models also showed that the anisotropy parameter g strongly influences the depolarization mechanism. (Abstract shortened by UMI.)
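The polarized image described above is a pixel-wise ratio of the two detector orientations; a minimal sketch (the function name and list-of-lists image representation are assumptions):

```python
def polarized_image(par, perp, eps=1e-12):
    """Pixel-wise polarization image from co- and cross-polarized frames:
    pol = (I_parallel - I_perpendicular) / (I_parallel + I_perpendicular).
    eps guards against division by zero in dark pixels."""
    return [[(p - q) / (p + q + eps) for p, q in zip(rp, rq)]
            for rp, rq in zip(par, perp)]
```

Superficially backscattered light retains the source polarization (pol near 1), while multiply scattered light from deeper layers is depolarized (pol near 0), which is what isolates the superficial layers.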
Coherent Backscattering by Particulate Planetary Media of Nonspherical Particles
NASA Astrophysics Data System (ADS)
Muinonen, Karri; Penttila, Antti; Wilkman, Olli; Videen, Gorden
2014-11-01
The so-called radiative-transfer coherent-backscattering method (RT-CB) has been put forward as a practical Monte Carlo method to compute multiple scattering in discrete random media mimicking planetary regoliths (K. Muinonen, Waves in Random Media 14, p. 365, 2004). In RT-CB, the interaction between the discrete scatterers takes place in the far-field approximation and the wave propagation faces exponential extinction. There is a significant constraint in the RT-CB method: it has to be assumed that the form of the scattering matrix is that of a spherical particle. We aim to extend the RT-CB method to nonspherical single particles showing significant depolarization characteristics. First, ensemble-averaged single-scattering albedos and phase matrices of nonspherical particles are matched using a phenomenological radiative-transfer model within a microscopic volume element. Second, the phenomenological single-particle model is incorporated into the Monte Carlo RT-CB method. In the ray tracing, the electromagnetic phases within the microscopic volume elements are omitted as having negligible lengths, whereas the phases are duly accounted for in the paths between two or more microscopic volume elements. We assess the computational feasibility of the extended RT-CB method and show preliminary results for particulate media mimicking planetary regoliths. The present work can be utilized in the interpretation of astronomical observations of asteroids and other planetary objects. In particular, the work sheds light on the depolarization characteristics of planetary regoliths at small phase angles near opposition. The research has been partially funded by the ERC Advanced Grant No 320773 entitled “Scattering and Absorption of Electromagnetic Waves in Particulate Media” (SAEMPL), by the Academy of Finland (contract 257966), NASA Outer Planets Research Program (contract NNX10AP93G), and NASA Lunar Advanced Science and Exploration Research Program (contract NNX11AB25G).
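The exponential extinction that wave propagation undergoes between volume elements corresponds, in a Monte Carlo ray tracing, to sampling free-path lengths from an exponential distribution; a minimal sketch with illustrative names, not taken from the RT-CB code:

```python
import math
import random

def free_path(extinction_coeff, rng=random):
    """Sample a free-path length l from p(l) = k * exp(-k * l) by
    inverting the CDF: l = -ln(1 - u) / k with u ~ U[0, 1)."""
    return -math.log(1.0 - rng.random()) / extinction_coeff
```

The mean sampled path equals 1/k, the mean free path of the medium.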
al3c: high-performance software for parameter inference using Approximate Bayesian Computation.
Stram, Alexander H; Marjoram, Paul; Chen, Gary K
2015-11-01
The development of Approximate Bayesian Computation (ABC) algorithms for parameter inference which are both computationally efficient and scalable in parallel computing environments is an important area of research. Monte Carlo rejection sampling, a fundamental component of ABC algorithms, is trivial to distribute over multiple processors but is inherently inefficient. While development of algorithms such as ABC Sequential Monte Carlo (ABC-SMC) help address the inherent inefficiencies of rejection sampling, such approaches are not as easily scaled on multiple processors. As a result, current Bayesian inference software offerings that use ABC-SMC lack the ability to scale in parallel computing environments. We present al3c, a C++ framework for implementing ABC-SMC in parallel. By requiring only that users define essential functions such as the simulation model and prior distribution function, al3c abstracts the user from both the complexities of parallel programming and the details of the ABC-SMC algorithm. By using the al3c framework, the user is able to scale the ABC-SMC algorithm in parallel computing environments for his or her specific application, with minimal programming overhead. al3c is offered as a static binary for Linux and OS-X computing environments. The user completes an XML configuration file and C++ plug-in template for the specific application, which are used by al3c to obtain the desired results. Users can download the static binaries, source code, reference documentation and examples (including those in this article) by visiting https://github.com/ahstram/al3c. astram@usc.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
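The Monte Carlo rejection sampling that ABC algorithms build on can be sketched generically; this is not al3c's interface (al3c users supply C++ plug-ins and an XML configuration), just an illustration of the underlying principle with hypothetical function names:

```python
def abc_rejection(observed, simulate, prior_sample, distance, eps, n_accept):
    """Basic ABC rejection sampler: draw parameters from the prior and
    keep those whose simulated summary statistic lies within eps of the
    observed one. Each trial is independent, hence trivially parallel,
    but acceptance rates fall quickly as eps shrinks."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample()
        if distance(simulate(theta), observed) <= eps:
            accepted.append(theta)
    return accepted
```

For example, inferring the mean of a normal distribution from its sample mean recovers a posterior concentrated near the true value; ABC-SMC improves on plain rejection by tightening eps over successive weighted populations.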
Albert, Steven M; Raviotta, Jonathan; Lin, Chyongchiou J; Edelstein, Offer; Smith, Kenneth J
2016-10-01
Pennsylvania's Department of Aging has offered a falls prevention program, "Healthy Steps for Older Adults" (HSOA), since 2005, with about 40,000 older adults screened for falls risk. In 2010 to 2011, older adults 50 years or older who completed HSOA (n = 814) had an 18% reduction in falls incidence compared with a comparison group that attended the same senior centers (n = 1019). We examined the effect of HSOA on hospitalization and emergency department (ED) treatment, and estimated the potential cost savings. Decision-tree analysis. The following were included in a decision-tree model based on a prior longitudinal cohort study: costs of the intervention, number of falls, frequency and costs of ED visits and hospitalizations, and self-reported quality of life of individuals in each outcome condition. A Monte Carlo probabilistic sensitivity analysis assigned appropriate distributions to all input parameters and evaluated model results over 500 iterations. The model included all ED and hospitalization episodes rather than just episodes linked to falls. Over 12 months of follow-up, 11.3% of the HSOA arm and 14.8% of the comparison group experienced 1 or more hospitalizations (P = .04). HSOA participants had less hospital care when matched for falls status. Observed values suggest expected costs per participant of $3013 in the HSOA arm and $3853 in the comparison condition, an average savings of $840 per person. Results were confirmed in Monte Carlo simulations ($3164 vs $3882, savings of $718). The savings of $718 to $840 per person is comparable to reports from other falls prevention economic evaluations. The advantages of HSOA include its statewide reach and integration with county aging services.
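The Monte Carlo probabilistic sensitivity analysis can be sketched as repeated sampling of the cost inputs; the normal distributions and the spread used below are illustrative assumptions (the study assigned its own distributions to all input parameters), with only the reported mean costs taken from the text:

```python
import random

def psa_mean_savings(mean_ctrl, mean_intervention, sd, n_iter=500, seed=0):
    """Probabilistic sensitivity analysis sketch: sample per-arm expected
    costs from assumed normal distributions over n_iter iterations and
    return the mean savings (control minus intervention)."""
    rng = random.Random(seed)
    draws = [rng.gauss(mean_ctrl, sd) - rng.gauss(mean_intervention, sd)
             for _ in range(n_iter)]
    return sum(draws) / n_iter
```

With the reported means ($3853 comparison vs. $3013 HSOA), the simulated savings cluster near the observed $840 per person.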
High Altitude Radiations Relevant to the High Speed Civil Transport (HSCT)
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Goldhagan, P.; Maiden, D. L.; Tai, H.
2004-01-01
The Langley Research Center (LaRC) performed atmospheric radiation studies under the SST development program in which important ionizing radiation components were measured and extended by calculations to develop the existing atmospheric ionizing radiation (AIR) model. In that program the measured neutron spectrum was limited to less than 10 MeV by the available 1960-1970 instrumentation. Extension of the neutron spectrum to high energies was made using the LaRC PROPER-3C Monte Carlo code. It was found that the atmospheric neutrons contributed about half of the dose equivalent, and approximately half of the neutron contribution was from high-energy neutrons above 10 MeV. Furthermore, Monte Carlo calculations of solar particle events showed that potential exposures as large as 10-100 mSv/hr may occur on important high-latitude routes, but acceptable levels of exposure could be obtained if timely descent to subsonic altitudes could be made. The principal concern was for pregnant occupants onboard the aircraft. As a result of these studies the FAA Advisory Committee on the Radiobiological Aspects of the SST recommended: (1) crew members be informed of their exposure levels; (2) maximum exposure on any flight be limited to 5 mSv; (3) airborne radiation detection devices measure total exposure and exposure rates; (4) a satellite monitoring system provide SST aircraft with real-time information on atmospheric radiation levels for exposure mitigation; and (5) a solar forecasting system warn flight operations of an impending solar event for flight scheduling and alert status. These recommendations are a reasonable starting point for requirements for the HSCT, with some modification reflecting new standards of protection as a result of changing risk coefficients.
Minimum length Pb/SCIN detector for efficient cosmic ray identification
NASA Technical Reports Server (NTRS)
Snyder, H. David
1989-01-01
A study was made of the performance of a minimal-length cosmic ray shower detector that would be light enough for space flight and would provide efficient identification of positrons and protons. Cosmic ray positrons are mainly produced in the decay chain pion → muon → positron, and they provide a measure of the matter density traversed by primary protons. Present positron flux measurements are consistent with the Leaky Box and Halo models for sources of cosmic rays. Abundant protons in the space environment are a significant source of background that would wash out the positron signal. Protons and positrons produce very distinctive showers of particles when they enter matter; many studies have been published on their behavior in large calorimeter detectors. The challenge is to determine the minimal material necessary (minimal calorimeter depth) for positive particle identification. The primary instrument for the investigation is the Monte Carlo code GEANT, a library of programs from CERN that can be used to model experimental geometry, detector responses, and particle interaction processes. The use of the Monte Carlo approach is crucial since statistical fluctuations in shower shape are significant. Studies conducted during the 1988 summer program showed that straightforward approaches to the problem achieved 85 to 90 percent correct identification, but left a residue of 10 to 15 percent misidentified particles. This percentage improved to a few percent when multiple shower-cut criteria were applied to the data. This summer, the same study was extended to employ several physical and statistical methods of identifying the response of the calorimeter, and the efficiency of the optimal shower cuts for off-normal-incidence particles was determined.
Considerations of MCNP Monte Carlo code to be used as a radiotherapy treatment planning tool.
Juste, B; Miro, R; Gallardo, S; Verdu, G; Santos, A
2005-01-01
The present work has simulated the photon and electron transport in a Theratron 780® (MDS Nordion) 60Co radiotherapy unit using the Monte Carlo transport code MCNP (Monte Carlo N-Particle). This project mainly explains the different methodologies carried out to speed up the calculations in order to apply this code efficiently in radiotherapy treatment planning.
Translations on Narcotics and Dangerous Drugs No. 270.
1976-11-04
persons arrested were: Hugo Norberto Ammirante,30; Urbano Vicente Toledo, 31; Carlos Roberto Cejas, 28; Leandro Alberto Herrera, 27; and... Carlos Alberto Ignatier, 33. The ring brought cocaine from Bolivia and processed it in Salt a, selling it mainly to someone known as El Chuco who...Garcia. Some escaped to Brownsville, among them Inspector Carlos Morales who walked across the bridge and Supervisor Joaquin Cardenas who abandoned
Latin America Report, No. 2762
1983-11-10
1983 LATIN AMERICA REPORT No, 2762 CONTENTS ENERGY ECONOMICS COLOMBIA Government Will Seek To Regain Petroleum Self-Sufficiency ( Carlos Pineros... Carlos Pineros; EL TIEMPO, 27 Sep 83) 26 Briefs Mass Grave Discovered 29 Liberal Party Central Committee 29 New Naval Base 29 Prague, Moscow...ENERGY ECONOMICS COLOMBIA GOVERNMENT WILL SEEK TO REGAIN PETROLEUM SELF-SUFFICIENCY Bogota EL TIEMPO in Spanish 29 Sep 83 p 8-A [Article by Carlos
The Gift of the Poinsettia = El regalo de la flor de Nochebuena.
ERIC Educational Resources Information Center
Mora, Pat; Berg, Charles Ramirez
This bilingual (English and Spanish) illustrated children's book relates the story of Carlos, a young boy who lives with his aunt Nina and dog Chico in the small Mexican town of San Bernardo. Nina and Carlos are poor, but their house is full of love. Carlos is excited because he is getting ready to attend the first night of "las…
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations.
Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
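The weight-window element common to these methods can be sketched as splitting and Russian roulette on a particle's statistical weight; this is a generic illustration with assumed names, not the thesis code:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Weight window: split particles whose weight is above the window,
    play Russian roulette with those below it, and pass through those
    inside. Returns the list of surviving particle weights; the expected
    total weight is conserved, so tally means are unbiased."""
    if weight > w_high:
        n = int(weight / w_high) + 1     # split into n lighter copies
        return [weight / n] * n
    if weight < w_low:
        w_survive = (w_low + w_high) / 2
        if rng.random() < weight / w_survive:
            return [w_survive]           # survived roulette, weight boosted
        return []                        # killed
    return [weight]
```

Choosing the window bounds per region (e.g. from a deterministic adjoint solution) is what steers particles toward the user-specified distribution.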
Crystallographic data processing for free-electron laser sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Thomas A., E-mail: taw@physics.org; Barty, Anton; Stellato, Francesco
2013-07-01
A processing pipeline for diffraction data acquired using the ‘serial crystallography’ methodology with a free-electron laser source is described with reference to the crystallographic analysis suite CrystFEL and the pre-processing program Cheetah. A detailed analysis of the nature and impact of indexing ambiguities is presented. Simulations of the Monte Carlo integration scheme, which accounts for the partially recorded nature of the diffraction intensities, are presented and show that the integration of partial reflections could be made to converge more quickly if the bandwidth of the X-rays were to be increased by a small amount or if a slight convergence angle were introduced into the incident beam.
Monte Carlo simulation of a photodisintegration of 3 H experiment in Geant4
NASA Astrophysics Data System (ADS)
Gray, Isaiah
2013-10-01
An upcoming experiment involving photodisintegration of 3 H at the High Intensity Gamma-Ray Source facility at Duke University has been simulated in the software package Geant4. CAD models of silicon detectors and wire chambers were imported from Autodesk Inventor using the program FastRad and the Geant4 GDML importer. Sensitive detectors were associated with the appropriate logical volumes in the exported GDML file so that changes in detector geometry will be easily manifested in the simulation. Probability distribution functions for the energy and direction of outgoing protons were generated using numerical tables from previous theory, and energies and directions were sampled from these distributions using a rejection sampling algorithm. The simulation will be a useful tool to optimize detector geometry, estimate background rates, and test data analysis algorithms. This work was supported by the Triangle Universities Nuclear Laboratory REU program at Duke University.
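The rejection sampling used to draw proton energies and directions from the tabulated distributions works as follows; a minimal one-dimensional sketch with assumed names (the actual simulation samples from numerical tables inside Geant4):

```python
import random

def rejection_sample(pdf, x_min, x_max, pdf_max, rng=random):
    """Draw one sample from an arbitrary bounded pdf on [x_min, x_max]:
    propose x uniformly, then accept it with probability pdf(x) / pdf_max.
    pdf_max must be an upper bound on the pdf over the interval."""
    while True:
        x = rng.uniform(x_min, x_max)
        if rng.uniform(0.0, pdf_max) <= pdf(x):
            return x
```

For instance, sampling an angular distribution proportional to sin(θ) on [0, π] yields samples with mean π/2; a tabulated pdf can be supplied as an interpolating function in the same way.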
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnstad, H.
The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.
Remote sensing of chlorophyll in an atmosphere-ocean environment: a theoretical study.
Kattawar, G W; Humphreys, T J
1976-01-01
A Monte Carlo program was written to compute the effect of chlorophyll on the ratio of upwelling to downwelling radiance and irradiance as a function of wavelength, height above the ocean, and depth within the ocean. This program simulates the actual physical situation, since a real atmospheric model was used, i.e., one that contained both aerosol and Rayleigh scattering as well as ozone absorption. The complete interaction of the radiation field with the ocean was also taken into account. The chlorophyll was assumed to be uniformly mixed in the ocean and was also assumed to act only as an absorbing agent. For the ocean model both scattering and absorption by hydrosols were included. Results have been obtained for both a very clear ocean and a medium turbid ocean. Recommendations are made for optimum techniques for remotely sensing chlorophyll both in situ and in vitro.
Ören, Ünal; Hiller, Mauritius; Andersson, M
2017-04-28
A Monte Carlo-based stand-alone program, IDACstar (Internal Dose Assessment by Computer), was developed, dedicated to perform radiation dose calculations using complex voxel simulations. To test the program, two irradiation situations were simulated, one hypothetical contamination case with 600 MBq of 99mTc and one extravasation case involving 370 MBq of 18F-FDG. The effective dose was estimated to be 0.042 mSv for the contamination case and 4.5 mSv for the extravasation case. IDACstar has demonstrated that dosimetry results from contamination or extravasation cases can be acquired with great ease. An effective tool for radiation protection applications is provided with IDACstar allowing physicists at nuclear medicine departments to easily quantify the radiation risk of stochastic effects when a radiation accident has occurred. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Elbashir, B. O.; Dong, M. G.; Sayyed, M. I.; Issa, Shams A. M.; Matori, K. A.; Zaid, M. H. M.
2018-06-01
The mass attenuation coefficients (μ/ρ), effective atomic numbers (Zeff) and electron densities (Ne) of some amino acids, measured experimentally by other researchers, have been calculated using MCNP5 simulations in the energy range 0.122-1.330 MeV. The simulated values of μ/ρ, Zeff, and Ne were compared with the previous experimental work for the amino acid samples and good agreement was found. Moreover, the values of the mean free path (MFP) for the samples were calculated using the MCNP5 program and compared with the theoretical results obtained by XCOM. The comparison of the MCNP5 values of μ/ρ, Zeff, Ne and MFP at various photon energies with the XCOM values and the previous experimental data revealed that the MCNP5 code provides accurate photon interaction parameters for amino acids.
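The relationship between the tabulated quantities is simple: the linear attenuation coefficient is μ = (μ/ρ)·ρ, and the mean free path is its reciprocal. A minimal sketch with assumed names and purely illustrative numbers:

```python
def mean_free_path(mass_attenuation, density):
    """Mean free path in cm from the mass attenuation coefficient
    mu/rho (cm^2/g) and the material density rho (g/cm^3):
    MFP = 1 / ((mu/rho) * rho)."""
    return 1.0 / (mass_attenuation * density)
```

Because μ/ρ decreases with photon energy in this range, the MFP of a given sample grows from 0.122 to 1.330 MeV.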
Benchmarks for target tracking
NASA Astrophysics Data System (ADS)
Dunham, Darin T.; West, Philip D.
2011-09-01
The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.
Updated System-Availability and Resource-Allocation Program
NASA Technical Reports Server (NTRS)
Viterna, Larry
2004-01-01
A second version of the Availability, Cost and Resource Allocation (ACARA) computer program has become available. The first version was reported in an earlier tech brief. To recapitulate: ACARA analyzes the availability of a complex system of equipment, the mean time between failures of its components, life-cycle costs, and the scheduling of resources. ACARA uses a statistical Monte Carlo method to simulate the failure and repair of components while complying with user-specified constraints on spare parts and resources. ACARA evaluates the performance of the system on the basis of a mathematical model developed from a block-diagram representation. The previous version ran under the MS-DOS operating system and could not be run under the most recent versions of the Windows operating system. The current version incorporates the algorithms of the previous version but is compatible with Windows and utilizes menus and a file-management approach typical of Windows-based software.
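The Monte Carlo failure-and-repair simulation at the core of such a tool can be sketched for a single component with exponential up and down times; this is a generic illustration under assumed distributions and names, not ACARA's actual model:

```python
import random

def simulate_availability(mtbf, mttr, horizon, n_runs=400, seed=0):
    """Monte Carlo availability sketch: alternate exponentially
    distributed uptimes (mean mtbf) and repair times (mean mttr) for a
    single component, then average the uptime fraction over n_runs runs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t = up = 0.0
        while t < horizon:
            dt_up = rng.expovariate(1.0 / mtbf)
            up += min(dt_up, horizon - t)       # clip uptime at horizon
            t += dt_up
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)    # component under repair
        total += up / horizon
    return total / n_runs
```

For long horizons the result approaches the steady-state availability mtbf / (mtbf + mttr); constraints on spares and repair resources, as in ACARA, lengthen the effective downtime.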