Science.gov

Sample records for 3-d monte-carlo analysis

  1. NOTE: A software tool for 2D/3D visualization and analysis of phase-space data generated by Monte Carlo modelling of medical linear accelerators

    NASA Astrophysics Data System (ADS)

    Neicu, Toni; Aljarrah, Khaled M.; Jiang, Steve B.

    2005-10-01

    A computer program has been developed for novel 2D/3D visualization and analysis of the phase-space parameters of Monte Carlo simulations of medical accelerator radiation beams. The software is written in the IDL language and reads the phase-space data generated in the BEAMnrc/BEAM Monte Carlo code format. Contour and colour-wash plots of the fluence, mean energy, energy fluence, mean angle, spectral distribution, energy fluence distribution, angular distribution, and slices and projections of the 3D ZLAST distribution can be calculated and displayed. Based on our experience of using it at Massachusetts General Hospital, the software has proven to be a useful tool for analysis and verification of the Monte Carlo generated phase-space files. The software is in the public domain.

  2. Recovering 3D images of polymeric nanofibers in solution through theoretical analysis and Monte-Carlo simulations of their 2D TEM images.

    PubMed

    Miao, Han; Li, Jianfeng; Chen, Daoyong

    2016-05-18

    Nanofibers are well-known nanomaterials that are promising for many important applications. Since sample preparation for these applications usually starts from a nanofiber solution, characterization of the original conformation of the nanofibers in solution is significant because the conformation remarkably affects the behavior of the nanofibers in the samples. However, such characterization is very difficult with existing methods: light scattering can only roughly evaluate the conformation in solution, while cryo-TEM is laborious, time-consuming, and technically challenging, and thus ill-suited to studying a system statistically. Herein we report a novel and reliable method to recover the original 3D image of nanofibers in solution through theoretical analysis and Monte-Carlo simulations of TEM images of the nanofibers. First, six kinds of monodisperse nanofibers with the same composition and inner structure but different contour lengths were prepared by the method developed in our laboratory. Then, each kind of nanofiber deposited on the TEM sample substrate was measured by TEM and simulated in parallel by the Monte Carlo method. By matching the simulation results with the TEM results, we determined information about the nanofibers including their rigidity and the interaction between the nanofibers and the substrate. Furthermore, for each kind of nanofiber, based on this information, the 3D images of the nanofibers in solution could be reconstructed, and the average gyration radius and hydrodynamic radius could then be calculated; these were compared with the corresponding experimentally measured values to demonstrate the reliability of the method. PMID:27101798

  3. Monte Carlo Ion Transport Analysis Code.

    Energy Science and Technology Software Center (ESTSC)

    2009-04-15

    Version: 00 TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are composed of multilayered polyatomic materials.

  4. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

    2014-06-01

    Physical analyses of the potential performance of LWRs with regard to fuel utilization require that an important part of the work be dedicated to validating the deterministic models used for these analyses. Advances in both codes and computer technology make it possible to perform this validation on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic code CRONOS2 against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. To this end, a 3D pin-by-pin model with a substantial number of volumes (4.3 million) and media (around 23,000) was established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile (depleted uranium) zones. Furthermore, a tight-pitch lattice was selected (to increase the conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. In these conditions, two main subjects are discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.

  5. MCMAC: Monte Carlo Merger Analysis Code

    NASA Astrophysics Data System (ADS)

    Dawson, William A.

    2014-07-01

    Monte Carlo Merger Analysis Code (MCMAC) aids in the study of merging clusters. It takes observed priors on each subcluster's mass, radial velocity, and projected separation, draws randomly from those priors, and uses them in an analytic model to get posterior PDFs for merger dynamical properties of interest (e.g. collision velocity, time since collision).
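
    A minimal Python sketch of the sampling pattern this record describes: draw from priors on the subcluster parameters, push each draw through an analytic model, and treat the resulting set of draws as the posterior PDF of the derived quantity. The priors and the free-fall collision model below are toy assumptions for illustration, not MCMAC's actual model.

      import numpy as np

      rng = np.random.default_rng(42)
      G = 4.302e-9  # gravitational constant in km^2 s^-2 Mpc / Msun

      n = 100_000
      m1 = rng.normal(2.0e15, 0.3e15, n)   # subcluster masses [Msun], toy priors
      m2 = rng.normal(1.5e15, 0.2e15, n)
      d = np.abs(rng.normal(1.0, 0.1, n))  # projected separation [Mpc], toy prior

      # Toy analytic model: free-fall collision velocity from rest at separation d.
      v_coll = np.sqrt(2.0 * G * (m1 + m2) / d)  # [km/s]

      # The draws form the Monte Carlo posterior PDF; summarize by percentiles.
      lo, med, hi = np.percentile(v_coll, [16, 50, 84])
      print(f"collision velocity: {med:.0f} (+{hi - med:.0f} / -{med - lo:.0f}) km/s")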

  6. Monte Carlo methods in genetic analysis

    SciTech Connect

    Lin, Shili

    1996-12-31

    Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.
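
    The workhorse of the Markov chain Monte Carlo methods reviewed here is the Metropolis-Hastings kernel, sketched below. The standard-normal log-target is a stand-in for the expensive pedigree likelihood such methods are designed to approximate.

      import math
      import random

      def metropolis(log_target, x0, steps=10_000, scale=1.0):
          """Sample from exp(log_target) with a symmetric random-walk proposal."""
          x, lp = x0, log_target(x0)
          chain = []
          for _ in range(steps):
              y = x + random.gauss(0.0, scale)            # propose a move
              lq = log_target(y)
              if math.log(1.0 - random.random()) < lq - lp:
                  x, lp = y, lq                           # accept
              chain.append(x)                             # else keep current state
          return chain

      samples = metropolis(lambda t: -0.5 * t * t, x0=0.0)
      print(sum(samples) / len(samples))  # ~0 for the standard normal target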

  7. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented in which both daylight-activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further strengthens the case for daylight as a potential light source for PDT, with effective treatment depths of about 2 mm achievable.
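
    The photon random walk at the core of such a multilayered Monte Carlo radiation transfer model can be sketched compactly. The two-layer slab, layer thicknesses and optical coefficients below are invented placeholders (not the paper's parameters), scattering is simplified to isotropic, and boundary crossings within a step are ignored; a real skin model would use a Henyey-Greenstein phase function and proper boundary handling.

      import math
      import random

      # (thickness mm, absorption 1/mm, scattering 1/mm) per layer, top down
      LAYERS = [(0.1, 0.3, 20.0),   # "epidermis" (assumed values)
                (2.0, 0.05, 10.0)]  # "dermis"    (assumed values)

      def depth_of_absorption():
          z, mu_z = 0.0, 1.0               # photon enters the surface, heading down
          while True:
              top = 0.0
              for t, mua, mus in LAYERS:   # locate the layer containing depth z
                  if z <= top + t:
                      break
                  top += t
              else:
                  return None              # transmitted through the bottom layer
              mut = mua + mus
              z += mu_z * (-math.log(1.0 - random.random()) / mut)  # free path
              if z < 0.0:
                  return None              # escaped back out of the surface
              if random.random() < mua / mut:
                  return z                 # absorbed at this depth
              mu_z = random.uniform(-1.0, 1.0)  # isotropic rescatter (simplified)

      hits = [d for d in (depth_of_absorption() for _ in range(5000)) if d is not None]
      print(f"mean absorption depth: {sum(hits) / len(hits):.3f} mm")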

  8. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    SciTech Connect

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  9. Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately than existing models the effects of various physiological properties of the skin in the case of subcutaneous vein imaging. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces, and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling, and compare our results with those obtained with a well-established Monte Carlo model and with real skin reflectance images.

  10. Monte Carlo generators for studies of the 3D structure of the nucleon

    SciTech Connect

    Avakian, Harut; D'Alesio, U.; Murgia, F.

    2015-01-23

    In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations, will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.

  11. CT based 3D Monte Carlo radiation therapy treatment planning.

    PubMed

    Wallace, S; Allen, B J

    1998-06-01

    This paper outlines the "voxel reconstruction" technique used to model the macroscopic human anatomy of the cranial, abdominal and cervical regions directly from CT scans. Tissue composition, density, and radiation transport characteristics were assigned to each individual volume element (voxel) automatically depending on its greyscale number and physical location. Both external beam and brachytherapy treatment techniques were simulated using the Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle) version 3A. To obtain a high-resolution dose calculation without overly extending computational times, variable voxel sizes were introduced. In regions of interest where high attention to anatomical detail and dose calculation was required, the voxel dimensions were reduced to a few millimetres. In less important regions that only influence the region of interest via scattered radiation, the voxel dimensions were increased to the scale of centimetres. With the use of relatively old (1991) supercomputing hardware, dose calculations were performed in under 10 hours to a standard deviation of 5% in each voxel at a resolution of a few millimetres; current hardware should substantially improve these figures. It is envisaged that with coupled photon/electron transport incorporated into MCNP versions 4A and 4B, conventional photon and electron treatment planning will be undertaken using this technique, in addition to the neutron and associated photon dosimetry presented here. PMID:9745789

  12. A Monte Carlo method for 3D thermal infrared radiative transfer

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Liou, K. N.

    2006-09-01

    A 3D Monte Carlo model for specific application to the broadband thermal radiative transfer has been developed in which the emissivities for gases and cloud particles are parameterized by using a single cubic element as the building block in 3D space. For spectral integration in the thermal infrared, the correlated k-distribution method has been used for the sorting of gaseous absorption lines in multiple-scattering atmospheres involving 3D clouds. To check the Monte-Carlo simulation, we compare a variety of 1D broadband atmospheric fluxes and heating rates to those computed from the conventional plane-parallel (PP) model and demonstrate excellent agreement between the two. Comparisons of the Monte Carlo results for broadband thermal cooling rates in 3D clouds to those computed from the delta-diffusion approximation for 3D radiative transfer and the independent pixel-by-pixel approximation are subsequently carried out to understand the relative merits of these approaches.

  13. Geometrically-compatible 3-D Monte Carlo and discrete-ordinates methods

    SciTech Connect

    Morel, J.E.; Wareing, T.A.; McGhee, J.M.; Evans, T.M.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project was two-fold. The first purpose was to develop a deterministic discrete-ordinates neutral-particle transport scheme for unstructured tetrahedral spatial meshes, and implement it in a computer code. The second purpose was to modify the MCNP Monte Carlo radiation transport code to use adjoint solutions from the tetrahedral-mesh discrete-ordinates code to reduce the statistical variance of Monte Carlo solutions via a weight-window approach. The first task has resulted in a deterministic transport code that is much more efficient for modeling complex 3-D geometries than any previously existing deterministic code. The second task has resulted in a powerful new capability for dramatically reducing the cost of difficult 3-D Monte Carlo calculations.
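
    The weight-window device named in the second task admits a compact sketch: a particle whose statistical weight drifts above an importance-derived window is split, and one whose weight drifts below it plays Russian roulette, so the expected weight is preserved while the variance of difficult tallies drops. The region indices and window bounds below are invented for illustration; in the project described, the bounds come from the adjoint discrete-ordinates solution.

      import random

      # importance-derived weight windows per region: (lower, upper) bounds
      WINDOW = {0: (0.5, 2.0), 1: (0.05, 0.2)}

      def apply_weight_window(region, weight):
          """Split or roulette a particle so its expected weight is preserved."""
          low, high = WINDOW[region]
          if weight > high:                # split into n lighter copies
              n = int(weight / high) + 1
              return [weight / n] * n
          if weight < low:                 # Russian roulette
              return [low] if random.random() < weight / low else []
          return [weight]

      print(apply_weight_window(0, 5.0))   # -> three copies of weight 5/3
      print(apply_weight_window(1, 0.01))  # -> usually [], sometimes [0.05]

    Splitting conserves the total weight exactly, and roulette conserves it in expectation, which is what keeps the game fair while cutting variance.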

  4. 3D Direct Simulation Monte Carlo Code Which Solves for Geometries

    Energy Science and Technology Software Center (ESTSC)

    1998-01-13

    Pegasus is a 3D Direct Simulation Monte Carlo code which solves for geometries that can be represented by bodies of revolution. Included are all the surface chemistry enhancements of the 2D code Icarus, as well as a real vacuum pump model. The code includes multiple species transport.

  15. PEGASUS. 3D Direct Simulation Monte Carlo Code Which Solves for Geometries

    SciTech Connect

    Bartel, T.J.

    1998-12-01

    Pegasus is a 3D Direct Simulation Monte Carlo code which solves for geometries that can be represented by bodies of revolution. Included are all the surface chemistry enhancements of the 2D code Icarus, as well as a real vacuum pump model. The code includes multiple species transport.

  16. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…

  17. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  18. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    SciTech Connect

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open-source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective computer-aided design tool for radiation transport code users in the nuclear field, in particular for core design and radiation analysis. (authors)

  19. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
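
    The principle in this record can be shown with a one-line toy problem: bias the sampling distribution, carry the likelihood-ratio weight so the expected payoff is unchanged, and the variance drops. The exponential example below is an illustration under assumed parameters, not Perlmutter's actual random walk.

      import math
      import random

      def analog(n=100_000):
          # direct estimate of P(X > 4) for X ~ Exp(1): rare hits, high variance
          return [1.0 if random.expovariate(1.0) > 4.0 else 0.0 for _ in range(n)]

      def biased(n=100_000):
          # sample from the flatter Exp(0.25) and carry the likelihood ratio
          out = []
          for _ in range(n):
              x = random.expovariate(0.25)
              w = math.exp(-x) / (0.25 * math.exp(-0.25 * x))   # f(x) / g(x)
              out.append(w if x > 4.0 else 0.0)
          return out

      def mean_var(s):
          m = sum(s) / len(s)
          return m, sum((v - m) ** 2 for v in s) / (len(s) - 1)

      print("analog:", mean_var(analog()))   # mean ~ exp(-4) ~ 0.0183
      print("biased:", mean_var(biased()))   # same mean, far smaller variance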

  20. Imidazo[1,2-a]pyrazine inhibitors of phosphoinositide 3-kinase alpha (PI3Kα): 3D-QSAR analysis utilizing the Hybrid Monte Carlo algorithm to refine receptor-ligand complexes for molecular alignment.

    PubMed

    Chadha, N; Jasuja, H; Kaur, M; Singh Bahia, M; Silakari, O

    2014-01-01

    Phosphoinositide 3-kinase alpha (PI3Kα) is a lipid kinase involved in several cellular functions such as cell growth, proliferation, differentiation and survival, and its anomalous regulation leads to cancerous conditions. PI3Kα inhibition completely blocks the cancer signalling pathway, hence it can be explored as an important therapeutic target for cancer treatment. In the present study, docking analysis of 49 selective imidazo[1,2-a]pyrazine inhibitors of PI3Kα was carried out using the QM-Polarized ligand docking (QPLD) program of the Schrödinger software, followed by refinement of the receptor-ligand conformations using the Hybrid Monte Carlo algorithm in the Liaison program; the alignment of the refined inhibitor conformations was then utilized for the development of an atom-based 3D-QSAR model in the PHASE program. Among the five generated models, the best model was selected corresponding to PLS factor 2, displaying the highest value of Q²(test) (0.650). The selected model also displayed high values of r²(train) (0.917), F-value (166.5) and Pearson-r (0.877), and a low value of SD (0.265). The contour plots generated for the selected 3D-QSAR model were correlated with the results of docking simulations. Finally, this combined information generated from 3D-QSAR and docking analysis was used to design new congeners. PMID:24601789

  1. 3D dose distribution calculation in a voxelized human phantom by means of Monte Carlo method.

    PubMed

    Abella, V; Miró, R; Juste, B; Verdú, G

    2010-01-01

    The aim of this work is to provide the reconstruction of a real human voxelized phantom by means of a MATLAB program and the simulation of the irradiation of such a phantom with the photon beam generated in a Theratron 780 (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. The project results in 3D dose mapping calculations inside the voxelized anthropomorphic head phantom. The program provides the voxelization by first processing the CT slices; the process follows a two-dimensional pixel and material identification algorithm on each slice and three-dimensional interpolation in order to describe the phantom geometry via small cubic cells, resulting in an MCNP input deck format output. Dose rates are calculated by using the MCNP5 tool FMESH, a superimposed mesh tally, which gives the track length estimation of the particle flux in units of particles/cm². Furthermore, the particle flux is converted into dose by using the conversion coefficients extracted from the NIST Physical Reference Data. The voxelization using a three-dimensional interpolation technique in combination with the use of the FMESH tool of the MCNP Monte Carlo code offers an optimal simulation which results in 3D dose mapping calculations inside anthropomorphic phantoms. This tool is very useful in radiation treatment assessments, in which voxelized phantoms are widely utilized. PMID:19892556

  2. 3-D Monte Carlo-Based Scatter Compensation in Quantitative I-131 SPECT Reconstruction

    PubMed Central

    Dewaraja, Yuni K.; Ljungberg, Michael; Fessler, Jeffrey A.

    2010-01-01

    We have implemented highly accurate Monte Carlo based scatter modeling (MCS) with 3-D ordered subsets expectation maximization (OSEM) reconstruction for I-131 single photon emission computed tomography (SPECT). The scatter is included in the statistical model as an additive term and attenuation and detector response are included in the forward/backprojector. In the present implementation of MCS, a simple multiple window-based estimate is used for the initial iterations and in the later iterations the Monte Carlo estimate is used for several iterations before it is updated. For I-131, MCS was evaluated and compared with triple energy window (TEW) scatter compensation using simulation studies of a mathematical phantom and a clinically realistic voxel-phantom. Even after just two Monte Carlo updates, excellent agreement was found between the MCS estimate and the true scatter distribution. Accuracy and noise of the reconstructed images were superior with MCS compared to TEW. However, the improvement was not large, and in some cases may not justify the large computational requirements of MCS. Furthermore, it was shown that the TEW correction could be improved for most of the targets investigated here by applying a suitably chosen scaling factor to the scatter estimate. Finally clinical application of MCS was demonstrated by applying the method to an I-131 radioimmunotherapy (RIT) patient study. PMID:20104252

  3. Monte Carlo analysis of magnetic aftereffect phenomena

    NASA Astrophysics Data System (ADS)

    Andrei, Petru; Stancu, Alexandru

    2006-04-01

    Magnetic aftereffect phenomena are analyzed by using the Monte Carlo technique. This technique has the advantage that it can be applied to any model of hysteresis. It is shown that a log t-type dependence of the magnetization can be qualitatively predicted even in the framework of hysteresis models with local history, such as the Jiles-Atherton model. These models are computationally much more efficient than models with global history such as the Preisach model. Numerical results related to the decay of the magnetization as a function of time, as well as to the viscosity coefficient, are presented.

  4. 3D Direct Simulation Monte Carlo Modeling of the Spacecraft Environment of Rosetta

    NASA Astrophysics Data System (ADS)

    Bieler, A. M.; Tenishev, V.; Fougere, N.; Gombosi, T. I.; Hansen, K. C.; Combi, M. R.; Huang, Z.; Jia, X.; Toth, G.; Altwegg, K.; Wurz, P.; Jäckel, A.; Le Roy, L.; Gasc, S.; Calmonte, U.; Rubin, M.; Tzou, C. Y.; Hässig, M.; Fuselier, S.; De Keyser, J.; Berthelier, J. J.; Mall, U. A.; Rème, H.; Fiethe, B.; Balsiger, H.

    2014-12-01

    The European Space Agency's Rosetta mission is the first to escort a comet over an extended time as the comet makes its way through the inner solar system. The ROSINA instrument suite, consisting of a double-focusing mass spectrometer, a time-of-flight mass spectrometer and a pressure sensor, will provide temporally and spatially resolved data on the comet's volatile inventory. The effect of spacecraft outgassing is well known and has been measured with the ROSINA instruments onboard Rosetta throughout the cruise phase. The flux of released neutral gas originating from the spacecraft cannot be distinguished from the cometary signal by the mass spectrometers and varies significantly with solar illumination conditions. For accurate interpretation of the instrument data, a good understanding of spacecraft outgassing is necessary. In this talk we present results simulating the spacecraft environment with the Adaptive Mesh Particle Simulator (AMPS) code. AMPS is a Direct Simulation Monte Carlo code that includes multiple species on a 3D adaptive mesh to describe a full-scale model of the spacecraft environment. We use the triangulated surface model of the spacecraft to implement realistic outgassing rates for different areas on the surface and take shadowing effects into consideration. The resulting particle fluxes are compared to the measurements of the ROSINA experiment, and implications for ROSINA measurements and data analysis are discussed. Spacecraft outgassing has implications for future space missions to rarefied atmospheres as it imposes a limit on the detection of various species.

  5. ORPHEE research reactor: 3D core depletion calculation using Monte-Carlo code TRIPOLI-4®

    NASA Astrophysics Data System (ADS)

    Damian, F.; Brun, E.

    2014-06-01

    ORPHEE is a research reactor located at CEA Saclay. It aims at producing neutron beams for experiments. It is a pool-type reactor (heavy water), and the core is cooled by light water. Its thermal power is 14 MW. The ORPHEE core is 90 cm high and has a cross section of 27×27 cm². It is loaded with eight fuel assemblies characterized by varying numbers of fuel plates. The fuel plates are composed of aluminium and High Enriched Uranium (HEU). It is a once-through core with a fuel cycle length of approximately 100 Equivalent Full Power Days (EFPD) and a maximum burnup of 40%. Various analyses under way at CEA concern the determination of the core neutronic parameters during irradiation. Taking into consideration the geometrical complexity of the core and the quasi-absence of thermal feedback in nominal operation, the 3D core depletion calculations are performed using the Monte-Carlo code TRIPOLI-4® [1,2,3]. A preliminary validation of the depletion calculation was performed on a 2D core configuration by comparison with the deterministic transport code APOLLO2 [4]. The analysis showed the reliability of TRIPOLI-4® in calculating a complex core configuration using a large number of depleting regions with a high level of confidence.

  6. Supernova Spectrum Synthesis for 3D Composition Models with the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Thomas, Rollin

    2002-07-01

    Relying on spherical symmetry when modelling supernova spectra is clearly at best a good approximation. Recent polarization measurements, interesting features in flux spectra, and the clumpy textures of supernova remnants suggest that supernova envelopes are rife with fine structure. To account for this fine structure and create a complete picture of supernovae, new 3D explosion models will be forthcoming. To reconcile these models with observed spectra, 3D radiative transfer will be necessary. We propose a 3D Monte Carlo radiative transfer code, Brute, and improvements that will move it toward a fully self-consistent 3D transfer code. Spectroscopic HST observations of supernovae past, present and future will definitely benefit. Other 3D transfer problems of interest to HST users, such as AGNs, will benefit from the techniques developed.

  7. Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code System.

    Energy Science and Technology Software Center (ESTSC)

    2013-06-24

    Version 07 TART2012 is a coupled neutron-photon Monte Carlo transport code designed to use three-dimensional (3-D) combinatorial geometry. Neutron and/or photon sources as well as neutron-induced photon production can be tracked. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2012 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2012 extends the general utility of the code to even more areas of application than available in previous releases by concentrating on improving the physics, particularly with regard to improved treatment of neutron fission, resonance self-shielding, molecular binding, and extending the input options used by the code. Several utilities are included for creating input files and displaying TART results and data. TART2012 uses the latest ENDF/B-VI, Release 8, data. New for TART2012 is the use of continuous-energy neutron cross sections, in addition to its traditional multigroup cross sections. For neutron interactions, the data are derived using ENDF-ENDL2005 and include both continuous-energy cross sections and 700-group neutron data derived using a combination of ENDF/B-VI, Release 8, and ENDL data. The 700-group structure extends from 10⁻⁵ eV up to 1 GeV. Presently nuclear data are only available up to 20 MeV, so that only 616 of the groups are currently used. For photon interactions, 701-point photon data were derived using the Livermore EPDL97 file. The new 701-point structure extends from 100 eV up to 1 GeV, and is currently used over this entire energy range. TART2012 completely supersedes all older versions of TART, and it is strongly recommended that one use only the most recent version of TART2012 and its data files. Check author's homepage for related information: http

  8. 3-D Direct Simulation Monte Carlo modeling of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Su, C.; Finklenburg, S.; Rubin, M.; Ip, W.; Keller, H.; Knollenberg, J.; Kührt, E.; Lai, I.; Skorov, Y.; Thomas, N.; Wu, J.; Chen, Y.

    2014-07-01

    After deep-space hibernation, ESA's Rosetta spacecraft has been successfully woken up and obtained the first images of comet 67P/Churyumov-Gerasimenko (C-G) in March 2014. It is expected that Rosetta will rendezvous with comet 67P and start to observe the nucleus and coma of the comet in the middle of 2014. As the comet approaches the Sun, a significant increase in activity is expected. Our aim is to understand the physical processes in the coma with the help of modeling in order to interpret the resulting measurements and establish observational and data analysis strategies. DSMC (Direct Simulation Monte Carlo) [1] is a very powerful numerical method to study rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow [2,3]. Comparisons between DSMC and fluid techniques have also been performed to establish the limits of these techniques [2,4]. The drawback with 3D DSMC is that it is computationally highly intensive and thus time consuming. However, the performance can be dramatically increased with parallel computing on Graphic Processor Units (GPUs) [5]. We have already studied a case with comet 9P/Tempel 1 where the Deep Impact observations were used to define the shape of the nucleus and the outflow was simulated with the DSMC approach [6,7]. For comet 67P, we intend to determine the gas flow field in the innermost coma and the surface outgassing properties from analyses of the flow field, to investigate dust acceleration by gas drag, and to compare with observations (including time variability). The boundary conditions are implemented with a nucleus shape model [8] and thermal models which are based on the surface heat-balance equation. Several different parameter sets have been investigated. The calculations have been performed using the PDSC++ (Parallel Direct Simulation Monte Carlo) code [9] developed by Wu and his coworkers [10-12]. Simulation tasks can be accomplished within 24

  9. A Monte Carlo correction for the effect of Compton scattering in 3-D PET brain imaging

    SciTech Connect

    Levin, C.S.; Dahlbom, M.; Hoffman, E.J.

    1995-08-01

    A Monte Carlo simulation has been developed to simulate and correct for the effect of Compton scatter in 3-D acquired PET brain scans. The method utilizes the 3-D reconstructed image volume as the source intensity distribution for a photon-tracking Monte Carlo simulation. It is assumed that the number of events in each pixel of the image represents the isotope concentration at that location in the brain. The history of each annihilation photon's interactions in the scattering medium is followed, and the sinograms for the scattered and unscattered photon pairs are generated in a simulated 3-D PET acquisition. The calculated scatter contribution is used to correct the original data set. The method is general and can be applied to any scanner configuration or geometry. In its current form the simulation requires 25 hours on a single Sparc 10 CPU when every pixel in a 15-plane, 128 x 128 pixel image volume is sampled, and less than 2 hours when 16 pixels (4 x 4) are grouped as a single pixel. Results of the correction applied to 3-D human and phantom studies are presented.

  10. A graphical user interface for calculation of 3D dose distribution using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Chow, J. C. L.; Leung, M. K. K.

    2008-02-01

    A software graphical user interface (GUI) for calculation of 3D dose distributions using Monte Carlo (MC) simulation is developed using MATLAB. This GUI (DOSCTP) provides a user-friendly platform for DICOM CT-based dose calculation using the EGSnrcMP-based DOSXYZnrc code. It offers numerous features not found in DOSXYZnrc, such as the ability to use multiple beams from different phase-space files, and has built-in dose analysis and visualization tools. DOSCTP is written completely in MATLAB, with integrated access to DOSXYZnrc and CTCREATE. The program functions may be divided into four subgroups, namely beam placement, MC simulation with DOSXYZnrc, dose visualization, and export, each controlled by separate routines. The verification of DOSCTP was carried out by comparing plans with different beam arrangements (multi-beam/photon arc) on an inhomogeneous phantom as well as patient CT between the GUI and Pinnacle3. DOSCTP was developed and verified with the following features: (1) a built-in voxel editor to modify CT-based DOSXYZnrc phantoms for research purposes; (2) multi-beam placement, which cannot be achieved using the current DOSXYZnrc code; (3) export of the treatment plan, including the dose distributions, contours and image set, to a commercial treatment planning system such as Pinnacle3 or to CERR using the RTOG format for plan evaluation and comparison; (4) a built-in RTOG-compatible dose reviewer for dose visualization and analysis, such as finding the volume of hot/cold spots in the 3D dose distributions based on a user threshold. DOSCTP greatly simplifies the use of DOSXYZnrc and CTCREATE, and offers numerous features not found in the original user code. Moreover, since phase-space beams can be defined and generated by the user, it is a particularly useful tool for carrying out plans using specifically designed irradiators/accelerators that cannot be found in the linac library of commercial treatment planning systems.

  11. Ultrafast vectorized multispin coding algorithm for the Monte Carlo simulation of the 3D Ising model

    NASA Astrophysics Data System (ADS)

    Wansleben, Stephan

    1987-02-01

    A new Monte Carlo algorithm for the 3D Ising model and its implementation on a CDC CYBER 205 is presented. This approach is applicable to lattices with sizes between 3·3·3 and 192·192·192 with periodic boundary conditions, and is adjustable to various kinetic models. It simulates a canonical ensemble at a given temperature, generating a new random number for each spin flip. For the Metropolis transition probability the speed is 27 ns per update on a two-pipe CDC CYBER 205 with 2 million words of physical memory, i.e. 1.35 times the cycle time per update, or 38 million updates per second.
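
    For reference, the per-site physics that the multispin-coding algorithm vectorizes is the plain Metropolis update below, a scalar Python sketch; the CYBER-205 version packs one spin per bit and updates whole machine words of spins at once, but the acceptance rule per spin is exactly this.

      import math
      import random

      L, T, J = 16, 4.5115, 1.0    # lattice size; T near the 3D critical point
      spins = [[[random.choice((-1, 1)) for _ in range(L)]
                for _ in range(L)] for _ in range(L)]

      def sweep():
          for x in range(L):
              for y in range(L):
                  for z in range(L):
                      s = spins[x][y][z]
                      nn = (spins[(x + 1) % L][y][z] + spins[(x - 1) % L][y][z] +
                            spins[x][(y + 1) % L][z] + spins[x][(y - 1) % L][z] +
                            spins[x][y][(z + 1) % L] + spins[x][y][(z - 1) % L])
                      dE = 2.0 * J * s * nn           # energy cost of flipping s
                      if dE <= 0.0 or random.random() < math.exp(-dE / T):
                          spins[x][y][z] = -s         # Metropolis acceptance

      for _ in range(10):
          sweep()
      m = sum(s for plane in spins for row in plane for s in row) / L**3
      print(f"magnetization per spin: {m:+.3f}")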

  12. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  13. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

    SciTech Connect

    Cullen, D.E.

    1997-11-22

    TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  14. A Monte Carlo method for combined segregation and linkage analysis.

    PubMed Central

    Guo, S W; Thompson, E A

    1992-01-01

    We introduce a Monte Carlo approach to combined segregation and linkage analysis of a quantitative trait observed in an extended pedigree. In conjunction with the Monte Carlo method of likelihood-ratio evaluation proposed by Thompson and Guo, the method provides for estimation and hypothesis testing. The greatest attraction of this approach is its ability to handle complex genetic models and large pedigrees. Two examples illustrate the practicality of the method. One is of simulated data on a large pedigree; the other is a reanalysis of published data previously analyzed by other methods. PMID:1415253

  15. Development, validation, and implementation of a patient-specific Monte Carlo 3D internal dosimetry platform

    NASA Astrophysics Data System (ADS)

    Besemer, Abigail E.

    Targeted radionuclide therapy is emerging as an attractive treatment option for a broad spectrum of tumor types because it has the potential to simultaneously eradicate both the primary tumor site and the metastatic disease throughout the body. Patient-specific absorbed dose calculations for radionuclide therapies are important for reducing the risk of normal tissue complications and optimizing tumor response. However, the only FDA-approved software for internal dosimetry calculates doses based on the MIRD methodology, which estimates mean organ doses using activity-to-dose scaling factors tabulated from standard phantom geometries. Despite the improved dosimetric accuracy afforded by direct Monte Carlo dosimetry methods, these methods are not widely used in routine clinical practice because of the complexity of implementation, the lack of relevant standard protocols, and longer dose calculation times. The main goal of this work was to develop a Monte Carlo internal dosimetry platform in order to (1) calculate patient-specific voxelized dose distributions in a clinically feasible time frame, (2) examine and quantify the dosimetric impact of various parameters and methodologies used in 3D internal dosimetry methods, and (3) develop a multi-criteria treatment planning optimization framework for multi-radiopharmaceutical combination therapies. This platform utilizes serial PET/CT or SPECT/CT images to calculate voxelized 3D internal dose distributions with the Monte Carlo code Geant4. Dosimetry can be computed for any diagnostic or therapeutic radiopharmaceutical and for both pre-clinical and clinical applications. In this work, the platform's dosimetry calculations were successfully validated against previously published reference dose values calculated in standard phantoms for a variety of radionuclides, over a wide range of photon and electron energies, and for many different organs and tumor sizes. Retrospective dosimetry was also calculated for various pre

  16. Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems

    SciTech Connect

    Martinez, E.; Monasterio, P.R.; Marian, J.

    2011-02-20

    An extension of the synchronous parallel kinetic Monte Carlo (spkMC) algorithm developed by Martinez et al. [J. Comp. Phys. 227 (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are resolved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of serial calculations, which confirms the statistical validity of our algorithm. We have analyzed the parallel efficiency of spkMC and find that it scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, with very good agreement with state-of-the-art multispin simulations.

  17. Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems

    NASA Astrophysics Data System (ADS)

    Martínez, E.; Monasterio, P. R.; Marian, J.

    2011-02-01

    An extension of the synchronous parallel kinetic Monte Carlo (spkMC) algorithm developed by Martinez et al. [J. Comp. Phys. 227 (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are resolved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of serial calculations, which confirms the statistical validity of our algorithm. We have analyzed the parallel efficiency of spkMC and find that it scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, with very good agreement with state-of-the-art multispin simulations.
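
    The null-event device at the heart of the synchronous algorithm can be sketched serially: every domain shares a global rate ceiling R_max, executes a real event with probability R_i/R_max and a null event otherwise, and all clocks advance by the same exponentially distributed step, so every domain's clock stays current while its effective event rate remains R_i. The rates below are arbitrary illustrative numbers, and the real algorithm distributes the domains across processors.

      import random

      R_MAX = 10.0   # global rate ceiling shared by every domain
      domains = [{"rate": r, "events": 0} for r in (2.0, 5.0, 9.0)]

      t = 0.0
      for _ in range(100_000):
          t += random.expovariate(R_MAX)   # one synchronous step for all clocks
          for d in domains:
              if random.random() < d["rate"] / R_MAX:
                  d["events"] += 1         # real event: would change the lattice
              # otherwise a null event: time advances, state does not

      for d in domains:
          print(f"rate {d['rate']}: {d['events'] / t:.2f} events per unit time")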

  18. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
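
    A minimal Bayesian Monte Carlo sketch of the kind of analysis described: sample uncertain inputs from subjective priors, weight each model run by the likelihood of an observation, and report the reweighted (posterior) uncertainty. The linear "air quality model" and all numbers below are toy assumptions standing in for the Lagrangian photochemical model.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 50_000

      # subjective priors on two uncertain inputs (e.g. an emission-rate scaling
      # and a reaction-rate scaling) -- assumed values
      emis = rng.lognormal(mean=0.0, sigma=0.5, size=n)
      rate = rng.lognormal(mean=0.0, sigma=0.3, size=n)

      ozone = 40.0 + 25.0 * emis - 10.0 * rate   # toy model prediction [ppb]

      # Bayesian update: weight each run by the likelihood of observing 60 ppb
      obs, sigma_obs = 60.0, 5.0
      w = np.exp(-0.5 * ((ozone - obs) / sigma_obs) ** 2)
      w /= w.sum()

      post_mean = float(np.sum(w * ozone))
      post_sd = float(np.sqrt(np.sum(w * (ozone - post_mean) ** 2)))
      print(f"prior mean {ozone.mean():.1f} ppb -> "
            f"posterior {post_mean:.1f} +/- {post_sd:.1f} ppb")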

  19. OptogenSIM: a 3D Monte Carlo simulation platform for light delivery design in optogenetics

    PubMed Central

    Liu, Yuming; Jacques, Steven L.; Azimipour, Mehdi; Rogers, Jeremy D.; Pashaie, Ramin; Eliceiri, Kevin W.

    2015-01-01

    Optimizing light delivery for optogenetics is critical in order to accurately stimulate the neurons of interest while reducing nonspecific effects such as tissue heating or photodamage. Light distribution is typically predicted using the assumption of tissue homogeneity, which oversimplifies light transport in heterogeneous brain. Here, we present an open-source 3D simulation platform, OptogenSIM, which eliminates this assumption. This platform integrates a voxel-based 3D Monte Carlo model, generic optical property models of brain tissues, and a well-defined 3D mouse brain tissue atlas. The application of this platform in brain data models demonstrates that brain heterogeneity has moderate to significant impact depending on application conditions. Estimated light density contours can show the region of any specified power density in the 3D brain space and thus can help optimize the light delivery settings, such as the optical fiber position, fiber diameter, fiber numerical aperture, light wavelength and power. OptogenSIM is freely available and can be easily adapted to incorporate additional brain atlases. PMID:26713200

  20. OptogenSIM: a 3D Monte Carlo simulation platform for light delivery design in optogenetics.

    PubMed

    Liu, Yuming; Jacques, Steven L; Azimipour, Mehdi; Rogers, Jeremy D; Pashaie, Ramin; Eliceiri, Kevin W

    2015-12-01

    Optimizing light delivery for optogenetics is critical in order to accurately stimulate the neurons of interest while reducing nonspecific effects such as tissue heating or photodamage. Light distribution is typically predicted using the assumption of tissue homogeneity, which oversimplifies light transport in heterogeneous brain. Here, we present an open-source 3D simulation platform, OptogenSIM, which eliminates this assumption. This platform integrates a voxel-based 3D Monte Carlo model, generic optical property models of brain tissues, and a well-defined 3D mouse brain tissue atlas. The application of this platform in brain data models demonstrates that brain heterogeneity has moderate to significant impact depending on application conditions. Estimated light density contours can show the region of any specified power density in the 3D brain space and thus can help optimize the light delivery settings, such as the optical fiber position, fiber diameter, fiber numerical aperture, light wavelength and power. OptogenSIM is freely available and can be easily adapted to incorporate additional brain atlases. PMID:26713200

  1. 3D electro-thermal Monte Carlo study of transport in confined silicon devices

    NASA Astrophysics Data System (ADS)

    Mohamed, Mohamed Y.

    The simultaneous explosion of portable microelectronics devices and the rapid shrinking of microprocessor size have provided a tremendous motivation to scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less probable and practical. As new technologies overcome these limitations, they face new, unexpected problems, including the ability to accurately simulate and predict the behavior of these devices, and to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum correction becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering. The scattering rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature dependent scattering rates. One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non

  2. Error propagation in the computation of volumes in 3D city models with the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Biljecki, F.; Ledoux, H.; Stoter, J.

    2014-11-01

    This paper describes the analysis of the propagation of positional uncertainty in 3D city models to the uncertainty in the computation of their volumes. Current work related to error propagation in GIS is limited to 2D data and 2D GIS operations, especially of rasters. In this research we have (1) developed two engines, one that generates random 3D buildings in CityGML in multiple LODs, and one that simulates acquisition errors to the geometry; (2) performed an error propagation analysis on volume computation based on the Monte Carlo method; and (3) worked towards establishing a framework for investigating error propagation in 3D GIS. The results of the experiments show that a comparatively small error in the geometry of a 3D city model may cause significant discrepancies in the computation of its volume. This has consequences for several applications, such as in estimation of energy demand and property taxes. The contribution of this work is twofold: this is the first error propagation analysis in 3D city modelling, and the novel approach and the engines that we have created can be used for analysing most of 3D GIS operations, supporting related research efforts in the future.
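
    A toy version of this experiment: add Gaussian acquisition noise to the vertices of a simple solid and propagate it through the volume computation by Monte Carlo. The cube, the 2 cm noise level, and the bounding-box volume formula are illustrative assumptions, not the paper's engines; note how even the mean is biased by the noise, which is exactly the kind of systematic effect such an analysis reveals.

      import numpy as np

      rng = np.random.default_rng(1)
      # vertices of a 10 m cube standing in for a building, in metres
      base = np.array([[x, y, z] for x in (0, 10) for y in (0, 10) for z in (0, 10)],
                      dtype=float)

      def volume(v):
          # bounding-box volume; a real city model needs the solid's actual
          # (possibly non-convex) volume, e.g. via tetrahedralization
          return float(np.prod(v.max(axis=0) - v.min(axis=0)))

      sigma = 0.02   # assumed 2 cm acquisition error per coordinate
      vols = np.array([volume(base + rng.normal(0.0, sigma, base.shape))
                       for _ in range(20_000)])

      print(f"true {volume(base):.1f} m^3, "
            f"mean {vols.mean():.1f} m^3, sd {vols.std():.2f} m^3")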

  3. A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code

    Energy Science and Technology Software Center (ESTSC)

    1998-06-12

    TART97 is a coupled neutron-photon, 3 dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system canmore » save you a great deal of time and energy. TART 97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and ist data files.« less

  4. Self-assembly of ABC triblock copolymers under 3D soft confinement: a Monte Carlo study.

    PubMed

    Yan, Nan; Zhu, Yutian; Jiang, Wei

    2016-01-21

    Under three-dimensional (3D) soft confinement, block copolymers can self-assemble into unique nanostructures that cannot be fabricated in an un-confined space. Linear ABC triblock copolymers containing three chemically distinct polymer blocks possess relatively complex chain architecture, which can be a promising candidate for the 3D confined self-assembly. In the current study, the Monte Carlo technique was applied in a lattice model to study the self-assembly of ABC triblock copolymers under 3D soft confinement, which corresponds to the self-assembly of block copolymers confined in emulsion droplets. We demonstrated how to create various nanostructures by tuning the symmetry of ABC triblock copolymers, the incompatibilities between different block types, and solvent properties. Besides common pupa-like and bud-like nanostructures, our simulations predicted various unique self-assembled nanostructures, including a striped-pattern nanoparticle with intertwined A-cages and C-cages, a pyramid-like nanoparticle with four Janus B-C lamellae adhered onto its four surfaces, an ellipsoidal nanoparticle with a dumbbell-like A-core and two Janus B-C lamellae and a Janus B-C ring surrounding the A-core, a spherical nanoparticle with a A-core and a helical Janus B-C stripe around the A-core, a cubic nanoparticle with a cube-shape A-core and six Janus B-C lamellae adhered onto the surfaces of the A-cube, and a spherical nanoparticle with helical A, B and C structures, from the 3D confined self-assembly of ABC triblock copolymers. Moreover, the formation mechanisms of some typical nanostructures were also examined by the variations of the contact numbers with time and a series of snapshots at different Monte Carlo times. It is found that ABC triblock copolymers usually aggregate into a loose aggregate at first, and then the microphase separation between A, B and C blocks occurs, resulting in the formation of various nanostructures. PMID:26571300

  5. Improvement of 3d Monte Carlo Localization Using a Depth Camera and Terrestrial Laser Scanner

    NASA Astrophysics Data System (ADS)

    Kanai, S.; Hatakeyama, R.; Date, H.

    2015-05-01

    An effective and accurate localization method in three-dimensional indoor environments is a key requirement for indoor navigation and lifelong robotic assistance. So far, Monte Carlo Localization (MCL) has provided one of the promising solutions for indoor localization. Previous work on MCL has mostly been limited to 2D motion estimation in a planar map, and a few 3D MCL approaches have recently been proposed. However, their localization accuracy and efficiency remain at an unsatisfactory level (a few hundred millimetres of error at up to a few FPS), or have not been fully verified against precise ground truth. The purpose of this study is therefore to improve the accuracy and efficiency of 6DOF motion estimation in 3D MCL for indoor localization. Firstly, a terrestrial laser scanner is used for creating a precise 3D mesh model as an environment map, and a professional-level depth camera is installed as an outer sensor. GPU scene simulation is also introduced to increase the speed of the prediction phase in MCL. For further improvement, GPGPU programming is implemented to speed up the likelihood estimation phase, and anisotropic particle propagation based on the observations from an inertial sensor is introduced into MCL. Improvements in the localization accuracy and efficiency are verified by comparison with a previous MCL method. As a result, it was confirmed that the GPGPU-based algorithm was effective in increasing the computational efficiency to 10-50 FPS when the number of particles remained below a few hundred. On the other hand, the inertial sensor-based algorithm reduced the localization error to a median of 47 mm even with fewer particles. The results show that our proposed 3D MCL method outperforms the previous one in accuracy and efficiency.
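
    The predict / weight / resample cycle that this paper accelerates is sketched below in one dimension; the depth-camera likelihood is reduced to a single noisy range measurement to a wall at a known map position, and all numbers are invented for illustration.

      import math
      import random

      WALL = 20.0                # known map feature position [m] (toy "map")
      true_x, step = 2.0, 0.5    # true robot state and per-cycle motion
      particles = [random.uniform(0.0, 15.0) for _ in range(500)]

      def likelihood(err, sigma=0.2):
          return math.exp(-0.5 * (err / sigma) ** 2)

      for _ in range(20):
          true_x += step
          z = (WALL - true_x) + random.gauss(0.0, 0.2)  # simulated range reading
          # predict: propagate each particle with noisy odometry
          particles = [p + step + random.gauss(0.0, 0.1) for p in particles]
          # weight: compare each particle's predicted range with the measurement
          weights = [likelihood((WALL - p) - z) for p in particles]
          # resample: draw a new particle set in proportion to the weights
          particles = random.choices(particles, weights=weights, k=len(particles))

      estimate = sum(particles) / len(particles)
      print(f"true position {true_x:.2f} m, MCL estimate {estimate:.2f} m")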

  6. 3D Visualization of Monte-Carlo Simulations of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy ion tracks include a very high energy deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still-open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to biomolecules and the tissue matrix, considered the most important lesions for acute and long-term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with the radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy ion biophysics.
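
    The "voxelization" step lends itself to a compact sketch: given the positions and energies of individual deposition events from a track-structure code, the deposits are binned into a 3D grid and converted to dose. The grid size, voxel pitch and water density are illustrative assumptions (Python):

        import numpy as np

        def voxelize_deposits(positions_nm, energies_eV, voxel_nm=20.0, grid=64):
            """Bin point energy deposits (arrays of shape (M, 3) and (M,))
            into a cubic voxel grid centred on the track and convert to
            absorbed dose in Gy, assuming liquid water."""
            idx = np.floor(positions_nm / voxel_nm).astype(int) + grid // 2
            inside = np.all((idx >= 0) & (idx < grid), axis=1)
            edep = np.zeros((grid, grid, grid))
            np.add.at(edep, tuple(idx[inside].T), energies_eV[inside])
            voxel_mass_kg = 1000.0 * (voxel_nm * 1e-9) ** 3  # water, 1000 kg/m^3
            return edep * 1.602176634e-19 / voxel_mass_kg   # eV -> J -> Gy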

  7. A Post-Monte-Carlo Sensitivity Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e. with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
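
    Under a linear-response assumption, the kind of ranking such a tool produces can be sketched in a few lines of Python: inputs are scored by their squared correlation with the output, a first-order proxy for their share of the output variance. This is a generic illustration, not the SATOOL algorithm itself.

        import numpy as np

        def rank_sensitive_inputs(inputs, output, names):
            """Rank input variables (columns of `inputs`) by squared Pearson
            correlation with the Monte Carlo output sample."""
            scores = []
            for j, name in enumerate(names):
                r = np.corrcoef(inputs[:, j], output)[0, 1]
                scores.append((name, r * r))
            return sorted(scores, key=lambda item: item[1], reverse=True)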

  8. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation onto optimal 3D thread blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that the spin-level parallelism is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales well in a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended the accessible simulation sizes to L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
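
    The two ingredients specific to this work, the replica-exchange step and the adaptive mid-point insertion, can be sketched in Python as below; replicas are represented only by their energies, and the data layout is an assumption for illustration.

        import numpy as np

        def exchange_pass(energies, betas, rng):
            """One pass of swap attempts between adjacent replicas, using the
            standard acceptance min(1, exp(dbeta * dE)). Returns the per-pair
            acceptance indicators used to measure exchange rates."""
            accepted = np.zeros(len(betas) - 1)
            for i in range(len(betas) - 1):
                delta = (betas[i + 1] - betas[i]) * (energies[i + 1] - energies[i])
                if delta >= 0.0 or rng.random() < np.exp(delta):
                    energies[i], energies[i + 1] = energies[i + 1], energies[i]
                    accepted[i] = 1.0
            return accepted

        def insert_midpoint(temperatures, acceptance_rates):
            """Adaptive step: split the temperature gap whose measured
            exchange rate is most compromised, as described above."""
            i = int(np.argmin(acceptance_rates))
            midpoint = 0.5 * (temperatures[i] + temperatures[i + 1])
            return sorted(list(temperatures) + [midpoint])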

  9. Asymptotic analysis of spatial discretizations in implicit Monte Carlo

    SciTech Connect

    Densmore, Jeffery D

    2008-01-01

    We perform an asymptotic analysis of spatial discretizations in Implicit Monte Carlo (IMC). We consider two asymptotic scalings: one that represents a time step that resolves the mean-free time, and one that corresponds to a fixed, optically large time step. We show that only the latter scaling results in a valid spatial discretization of the proper diffusion equation, and thus we conclude that IMC only yields accurate solutions when using optically large spatial cells if time steps are also optically large. We demonstrate the validity of our analysis with a set of numerical examples.

  10. Asymptotic analysis of spatial discretizations in implicit Monte Carlo

    SciTech Connect

    Densmore, Jeffery D

    2009-01-01

    We perform an asymptotic analysis of spatial discretizations in Implicit Monte Carlo (IMC). We consider two asymptotic scalings: one that represents a time step that resolves the mean-free time, and one that corresponds to a fixed, optically large time step. We show that only the latter scaling results in a valid spatial discretization of the proper diffusion equation, and thus we conclude that IMC only yields accurate solutions when using optically large spatial cells if time steps are also optically large. We demonstrate the validity of our analysis with a set of numerical examples.

  11. Variance reduction in Monte Carlo analysis of rarefied gas diffusion

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffuse through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random-walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov-walk payoff is retained but its variance is reduced, so that the Monte Carlo result has a much smaller error.
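
    The biasing idea can be illustrated with a toy geometric-absorption walk in Python: absorption is sampled with a biased probability, each survival multiplies the payoff by the likelihood ratio, and the expectation is preserved while the variance shrinks for a suitable bias. The model is an invented stand-in, not Perlmutter's diffusion problem.

        import numpy as np

        def biased_walk_payoff(p_true, p_bias, gamma, n_walks, seed=0):
            """Estimate E[gamma**N], N the geometric absorption step, sampling
            absorption with p_bias instead of p_true and weighting the payoff
            so the expected value is unchanged."""
            rng = np.random.default_rng(seed)
            scores = np.empty(n_walks)
            for k in range(n_walks):
                steps, weight = 0, 1.0
                while True:
                    steps += 1
                    if rng.random() < p_bias:        # biased absorption event
                        scores[k] = (gamma ** steps) * weight * (p_true / p_bias)
                        break
                    weight *= (1.0 - p_true) / (1.0 - p_bias)
            return scores.mean(), scores.std(ddof=1) / np.sqrt(n_walks)

    Comparing the returned standard error for p_bias = p_true against other bias values shows directly how biased transitions trade sampling effort for variance.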

  12. Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124

    NASA Astrophysics Data System (ADS)

    Moreau, M.; Buvat, I.; Ammour, L.; Chouin, N.; Kraeber-Bodéré, F.; Chérel, M.; Carlier, T.

    2015-03-01

    Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small-animal PET scanner Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated, in addition to a Siddon-based ray-tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and of the physical effects in a uniform water-equivalent phantom was sufficient to obtain reasonable quantitative accuracy in both homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.
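
    Once such a Monte Carlo system matrix is available, the reconstruction itself is typically a standard MLEM iteration. A minimal dense-matrix Python sketch follows (real system matrices are sparse and far larger; names are illustrative):

        import numpy as np

        def mlem_reconstruct(system_matrix, projections, n_iterations=50):
            """Standard MLEM update x <- x * A^T(p / Ax) / A^T 1; in the study
            above the elements of A would be estimated from GATE simulations."""
            A = system_matrix                    # (n_sinogram_bins, n_voxels)
            x = np.ones(A.shape[1])              # uniform initial activity
            sensitivity = A.sum(axis=0)          # column sums, A^T 1
            for _ in range(n_iterations):
                forward = A @ x
                ratio = np.divide(projections, forward,
                                  out=np.zeros_like(forward),
                                  where=forward > 0)
                x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
            return x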

  13. Conceptual detector development and Monte Carlo simulation of a novel 3D breast computed tomography system

    NASA Astrophysics Data System (ADS)

    Ziegle, Jens; Müller, Bernhard H.; Neumann, Bernd; Hoeschen, Christoph

    2016-03-01

    A new 3D breast computed tomography (CT) system is under development, enabling imaging of microcalcifications in a fully uncompressed breast, including posterior chest-wall tissue. The system uses a steered electron beam impinging on small tungsten targets surrounding the breast to emit X-rays. A realization of the corresponding detector concept is presented in this work and is modeled through Monte Carlo simulations in order to quantify first characteristics of transmission and secondary photons. The modeled system comprises a vertical arrangement of linear detectors held by a case that also hosts the breast. The detectors are separated by gaps to allow the passage of X-rays towards the breast volume; the detectors located directly opposite the gaps detect the incident X-rays. Mechanically moving parts in an imaging system increase the duration of image acquisition and can thus cause motion artifacts, so a major advantage of the presented design is the combination of fixed detectors with a fast steering electron beam, which enables a greatly reduced scan time. Potential motion artifacts are thereby reduced, improving the visualization of small structures such as microcalcifications. The simulation of a single projection shows high attenuation by parts of the detector electronics, causing low count levels at the opposing detectors, which would require a flat-field correction; however, it also shows a secondary-to-transmission ratio of all counted X-rays of less than 1 percent. Additionally, a single slice with details of various sizes was reconstructed using filtered backprojection. The smallest detail still visible in the reconstructed image has a size of 0.2 mm.

  14. Diffusion Monte Carlo for Accurate Dissociation Energies of 3d Transition Metal Containing Molecules.

    PubMed

    Doblhoff-Dier, Katharina; Meyer, Jörg; Hoggan, Philip E; Kroes, Geert-Jan; Wagner, Lucas K

    2016-06-14

    Transition metals and transition metal compounds are important to catalysis, photochemistry, and many superconducting systems. We study the performance of diffusion Monte Carlo (DMC) applied to transition-metal-containing dimers (TMCDs) using single-determinant Slater-Jastrow trial wavefunctions and investigate the possible influence of the locality and pseudopotential errors. We find that the locality approximation can introduce nonsystematic errors of up to several tens of kilocalories per mole in the absolute energy of Cu and CuH if Ar-core or Mg-core pseudopotentials (PPs) are used for the 3d transition metal atoms. Even for energy differences such as binding energies, errors due to the locality approximation can be problematic if chemical accuracy is sought. The use of the Ne-core PPs developed by Burkatzki et al. (J. Chem. Phys. 2008, 129, 164115), the use of linear energy minimization rather than unreweighted variance minimization for the optimization of the Jastrow function, and the use of large Jastrow parametrizations all reduce the locality errors. In the second section of this article, we study the general performance of DMC for 3d TMCDs using a database of binding energies of 20 TMCDs for which comparatively accurate experimental data are available. Comparing our best DMC results to these data, we find a mean unsigned error (MUE) of 4.5 kcal/mol. This compares well with the achievable accuracy in CCSDT(2)Q (MUE = 4.6 kcal/mol) and the best all-electron DFT results (MUE = 4.5 kcal/mol) for the same set of systems (Truhlar et al. J. Chem. Theory Comput. 2015, 11, 2036-2052). The mean errors in DMC depend less on the exchange-correlation functionals used to generate the trial wavefunction than the corresponding mean errors in the underlying DFT calculations. Furthermore, the QMC results obtained for each molecule individually vary less with the functionals used. These observations are relevant for systems such as

  15. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2013-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code; the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphics processing unit and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  16. Monte Carlo Test Assembly for Item Pool Analysis and Extension

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2005-01-01

    A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…
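
    The uniform-sampling property can be illustrated with a rejection sketch in Python: subsets are drawn uniformly from the pool until one meets the constraints, so every feasible test is equally likely. The predicate is_feasible stands in for the content and statistical constraints and is a hypothetical placeholder, not the authors' algorithm.

        import numpy as np

        def assemble_test(pool_size, test_length, is_feasible, seed=0,
                          max_attempts=100000):
            """Draw item subsets uniformly at random until one satisfies the
            constraints encoded in is_feasible(items)."""
            rng = np.random.default_rng(seed)
            for _ in range(max_attempts):
                items = rng.choice(pool_size, size=test_length, replace=False)
                if is_feasible(items):
                    return np.sort(items)
            raise RuntimeError("no feasible test found within the attempt budget")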

  17. A fast vectorized multispin coding algorithm for 3D Monte Carlo simulations using Kawasaki spin-exchange dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, M. Q.

    1989-09-01

    A new Monte Carlo algorithm for 3D Kawasaki spin-exchange simulations and its implementation on a CDC CYBER 205 is presented. This approach is applicable to lattices with sizes between 4×4×4 and 256×L2×L3 ((L2+2)(L3+4)/4⩽65535) and periodic boundary conditions. It is adaptable to various kinetic models in which the total magnetization is conserved. A maximum speed of 10 million steps per second can be reached for the 3D Ising model with the Metropolis rate.
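
    Stripped of the multispin coding and vectorization that give the paper its speed, one Metropolis sweep of Kawasaki spin-exchange dynamics looks as follows in Python (J = 1; a plain array stands in for the packed multispin words):

        import numpy as np

        def swap_energy_change(spins, a, b):
            """Energy change for exchanging opposite spins at nearest-neighbour
            sites a and b on a periodic lattice; the shared a-b bond is the
            same before and after the swap, so it is excluded."""
            L = spins.shape[0]
            def field(site, exclude):
                h = 0
                for axis in range(3):
                    for step in (-1, 1):
                        nb = list(site)
                        nb[axis] = (nb[axis] + step) % L
                        if tuple(nb) != exclude:
                            h += spins[tuple(nb)]
                return h
            return 2 * spins[a] * field(a, b) + 2 * spins[b] * field(b, a)

        def kawasaki_sweep(spins, beta, rng):
            """One Metropolis sweep; exchanging antiparallel neighbours
            conserves the total magnetization, as required above."""
            L = spins.shape[0]
            for _ in range(spins.size):
                a = tuple(rng.integers(0, L, 3))
                axis = int(rng.integers(0, 3))
                b = list(a)
                b[axis] = (b[axis] + 1) % L
                b = tuple(b)
                if spins[a] == spins[b]:
                    continue                 # equal spins: swap changes nothing
                dE = swap_energy_change(spins, a, b)
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[a], spins[b] = spins[b], spins[a]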

  18. RMC - A Monte Carlo code for reactor physics analysis

    SciTech Connect

    Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, A.; Yu, J.; Sun, J.; Fan, X.; Yu, G.

    2013-07-01

    A new Monte Carlo neutron transport code, RMC, is being developed by the Department of Engineering Physics, Tsinghua University, Beijing as a tool for reactor physics analysis on high-performance computing platforms. To meet the requirements of reactor analysis, RMC now provides criticality calculation, fixed-source calculation, burnup calculation and kinetics simulations. Techniques for geometry treatment, a new burnup algorithm, source-convergence acceleration, massive tallies and parallel calculation, and temperature-dependent cross-section processing have been researched and implemented in RMC to improve its efficiency. Validation results for criticality calculations, burnup calculations, source-convergence acceleration, tally performance and parallel performance presented in this paper demonstrate the capability of RMC to handle reactor analysis problems with good performance. (authors)

  19. Iterative Monte Carlo analysis of spin-dependent parton distributions

    NASA Astrophysics Data System (ADS)

    Sato, Nobuo; Melnitchouk, W.; Kuhn, S. E.; Ethier, J. J.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-04-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  20. Applicability of 3D Monte Carlo simulations for local values calculations in a PWR core

    NASA Astrophysics Data System (ADS)

    Bernard, Franck; Cochet, Bertrand; Jinaphanh, Alexis; Jacquet, Olivier

    2014-06-01

    As technical support to the French Nuclear Safety Authority, IRSN has been developing the MORET Monte Carlo code for many years in the framework of criticality safety assessment and is now working to extend its application to reactor physics. For that purpose, besides the validation for criticality safety (more than 2000 benchmarks from the ICSBEP Handbook have been modeled and analyzed), a complementary validation phase for reactor physics has been started, with benchmarks from the IRPhEP Handbook and others. In particular, to evaluate the applicability of MORET and other Monte Carlo codes for local flux or power density calculations in large power reactors, it was decided to contribute to the "Monte Carlo Performance Benchmark" (hosted by OECD/NEA). The aim of this benchmark is to monitor, over the forthcoming decades, the performance progress of detailed Monte Carlo full-core calculations. More precisely, it measures their advancement towards achieving high statistical accuracy in reasonable computation time for local power at the fuel pellet level. A full PWR core is modeled to compute local power densities for more than 6 million fuel regions. This paper presents results obtained at IRSN for this benchmark with MORET, together with comparisons against MCNP. The number of fuel elements is so large that source convergence as well as statistical convergence issues could cause large errors in local tallies, especially in peripheral zones. Various sampling and tracking methods have been implemented in MORET, and their effects on such a complex case have been studied. Beyond convergence issues, computing local values in so many fuel regions could prohibitively slow down neutron tracking. To avoid this, energy grid unification and tally preparation before tracking have been implemented, tested and proved to be successful. In this particular case, IRSN obtained promising results with MORET compared to MCNP in terms of local power densities, standard

  1. Adjoint Monte Carlo method for prostate external photon beam treatment planning: an application to 3D patient anatomy

    NASA Astrophysics Data System (ADS)

    Wang, Brian; Goldstein, Moshe; Xu, X. George; Sahoo, Narayan

    2005-03-01

    Recently, the theoretical framework of the adjoint Monte Carlo (AMC) method was developed using a simplified patient geometry. In this study, we extended our previous work by applying the AMC framework to a 3D anatomical model called VIP-Man, constructed from the Visible Human images. First, the adjoint fluxes for the prostate (the planning target volume, PTV) and for the rectum and bladder (the organs at risk, OARs) were calculated on a spherical surface of 1 m radius, centred at the centre of gravity of the PTV. An importance ratio, defined as the PTV dose divided by the weighted OAR doses, was calculated for each of the available beamlets to select the beam angles. Finally, the detailed doses in the PTV and OARs were calculated using a forward Monte Carlo simulation to include the electron transport. The dose information was then used to generate dose-volume histograms (DVHs). The Pinnacle treatment planning system was also used to generate DVHs for the 3D plans with beam angles obtained from the AMC (3D-AMC) and for a standard six-field conformal radiation therapy plan (3D-CRT). Results show that the DVHs for the prostate from 3D-AMC and the standard 3D-CRT are very similar, showing that both methods can deliver the prescribed dose to the PTV. A substantial improvement in the DVHs for bladder and rectum was found for the 3D-AMC method in comparison to 3D-CRT. However, the 3D-AMC plan is less conformal than the 3D-CRT plan because only the bladder, rectum and PTV are considered when calculating the importance ratios. Nevertheless, this study clearly demonstrated the feasibility of the AMC method for selecting beam directions as part of treatment planning based on anatomical information in a 3D, realistic patient anatomy.
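
    The beam-angle selection step reduces to ranking beamlets by their importance ratio. A small Python sketch under assumed array shapes (the dose-per-beamlet values would come from the adjoint fluxes described above):

        import numpy as np

        def select_beam_angles(ptv_dose_per_beamlet, oar_dose_per_beamlet,
                               oar_weights, n_beams=6):
            """Rank beamlets by importance ratio = PTV dose / weighted OAR dose
            and return the indices of the best few as beam directions.
            oar_dose_per_beamlet has shape (n_beamlets, n_oars)."""
            weighted_oar = oar_dose_per_beamlet @ oar_weights   # (n_beamlets,)
            ratio = ptv_dose_per_beamlet / np.maximum(weighted_oar, 1e-12)
            return np.argsort(ratio)[::-1][:n_beams]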

  2. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger

    2008-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A inflight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  3. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael

    2007-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  4. Photons, Electrons and Positrons Transport in 3D by Monte Carlo Techniques

    Energy Science and Technology Software Center (ESTSC)

    2014-12-01

    Version 04 FOTELP-2014 is a new compact general-purpose version of the previous FOTELP-2K6 code, designed to simulate the transport of photons, electrons and positrons through three-dimensional material and source geometries by Monte Carlo techniques, using the subroutine package PENGEOM from the PENELOPE code under Linux-based and Windows OS. This new version includes the routine ELMAG for electron and positron transport simulation in electric and magnetic fields, a RESUME option, and the routine TIMER for obtaining the starting random number and for measuring the time of simulation.

  5. Photons, Electrons and Positrons Transport in 3D by Monte Carlo Techniques

    SciTech Connect

    2014-12-01

    Version 04 FOTELP-2014 is a new compact general-purpose version of the previous FOTELP-2K6 code, designed to simulate the transport of photons, electrons and positrons through three-dimensional material and source geometries by Monte Carlo techniques, using the subroutine package PENGEOM from the PENELOPE code under Linux-based and Windows OS. This new version includes the routine ELMAG for electron and positron transport simulation in electric and magnetic fields, a RESUME option, and the routine TIMER for obtaining the starting random number and for measuring the time of simulation.

  6. Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code

    SciTech Connect

    Blaise, P.; Colomba, A.

    2012-07-01

    The present work details the first analysis of the 3D void phase conducted during the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide tubes. The void bubble is modelled by a small waterproof 5x5-fuel-pin parallelepiped box of 11 cm height, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble and understand the effects of an increased void fraction along the fuel pin, 3 and 5 bubbles were stacked axially from the core mid-plane. The C/E comparisons obtained with the Monte Carlo code Tripoli4 for both radial and axial fission rate distributions, and in particular the reproduction of the very strong flux gradients at the void/water interfaces as the bubble is displaced along the z-axis, are very satisfactory. This demonstrates both the capability of the code and its library to reproduce this kind of situation and the very good quality of the experimental results, confirming the UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V&V program for APOLLO3, the future deterministic code of CEA starting in 2012, and its V&V benchmarking database. (authors)

  7. Commissioning a CT-compatible LDR tandem and ovoid applicator using Monte Carlo calculation and 3D dosimetry

    PubMed Central

    Adamson, Justus; Newton, Joseph; Yang, Yun; Steffey, Beverly; Cai, Jing; Adamovics, John; Oldham, Mark; Chino, Junzo; Craciunescu, Oana

    2012-01-01

    Purpose: To determine the geometric and dose attenuation characteristics of a new commercially available CT-compatible LDR tandem and ovoid (T&O) applicator using Monte Carlo calculation and 3D dosimetry. Methods: For geometric characterization, we quantified physical dimensions and investigated a systematic difference found to exist between nominal ovoid angle and the angle at which the afterloading buckets fall within the ovoid. For dosimetric characterization, we determined source attenuation through asymmetric gold shielding in the buckets using Monte Carlo simulations and 3D dosimetry. Monte Carlo code MCNP5 was used to simulate 1.5 × 10⁹ photon histories from a ¹³⁷Cs source placed in the bucket to achieve statistical uncertainty of 1% at a 6 cm distance. For 3D dosimetry, the distribution about an unshielded source was first measured to evaluate the system for ¹³⁷Cs, after which the distribution was measured about sources placed in each bucket. Cylindrical PRESAGE® dosimeters (9.5 cm diameter, 9.2 cm height) with a central channel bored for source placement were supplied by Heuris Inc. The dosimeters were scanned with the Duke Large field of view Optical CT-Scanner before and after delivering a nominal dose at 1 cm of 5–8 Gy. During irradiation the dosimeter was placed in a water phantom to provide backscatter. Optical CT scan time lasted 15 min during which 720 projections were acquired at 0.5° increments, and a 3D distribution was reconstructed with a (0.05 cm)³ isotropic voxel size. The distributions about the buckets were used to calculate a 3D distribution of transmission rate through the bucket, which was applied to a clinical CT-based T&O implant plan. Results: The systematic difference in bucket angle relative to the nominal ovoid angle (105°) was 3.1°–4.7°. A systematic difference in bucket angle of 1°, 5°, and 10° caused a 1% ± 0.1%, 1.7% ± 0.4%, and 2.6% ± 0.7% increase in rectal dose, respectively, with smaller effect to dose to

  8. Commissioning a CT-compatible LDR tandem and ovoid applicator using Monte Carlo calculation and 3D dosimetry

    SciTech Connect

    Adamson, Justus; Newton, Joseph; Yang Yun; Steffey, Beverly; Cai, Jing; Adamovics, John; Oldham, Mark; Chino, Junzo; Craciunescu, Oana

    2012-07-15

    Purpose: To determine the geometric and dose attenuation characteristics of a new commercially available CT-compatible LDR tandem and ovoid (T&O) applicator using Monte Carlo calculation and 3D dosimetry. Methods: For geometric characterization, we quantified physical dimensions and investigated a systematic difference found to exist between nominal ovoid angle and the angle at which the afterloading buckets fall within the ovoid. For dosimetric characterization, we determined source attenuation through asymmetric gold shielding in the buckets using Monte Carlo simulations and 3D dosimetry. Monte Carlo code MCNP5 was used to simulate 1.5 × 10⁹ photon histories from a ¹³⁷Cs source placed in the bucket to achieve statistical uncertainty of 1% at a 6 cm distance. For 3D dosimetry, the distribution about an unshielded source was first measured to evaluate the system for ¹³⁷Cs, after which the distribution was measured about sources placed in each bucket. Cylindrical PRESAGE® dosimeters (9.5 cm diameter, 9.2 cm height) with a central channel bored for source placement were supplied by Heuris Inc. The dosimeters were scanned with the Duke Large field of view Optical CT-Scanner before and after delivering a nominal dose at 1 cm of 5–8 Gy. During irradiation the dosimeter was placed in a water phantom to provide backscatter. Optical CT scan time lasted 15 min during which 720 projections were acquired at 0.5° increments, and a 3D distribution was reconstructed with a (0.05 cm)³ isotropic voxel size. The distributions about the buckets were used to calculate a 3D distribution of transmission rate through the bucket, which was applied to a clinical CT-based T&O implant plan. Results: The systematic difference in bucket angle relative to the nominal ovoid angle (105°) was 3.1°–4.7°. A systematic difference in bucket angle of 1°, 5°, and

  9. TART98: a coupled neutron-photon, 3-D, combinatorial geometry, time-dependent Monte Carlo transport code

    SciTech Connect

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, three-dimensional, combinatorial geometry, time-dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.

  10. The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data

    NASA Astrophysics Data System (ADS)

    Ilic, Radovan D.; Spasic-Jokic, Vesna; Belicev, Petar; Dragovic, Milos

    2005-03-01

    This paper describes the application of the SRNA Monte Carlo package to proton transport simulations in complex geometries and different material compositions. The SRNA package was developed for 3D dose distribution calculations in proton therapy and dosimetry and is based on the theory of multiple scattering. The decay of proton-induced compound nuclei was simulated by the Russian MSDM model and by our own model using ICRU 63 data. The package consists of two codes: SRNA-2KG, which simulates proton transport in combinatorial geometry, and SRNA-VOX, which uses voxelized geometry built from CT data, with conversion of the Hounsfield numbers to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of proton beam characterization by a multi-layer Faraday cup, the spatial distribution of positron emitters obtained by the SRNA-2KG code, and the intercomparison of computational codes in radiation dosimetry indicate the immediate applicability of Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, and the results of numerical experiments with proton beams to obtain 3D dose distributions in the eye and breast tumour.

  11. The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data.

    PubMed

    Ilić, Radovan D; Spasić-Jokić, Vesna; Belicev, Petar; Dragović, Milos

    2005-03-01

    This paper describes the application of the SRNA Monte Carlo package to proton transport simulations in complex geometries and different material compositions. The SRNA package was developed for 3D dose distribution calculations in proton therapy and dosimetry and is based on the theory of multiple scattering. The decay of proton-induced compound nuclei was simulated by the Russian MSDM model and by our own model using ICRU 63 data. The package consists of two codes: SRNA-2KG, which simulates proton transport in combinatorial geometry, and SRNA-VOX, which uses voxelized geometry built from CT data, with conversion of the Hounsfield numbers to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of proton beam characterization by a multi-layer Faraday cup, the spatial distribution of positron emitters obtained by the SRNA-2KG code, and the intercomparison of computational codes in radiation dosimetry indicate the immediate applicability of Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, and the results of numerical experiments with proton beams to obtain 3D dose distributions in the eye and breast tumour. PMID:15798273
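
    The Hounsfield-number conversion mentioned above is often approximated by a piecewise-linear density ramp. The Python sketch below uses invented break points for illustration; the actual SRNA-VOX calibration (and its assignment of elemental compositions) is not reproduced here.

        import numpy as np

        def hounsfield_to_density(hu):
            """Map CT numbers to mass density (g/cm^3) with a crude
            two-segment linear ramp; break points are illustrative only."""
            hu = np.asarray(hu, dtype=float)
            density = np.where(hu < 0.0,
                               1.0 + hu / 1000.0,   # air/lung/adipose branch
                               1.0 + hu / 1600.0)   # soft-tissue/bone branch
            return np.clip(density, 1.2e-3, None)   # floor at air density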

  12. Parametric 3D Atmospheric Reconstruction in Highly Variable Terrain with Recycled Monte Carlo Paths and an Adapted Bayesian Inference Engine

    NASA Technical Reports Server (NTRS)

    Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.

    2012-01-01

    We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low-resolution noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms best described as "multi-pixel" techniques that necessarily call for a 3D radiative transfer model (though demonstrated here in 2D); they can be added to conventional ones that typically exploit multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology adaptively exploits, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.

  13. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    NASA Astrophysics Data System (ADS)

    Bergmann, Ryan

    Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous-energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous-energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in both criticality and fixed-source modes. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
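
    The remapping-vector idea can be sketched in a few lines of Python; numpy's stable argsort stands in for WARP's parallel radix sort, and the data layout is an assumption for illustration.

        import numpy as np

        def build_remapping_vector(reaction_types):
            """Sort particle indices by sampled reaction type so consecutive
            GPU threads process the same reaction and fetch the same data.
            Returns the order plus the start offset of each reaction block,
            so a kernel for reaction r runs on order[start[r]:next start]."""
            order = np.argsort(reaction_types, kind="stable")
            kinds, starts = np.unique(reaction_types[order], return_index=True)
            return order, dict(zip(kinds.tolist(), starts.tolist()))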

  14. Uncertainty analysis of penicillin V production using Monte Carlo simulation.

    PubMed

    Biwer, Arno; Griffith, Steve; Cooney, Charles

    2005-04-20

    Uncertainty and variability affect economic and environmental performance in the production of biotechnology and pharmaceutical products. However, commercial process simulation software typically provides analysis that assumes deterministic rather than stochastic process parameters and is thus not capable of dealing with the complexities created by the variance that arises in the decision-making process. Using the production of penicillin V as a case study, this article shows how uncertainty can be quantified and evaluated. The first step is construction of a process model, along with analysis of its cost structure and environmental impact. The second step is identification of the uncertain variables and determination of their probability distributions based on available process and literature data. Finally, Monte Carlo simulations are run to see how these uncertainties propagate through the model and affect key economic and environmental outcomes. Thus, the overall variation of these objective functions is quantified, the technical, supply-chain, and market parameters that contribute most to the variance are identified, and the differences between the economic and ecological evaluations are analyzed. In our case-study analysis, we show that the final penicillin and biomass concentrations in the fermenter contribute the most to the variance of both unit production cost and environmental impact. The penicillin selling price dominates the variance of the return on investment, as well as the variance of other revenue-dependent parameters. PMID:15742389
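
    The propagation step is conceptually simple, as the toy Python model below shows; the distributions, numbers and cost formula are invented placeholders, not the case-study data (note that 1 g/L equals 1 kg/m³, so titer times broth volume gives kilograms of product).

        import numpy as np

        def unit_cost_uncertainty(n_samples=100000, seed=0):
            """Propagate assumed input distributions through a toy unit-cost
            model and report the 5th/50th/95th percentiles in $/kg."""
            rng = np.random.default_rng(seed)
            titer = rng.normal(50.0, 5.0, n_samples)            # g/L
            batch_cost = rng.triangular(0.8e5, 1.0e5, 1.3e5,
                                        n_samples)              # $/batch
            volume = 100.0                                      # m^3 of broth
            unit_cost = batch_cost / (titer * volume)           # $/kg product
            return np.percentile(unit_cost, [5, 50, 95])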

  15. Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations

    SciTech Connect

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2012-06-15

    Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB® code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes, composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB® code was able to grow semi-realistic cell distributions (≈2 × 10⁸ cells in 1 cm³) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits, including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow a more complete picture of cell damage to be developed.

  16. Spectrum simulation of rough and nanostructured targets from their 2D and 3D image by Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Schiettekatte, François; Chicoine, Martin

    2016-03-01

    Corteo is a program that implements the Monte Carlo (MC) method to simulate ion beam analysis (IBA) spectra for several techniques by following the ion trajectories until a sufficiently large fraction of them reach the detector to generate a spectrum. Hence, it fully accounts for effects such as multiple scattering (MS). Here, a version of Corteo is presented where the target can be a 2D or 3D image. This image can be derived from micrographs in which the different compounds are identified, therefore bringing extra information into the solution of an IBA spectrum, and potentially constraining the solution significantly. The image intrinsically includes many details, such as the actual surface or interfacial roughness, or the actual shape and distribution of nanostructures. This can, for example, lead to the unambiguous identification of the stoichiometry of structures in a layer, or at least to better constraints on their composition. Because MC computes the ion trajectories in detail, it accurately simulates many aspects of the process, such as ions coming back into the target after leaving it (re-entry), as well as ions passing through a variety of nanostructure shapes and orientations. We show, for example, how, as the ions' angle of incidence becomes shallower than the inclination distribution of a rough surface, this process tends to make the effective roughness smaller in a comparable 1D simulation (i.e., a narrower thickness distribution in a comparable slab simulation). Also, in ordered nanostructures, target re-entry can lead to replications of a peak in a spectrum. In addition, the bitmap description of the target can be used to simulate depth profiles such as those resulting from ion implantation, diffusion, and intermixing. Other improvements to Corteo include the possibility to interpolate the cross-section in angle-energy tables, and the generation of energy-depth maps.

  17. Time series analysis of Monte Carlo neutron transport calculations

    NASA Astrophysics Data System (ADS)

    Nease, Brian Robert

    A time-series-based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental-mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order-one (AR(1)) process. Proof is provided that the stationary MC process is linear to a first-order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the fission matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered, since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
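
    The final estimator is strikingly compact: once the fission source is projected onto a fixed spatial pattern generation by generation, the lag-1 autocorrelation of that scalar series estimates the eigenvalue ratio. A Python sketch under these assumptions:

        import numpy as np

        def eigenvalue_ratio_from_series(projected_source):
            """Lag-1 autocorrelation of a stationary scalar series obtained by
            projecting the generation-to-generation fission source onto a mode
            pattern; under the AR(1) model this estimates k_i / k_0."""
            x = np.asarray(projected_source, dtype=float)
            x = x - x.mean()
            return float(np.dot(x[1:], x[:-1]) / np.dot(x, x))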

  18. Fast Monte Carlo for ion beam analysis simulations

    NASA Astrophysics Data System (ADS)

    Schiettekatte, François

    2008-04-01

    A Monte Carlo program for the simulation of ion beam analysis data is presented. It combines four main features: (i) ion slowdown is computed separately from the main scattering/recoil event, which is directed towards the detector. (ii) A virtual detector, that is, a detector larger than the actual one, can be used, followed by trajectory correction. (iii) For each collision during ion slowdown, scattering angle components are extracted from tables. (iv) Tables of scattering angle components, stopping power and energy straggling are indexed using the binary representation of floating point numbers, which allows logarithmic distribution of these tables without computing logarithms to access them. The tables are sufficiently fine-grained that interpolation is not necessary. Ion slowdown computation thus avoids trigonometric, inverse and transcendental function calls and, as much as possible, divisions. All these improvements make possible the computation of 10⁷ collisions/s on current PCs. Results for transmitted ions of several masses in various substrates compare well with those obtained using SRIM-2006 in terms of both angular and energy distributions, as long as a sufficiently large number of collisions is considered for each ion. Examples of simulated spectra show good agreement with experimental data, although a large detector rather than the virtual detector has to be used to properly simulate background signals that are due to plural collisions. The program, written in standard C, is open-source and distributed under the terms of the GNU General Public License.
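
    Point (iv) is the unusual trick here, and it is easy to demonstrate: interpreting a positive float32's bit pattern as an integer and keeping the exponent plus a few mantissa bits yields an index that is (piecewise-linearly) logarithmic in the value. The bit widths below are illustrative, not Corteo's actual table layout.

        import numpy as np

        def float_bits_index(values, kept_mantissa_bits=4):
            """Index a logarithmically spaced table straight from the IEEE-754
            representation of positive float32 values: the exponent field
            selects the octave and the top mantissa bits subdivide it, so no
            logarithm is ever evaluated."""
            bits = np.asarray(values, dtype=np.float32).view(np.uint32)
            return bits >> np.uint32(23 - kept_mantissa_bits)

    Consecutive indices then correspond to values a roughly constant ratio apart, which is why a table built on this index can be fine-grained enough to skip interpolation.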

  19. Monte Carlo analysis of localization errors in magnetoencephalography

    SciTech Connect

    Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.

    1989-01-01

    In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least-squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Traditionally, this has been done with simulation studies based on idealized measurement setups. Although these kinds of simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model-fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways, such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
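
    The core of such an analysis fits in a short, generic Python sketch (requires scipy): perturb the measurements with noise drawn from their known statistics, re-fit, and summarize the scatter of the fitted parameters. The dipole field model itself is abstracted behind model(params) and is not specified here.

        import numpy as np
        from scipy.optimize import least_squares

        def fit_error_monte_carlo(model, fitted_params, measurements,
                                  noise_sigma, n_trials=500, seed=0):
            """Monte Carlo error analysis of a nonlinear source fit: returns
            the covariance of the re-fitted parameters over noise realizations.
            model(params) maps source parameters to predicted field values."""
            rng = np.random.default_rng(seed)
            refits = np.empty((n_trials, len(fitted_params)))
            for t in range(n_trials):
                noisy = measurements + rng.normal(0.0, noise_sigma,
                                                  size=measurements.shape)
                result = least_squares(lambda p: model(p) - noisy, fitted_params)
                refits[t] = result.x
            return np.cov(refits, rowvar=False)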

  20. Quasi Monte Carlo-based Isotropic Distribution of Gradient Directions for Improved Reconstruction Quality of 3D EPR Imaging

    PubMed Central

    Ahmad, Rizwan; Deng, Yuanmu; Vikram, Deepti S.; Clymer, Bradley; Srinivasan, Parthasarathy; Zweier, Jay L.; Kuppusamy, Periannan

    2007-01-01

    In continuous wave (CW) electron paramagnetic resonance imaging (EPRI), high quality of the reconstructed image along with fast and reliable data acquisition is highly desirable for many biological applications. An accurate representation of a uniform distribution of projection data is necessary to ensure high reconstruction quality. Current techniques for data acquisition suffer from nonuniformities or local anisotropies in the distribution of projection data and present a poor approximation of a truly uniform and isotropic distribution. In this work, we have implemented a technique based on the quasi-Monte Carlo method to acquire projections with a more uniform and isotropic distribution of data over a 3D acquisition space. The proposed technique exhibits improvements in reconstruction quality in terms of both mean-square error and visual judgment. The effectiveness of the suggested technique is demonstrated using computer simulations and 3D EPRI experiments. The technique is robust and exhibits consistent performance for different object configurations and orientations. PMID:17095271
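
    One simple quasi-Monte Carlo construction of near-isotropic directions, offered here only as an illustration (the paper's exact construction may differ), maps a 2D Halton sequence onto the sphere with an area-preserving transformation:

        import numpy as np

        def halton(n, base):
            """First n points of the 1D Halton sequence for a prime base."""
            out = np.zeros(n)
            for i in range(n):
                f, r, k = 1.0, 0.0, i + 1
                while k > 0:
                    f /= base
                    r += f * (k % base)
                    k //= base
                out[i] = r
            return out

        def qmc_sphere_directions(n):
            """Low-discrepancy unit vectors: z uniform in [-1, 1] and azimuth
            uniform in [0, 2*pi) preserve area on the sphere."""
            u, v = halton(n, 2), halton(n, 3)
            z = 1.0 - 2.0 * u
            phi = 2.0 * np.pi * v
            s = np.sqrt(1.0 - z * z)
            return np.stack([s * np.cos(phi), s * np.sin(phi), z], axis=1)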

  1. A 3D photon superposition/convolution algorithm and its foundation on results of Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Ulmer, W.; Pyyry, J.; Kaissl, W.

    2005-04-01

    Based on previous publications on a triple-Gaussian analytical pencil beam model and on Monte Carlo calculations using the Monte Carlo codes GEANT-Fluka (versions 95, 98, 2002) and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account through the electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab geometry was used as a phantom model to test computed results against measurements. An essential result is the existence of additional dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes ≤5.5 cm² and densities ≤0.25 g cm⁻³, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung or cavities. This work has partially been presented at WC 2003, Sydney.
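
    A sketch of the kernel at the heart of such a model: a radial triple-Gaussian pencil-beam profile, which the full algorithm superposes over divergent pencil beams weighted by CT-derived electron densities. The weights and widths below are placeholders, not the fitted parameters of the paper.

        import numpy as np

        def triple_gaussian_kernel(r, weights=(0.6, 0.3, 0.1),
                                   sigmas=(0.2, 0.6, 2.0)):
            """Weighted sum of three normalized 2D Gaussians of increasing
            width (cm), evaluated at radial distances r from the beam axis."""
            r = np.asarray(r, dtype=float)
            w = np.asarray(weights) / np.sum(weights)
            s = np.asarray(sigmas)
            parts = (w[:, None]
                     * np.exp(-r[None, :] ** 2 / (2.0 * s[:, None] ** 2))
                     / (2.0 * np.pi * s[:, None] ** 2))
            return parts.sum(axis=0)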

  2. Monte-Carlo Application for Nondestructive Nuclear Waste Analysis

    NASA Astrophysics Data System (ADS)

    Carasco, C.; Engels, R.; Frank, M.; Furletov, S.; Furletova, J.; Genreith, C.; Havenith, A.; Kemmerling, G.; Kettler, J.; Krings, T.; Ma, J.-L.; Mauerhofer, E.; Neike, D.; Payan, E.; Perot, B.; Rossbach, M.; Schitthelm, O.; Schumann, M.; Vasquez, R.

    2014-06-01

    Radioactive waste has to undergo a process of quality checking in order to verify its conformance with national regulations prior to its transport, intermediate storage and final disposal. Within the quality checking of radioactive waste packages, non-destructive assays are required to characterize their radio-toxic and chemo-toxic contents. The Institute of Energy and Climate Research - Nuclear Waste Management and Reactor Safety of the Forschungszentrum Jülich develops, in the framework of cooperation projects, nondestructive analytical techniques for the routine characterization of radioactive waste packages at industrial scale. During the research and development phase, Monte Carlo techniques are used to simulate the transport of particles, especially photons, electrons and neutrons, through matter and to obtain the response of detection systems. The radiological characterization of low- and intermediate-level radioactive waste drums is performed by segmented γ-scanning (SGS). To precisely and accurately reconstruct the isotope-specific activity content in waste drums from SGS measurements, an innovative method called SGSreco was developed. The Geant4 code was used to simulate the response of the collimated detection system for waste drums with different activity and matrix configurations. These simulations allow a far more detailed optimization, validation and benchmarking of SGSreco, since the construction of test drums covering a broad range of activity and matrix properties is time-consuming and cost-intensive. The MEDINA (Multi Element Detection based on Instrumental Neutron Activation) test facility was developed to identify and quantify non-radioactive elements and substances in radioactive waste drums. MEDINA is based on prompt and delayed gamma neutron activation analysis (P&DGNAA) using a 14 MeV neutron generator. MCNP simulations were carried out to study the response of the MEDINA facility in terms of gamma spectra, time dependence of the neutron energy spectrum

  3. Algebraic Monte Carlo procedure reduces statistical analysis time and cost factors

    NASA Technical Reports Server (NTRS)

    Africano, R. C.; Logsdon, T. S.

    1967-01-01

    The algebraic Monte Carlo procedure statistically analyzes performance parameters in large, complex systems. The individual effects of input variables can be isolated, and individual input statistics can be changed without having to repeat the entire analysis.

  4. Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Moser, M.; Reichart, P.; Bergmaier, A.; Greubel, C.; Schiettekatte, F.; Dollinger, G.

    2016-03-01

    Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton-proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their paths through the sample. To first order, there is no angular dependence due to elastic scattering. To second order, a path-length effect due to different energy losses on the paths of the two protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be deconvoluted, depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically to first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) to calculate the depth of a coincidence event depending on the scattering angle. The code takes the individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With Mylar-sandwich targets (Si, Fe, Ge) more than 100 μm thick, we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with a depth accuracy of about 1% of the sample thickness.

  5. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
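
    As a concrete taste of the "random sampling" topic such notes cover, the textbook trick for sampling a particle's distance to its next collision inverts the exponential distribution; a minimal Python sketch with an illustrative cross section:

        import numpy as np

        rng = np.random.default_rng(1)
        sigma_t = 0.5  # total macroscopic cross section in 1/cm (illustrative)

        # Invert the exponential CDF: s = -ln(xi) / Sigma_t with xi ~ U(0,1).
        xi = rng.random(1_000_000)
        s = -np.log(xi) / sigma_t

        # The sample mean approaches the mean free path 1/Sigma_t = 2 cm.
        print(s.mean())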

  6. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    SciTech Connect

    Pecchia, M.; D'Auria, F.; Mazzantini, O.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes to perform realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux in order to validate the RELAP5-3D/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for the analysis of selected transients from Chapter 15 of the Atucha-2 FSAR. (authors)

  7. Analytical band Monte Carlo analysis of electron transport in silicene

    NASA Astrophysics Data System (ADS)

    Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.

    2016-06-01

    An analytical band Monte Carlo (AMC) model with linear energy band dispersion has been developed to study electron transport in suspended silicene and silicene on an aluminium oxide (Al2O3) substrate. We calibrated our model against full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we find that the combined effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility to about 400 cm² V⁻¹ s⁻¹, beyond which it is relatively insensitive to further changes in substrate charge impurities and surface optical phonons. We also find that the further reduction of mobility to ∼100 cm² V⁻¹ s⁻¹ demonstrated experimentally by Tao et al (2015 Nat. Nanotechnol. 10 227) can only be explained by the renormalization of the Fermi velocity due to interaction with the Al2O3 substrate.

  8. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
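
    The abstract does not spell out the algorithm; one plausible reading of a fault-tree/Monte Carlo hybrid is to enumerate rare failure combinations from the fault tree, run conditional Monte Carlo inside each branch, and weight the conditional risks by the branch probabilities instead of waiting for failures to occur at random. A heavily hedged Python sketch with invented numbers (single-failure branches only):

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical component failure probabilities (fault-tree leaves).
        p_fail = {"transponder": 1e-4, "pilot_visual": 1e-2, "conflict_det": 1e-3}

        def conditional_risk(failed, n=100_000):
            # Stand-in conditional encounter model: risk grows with the
            # number of failed components (pure assumption).
            base = 1e-3 * (1 + 10 * len(failed))
            return (rng.random(n) < base).mean()

        # Enumerate failure branches; Monte Carlo runs only inside each branch.
        total = sum(p * conditional_risk({comp}) for comp, p in p_fail.items())
        print("weighted risk estimate:", total)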

  9. Comparison of basis functions for 3D PET reconstruction using a Monte Carlo system matrix

    NASA Astrophysics Data System (ADS)

    Cabello, Jorge; Rafecas, Magdalena

    2012-04-01

    In emission tomography, iterative statistical methods are accepted as the reconstruction algorithms that achieve the best image quality. The accuracy of these methods relies partly on the quality of the system response matrix (SRM) that characterizes the scanner. The more physical phenomena included in the SRM, the higher the SRM quality, and therefore the higher the image quality obtained from the reconstruction process. High-resolution small animal scanners contain as many as 10³-10⁴ small crystal pairs, while the field of view (FOV) is divided into hundreds of thousands of small voxels. These two characteristics have a significant impact on the number of elements to be calculated in the SRM. Monte Carlo (MC) methods have gained popularity as a way of calculating the SRM, due to the increased accuracy achievable, at the cost of introducing some statistical noise and long simulation times. In the work presented here the SRM is calculated using MC methods exploiting the cylindrical symmetries of the scanner, significantly reducing both the simulation time necessary to calculate an SRM of high statistical quality and the storage space required. The use of cylindrical symmetries makes polar voxels a convenient basis function. Alternatively, spherically symmetric basis functions offer improved noise properties compared to cubic and polar basis functions. The quality of images reconstructed using polar voxels, spherically symmetric basis functions on a polar grid, cubic voxels, and post-reconstruction filtered polar and cubic voxels is compared from a noise and spatial resolution perspective. This study demonstrates that polar voxels perform as well as cubic voxels while reducing the simulation time necessary to calculate the SRM and the disk space necessary to store it. Results showed that spherically symmetric functions outperform polar and cubic basis functions in terms of noise properties, at the cost of slightly degraded spatial resolution, larger SRM file size and longer

  10. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    NASA Astrophysics Data System (ADS)

    Bergmann, Ryan

    Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to implement a continuous energy Monte Carlo neutron transport algorithm efficiently on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
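
    On the remapping idea: instead of moving particle records, a vector of indices is sorted by reaction type each iteration so that threads working on one reaction read contiguous entries. A CPU-side sketch of the bookkeeping in Python, with numpy's stable argsort standing in for the GPU radix sort:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 8
        reaction = rng.integers(0, 3, n)  # 0=scatter, 1=capture, 2=fission
        energy = rng.random(n)            # particle data itself stays in place

        # Remapping vector: sort indices by reaction type, not the data.
        remap = np.argsort(reaction, kind="stable")

        # Workers for one reaction type read a contiguous slice of `remap`
        # and gather just the particle data they need.
        for rxn in range(3):
            idx = remap[reaction[remap] == rxn]
            print(f"reaction {rxn}: particles {idx}, energies {energy[idx]}")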

  11. Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis

    SciTech Connect

    Heo, W.; Kim, W.; Kim, Y.; Yun, S.

    2013-07-01

    A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5, and a high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated by MCNP5 and by TRANSX/TWODANT using the homogeneous core model were compared, and both were applied in the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross section data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model: the cross section data were generated using MCNP5, and the k_eff and core power distribution were calculated using the DIF3D finite difference code with 54 triangles per hexagonal assembly. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k_eff, the 9-group MCNP5/DIF3D calculation shows a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
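
    The paper quotes no formulas; for orientation, the flux-weighted group collapse that a collision estimator approximates is, in our notation rather than the authors',

        $$\sigma_{x,g} \;=\; \frac{\int_{E_g}^{E_{g-1}} \sigma_x(E)\,\phi(E)\,dE}{\int_{E_g}^{E_{g-1}} \phi(E)\,dE},$$

    with the Monte Carlo reaction-rate tally supplying the numerator and the flux tally the denominator for each of the 9 groups.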

  12. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease-of-use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
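
    The abstract does not document pyNSMC's API, so the following is a generic sketch of the null-space Monte Carlo draw itself, not of the module: random parameter vectors are projected onto the null space of the Jacobian so that every realization preserves the calibrated fit (all names and sizes invented).

        import numpy as np

        rng = np.random.default_rng(4)

        J = rng.normal(size=(20, 100))  # stand-in Jacobian: 20 obs, 100 params
        p_cal = rng.normal(size=100)    # calibrated parameter vector (assumed)

        # Right singular vectors beyond rank(J) span the null space.
        _, s, Vt = np.linalg.svd(J)
        null_basis = Vt[len(s):]        # shape (80, 100)

        # Realization = calibration + random null-space combination, so that
        # J @ (p_real - p_cal) ~ 0 and the fit to observations is preserved.
        z = rng.normal(size=null_basis.shape[0])
        p_real = p_cal + null_basis.T @ z
        print(np.allclose(J @ (p_real - p_cal), 0.0, atol=1e-8))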

  13. Validation of fluence-based 3D IMRT dose reconstruction on a heterogeneous anthropomorphic phantom using Monte Carlo simulation.

    PubMed

    Nakaguchi, Yuji; Ono, Takeshi; Maruyama, Masato; Nagasue, Nozomu; Shimohigashi, Yoshinobu; Kai, Yudai

    2015-01-01

    In this study, we evaluated the performance of a three-dimensional (3D) dose verification system, COMPASS version 3, which has dedicated beam models and a dose calculation engine. It was possible to reconstruct 3D dose distributions in the patient anatomy based on the fluence measured with the MatriXX 2D array. The COMPASS system was compared with Monte Carlo simulation (MC), glass rod dosimeters (GRD), and 3DVH, using an anthropomorphic phantom, for intensity-modulated radiation therapy (IMRT) dose verification in clinical neck cases. The GRD measurements agreed with MC within 5% at most measurement points. In addition, most points for COMPASS and 3DVH also agreed with MC within 5%. The COMPASS system showed better results than 3DVH for dose profiles owing to individual adjustments, such as beam modeling for each linac. Regarding the dose-volume histograms, there were no large differences between MC, the analytical anisotropic algorithm (AAA) in the Eclipse treatment planning system (TPS), 3DVH, and the COMPASS system. However, AAA slightly underestimated the dose to the clinical target volume and the right parotid, reflecting limitations in its dose calculation accuracy. Our results indicate that the COMPASS system offers highly accurate 3D dose calculation for clinical IMRT quality assurance. The COMPASS system will also be useful as a commissioning tool for the TPS in routine clinical practice. PMID:25679177

  14. Active neutron multiplicity analysis and Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Krick, M. S.; Ensslin, N.; Langner, D. G.; Miller, M. C.; Siebelist, R.; Stewart, J. E.; Ceo, R. N.; May, P. K.; Collins, L. L., Jr.

    Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined.

  15. Analysis of real-time networks with monte carlo methods

    NASA Astrophysics Data System (ADS)

    Mauclair, C.; Durrieu, G.

    2013-12-01

    Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to use them optimally and at lower cost. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, because the situations that lead to worst-case scenarios are rare. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.
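
    A minimal version of the suggested approach, with invented parameters: randomize the phase offsets of the asynchronous emitters, replay the schedule, and record the end-to-end delay of the packet of interest; the empirical distribution then shows how far typical traffic stays from the analytic worst-case bound.

        import numpy as np

        rng = np.random.default_rng(5)
        period, tx_time, n_flows, n_trials = 10.0, 1.0, 5, 100_000

        delays = np.empty(n_trials)
        for k in range(n_trials):
            # Random phase offsets model the asynchrony between emitters.
            offsets = rng.uniform(0.0, period, n_flows)
            # Crude FIFO link model (assumption): our packet arrives at t=0
            # and waits behind every frame emitted just before it.
            queued = np.sum(offsets < tx_time)
            delays[k] = (queued + 1) * tx_time
        print("mean delay:", delays.mean(), "max observed:", delays.max())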

  16. IM3D: A parallel Monte Carlo code for efficient simulations of primary radiation displacements and damage in 3D geometry

    PubMed Central

    Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju

    2015-01-01

    SRIM-like codes have limitations in describing general 3D geometries, for modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) method for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ∼10² times faster in serial execution and >10⁴ times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the “Quick Kinchin-Pease” and “Full Cascades” options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, the limitation of the displacements per atom (DPA) unit in quantifying radiation damage (such as inadequacy in quantifying degree of chemical mixing), are discussed. PMID:26658477

  17. IM3D: A parallel Monte Carlo code for efficient simulations of primary radiation displacements and damage in 3D geometry

    NASA Astrophysics Data System (ADS)

    Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju

    2015-12-01

    SRIM-like codes have limitations in describing general 3D geometries, for modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) method for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ∼10² times faster in serial execution and >10⁴ times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the “Quick Kinchin-Pease” and “Full Cascades” options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, the limitation of the displacements per atom (DPA) unit in quantifying radiation damage (such as inadequacy in quantifying degree of chemical mixing), are discussed.

  18. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis

    SciTech Connect

    William R. Martin; John C. Lee

    2009-12-30

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  19. 3D imaging using combined neutron-photon fan-beam tomography: A Monte Carlo study.

    PubMed

    Hartman, J; Yazdanpanah, A Pour; Barzilov, A; Regentova, E

    2016-05-01

    The application of combined neutron-photon tomography for 3D imaging is examined using MCNP5 simulations for objects of simple shapes and different materials. Two-dimensional transmission projections were simulated for fan-beam scans using 2.5 MeV deuterium-deuterium and 14 MeV deuterium-tritium neutron sources, and high-energy X-ray sources of 1 MeV, 6 MeV and 9 MeV. Photons enable assessment of electron density and the related mass density, while neutrons aid in estimating the product of density and the material-specific microscopic cross section; the ratio between the two provides the composition, while CT allows shape evaluation. Using the developed imaging technique, objects and their material compositions have been visualized. PMID:26953978
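
    In simplified one-ray form (our notation, not the paper's), the composition argument reads

        $$\ln T_\gamma = -\mu_m\,\rho\,t, \qquad \ln T_n = -\frac{N_A\,\sigma}{A}\,\rho\,t, \qquad \frac{\ln T_n}{\ln T_\gamma} = \frac{N_A\,\sigma}{A\,\mu_m},$$

    where $T_\gamma$ and $T_n$ are the photon and neutron transmissions along the same ray, $\mu_m$ the photon mass attenuation coefficient, $\sigma$ the microscopic neutron cross section and $A$ the atomic mass: the ratio cancels the density-thickness product and leaves a material-specific signature.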

  20. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    PubMed Central

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2013-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabled values of required sample sizes are shown for some models. PMID:23935262
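
    For the single-mediator case, the recipe reduces to a few lines: repeatedly simulate data from assumed path coefficients, test the indirect effect, and report the rejection rate. The Python sketch below uses a Sobel z-test for brevity (the paper's Mplus syntax is not reproduced here); all values are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)
        a, b, n, reps = 0.3, 0.3, 200, 2000  # assumed paths and sample size

        hits = 0
        for _ in range(reps):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)   # mediator model
            y = b * m + rng.normal(size=n)   # outcome model (no direct path)
            # OLS slopes and standard errors for a-hat and b-hat.
            a_hat = np.cov(x, m)[0, 1] / np.var(x)
            se_a = np.sqrt(np.var(m - a_hat * x, ddof=2) / (n * np.var(x)))
            b_hat = np.cov(m, y)[0, 1] / np.var(m)
            se_b = np.sqrt(np.var(y - b_hat * m, ddof=2) / (n * np.var(m)))
            # Sobel z-test of the indirect effect a*b.
            z = (a_hat * b_hat) / np.sqrt(b_hat**2 * se_a**2 + a_hat**2 * se_b**2)
            hits += abs(z) > 1.96
        print("estimated power:", hits / reps)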

  1. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    SciTech Connect

    Li, Ming; Kang, Zhan; Huang, Xiaobo

    2015-08-28

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of effective hydrogen storage and release methods. Carbon nanotubes (CNTs) were viewed as potential hydrogen containers, but it has been shown that pure CNTs cannot attain the desired target capacity of hydrogen storage. In this paper, we present a numerical study on the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and geometrical parameters, and that the maximum hydrogen uptake can be achieved by a 3D nanostructure with optimal configuration and doping ratio obtained through design optimization. For hydrogen desorption, a mechanical-deformation-driven hydrogen release approach is proposed. Compared with hydrogen desorption induced by temperature or pressure changes, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression without structural failure. The approach is also reversible, since the mechanical deformation in Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.
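
    For reference, the elementary move behind such uptake calculations is the standard grand canonical Monte Carlo insertion, accepted with probability (our notation, not the paper's)

        $$P_{\mathrm{ins}} \;=\; \min\!\left[1,\; \frac{V}{\Lambda^{3}(N+1)}\, e^{\beta\mu}\, e^{-\beta\,\Delta U}\right],$$

    where $N$ is the current number of adsorbate molecules, $V$ the volume, $\mu$ the imposed chemical potential, $\Lambda$ the thermal de Broglie wavelength, and $\Delta U$ the energy change of the trial move; the deletion move uses the reciprocal factor.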

  2. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Li, Ming; Huang, Xiaobo; Kang, Zhan

    2015-08-01

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of effective hydrogen storage and release methods. Carbon nanotubes (CNTs) were viewed as potential hydrogen containers, but it has been shown that pure CNTs cannot attain the desired target capacity of hydrogen storage. In this paper, we present a numerical study on the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and geometrical parameters, and that the maximum hydrogen uptake can be achieved by a 3D nanostructure with optimal configuration and doping ratio obtained through design optimization. For hydrogen desorption, a mechanical-deformation-driven hydrogen release approach is proposed. Compared with hydrogen desorption induced by temperature or pressure changes, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression without structural failure. The approach is also reversible, since the mechanical deformation in Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.

  3. 3D polymer gel dosimetry and Geant4 Monte Carlo characterization of novel needle based X-ray source

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Sozontov, E.; Safronov, V.; Gutman, G.; Strumban, E.; Jiang, Q.; Li, S.

    2010-11-01

    In recent years, there have been a few attempts to develop low-energy x-ray radiation sources as alternatives to the conventional radioisotopes used in brachytherapy. So far, all efforts have centered on designing an interstitial miniaturized x-ray tube. Though direct irradiation of tumors looks very promising, the known insertable miniature x-ray tubes have many limitations: (a) difficulties with focusing and steering the electron beam to the target; (b) the necessity to cool the target to increase x-ray production efficiency; (c) the impracticability of reducing the diameter of the miniaturized x-ray tube below 4 mm (the requirement to decrease the diameter of the x-ray tube and the need for a target cooling system are mutually exclusive); and (d) significant limitations in changing the shape and energy of the emitted radiation. The specific aim of this study is to demonstrate the feasibility of a new concept for an insertable low-energy needle x-ray device based on simulation with the Geant4 Monte Carlo code, and to measure the dose rate distribution of the low-energy (17.5 keV) x-ray radiation with 3D polymer gel dosimetry.

  4. A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.

    2013-12-01

    The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e. cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI), in the framework of the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can treat broadband solar flux calculations, including thermal infrared emission, using the k-distribution parameters of Sekiguchi and Nakajima (2008). In order to construct the 3D cloud field, we tried the following three methods: 1) stochastic clouds generated by randomizing the optical thickness distribution in each layer, together with regularly distributed tilted clouds; 2) numerical simulations by a non-hydrostatic model with a bin cloud microphysics model; and 3) the Minimum cloud Information Deviation Profiling Method (MIDPM), as explained later. As for method 2 (the numerical modeling method), we employed numerical simulation results for Californian summer stratus clouds simulated by a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012), with horizontal (vertical) grid spacings of 100 m (20 m) and 300 m (20 m) in a domain of 30 km (x), 30 km (y), 1.5 km (z) and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of 100 m horizontal resolution, the regionally averaged cloud optical thickness (COT) and its standard deviation were 3.0 and 4.3 for the pristine case and 8.5 and 7.4 for the polluted case, respectively. In the MIDPM method, we first construct a library of pairs of observed vertical profiles from the active sensors and collocated imager products at the nadir footprint, i.e. spectral imager radiances, cloud optical thickness (COT), effective particle radius (RE) and cloud top temperature (Tc). We then select a

  5. Discrete ordinates-Monte Carlo coupling: A comparison of techniques in NERVA radiation analysis

    NASA Technical Reports Server (NTRS)

    Lindstrom, D. G.; Normand, E.; Wilcox, A. D.

    1972-01-01

    In the radiation analysis of the NERVA nuclear rocket system, two-dimensional discrete ordinates calculations are sufficient to provide detail in the pressure vessel and reactor assembly. Other parts of the system, however, require three-dimensional Monte Carlo analyses. To use these two methods in a single analysis, a means of coupling was developed whereby the results of a discrete ordinates calculation can be used to produce source data for a Monte Carlo calculation. Several techniques for producing source detail were investigated. Results of calculations on the NERVA system are compared and limitations and advantages of the coupling techniques discussed.
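
    The coupling step described here can be pictured as sampling Monte Carlo source particles from the discrete-ordinates angular flux on an interface surface. A schematic Python version with invented numbers (a real coupling must also carry energy and spatial detail):

        import numpy as np

        rng = np.random.default_rng(7)

        # Discrete-ordinates output on a coupling plane:
        # angular flux per (cell, ordinate) pair, plus direction cosines.
        psi = np.array([[1.0, 0.2],
                        [0.5, 0.8],
                        [0.1, 0.4]])
        mu = np.array([0.3, 0.9])

        # Outward partial current weights each (cell, ordinate) pair.
        w = psi * mu
        p = (w / w.sum()).ravel()

        # Draw source particles: pick a (cell, ordinate) pair by current,
        # then a uniform position within the chosen cell.
        picks = rng.choice(p.size, size=5, p=p)
        cells, ords = np.unravel_index(picks, psi.shape)
        for c, o, x in zip(cells, ords, rng.random(5)):
            print(f"source particle: cell {c}, mu={mu[o]}, position {x:.2f}")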

  6. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    SciTech Connect

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  7. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    ERIC Educational Resources Information Center

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  8. Determining the Number of Principal Components to Retain via Parallel Analysis: Alternatives to Monte Carlo Analyses.

    ERIC Educational Resources Information Center

    Lautenschlager, Gary J.

    The parallel analysis method for determining the number of components to retain in a principal components analysis has received a recent resurgence of support and interest. However, researchers and practitioners desiring to use this criterion have been hampered by the required Monte Carlo analyses needed to develop the criteria. Two recent…

  9. A Monte Carlo Study of Recovery of Weak Factor Loadings in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Ximenez, Carmen

    2006-01-01

    The recovery of weak factors has been extensively studied in the context of exploratory factor analysis. This article presents the results of a Monte Carlo simulation study of recovery of weak factor loadings in confirmatory factor analysis under conditions of estimation method (maximum likelihood vs. unweighted least squares), sample size,…

  10. Taxometrics, Polytomous Constructs, and the Comparison Curve Fit Index: A Monte Carlo Analysis

    ERIC Educational Resources Information Center

    Walters, Glenn D.; McGrath, Robert E.; Knight, Raymond A.

    2010-01-01

    The taxometric method effectively distinguishes between dimensional (1-class) and taxonic (2-class) latent structure, but there is virtually no information on how it responds to polytomous (3-class) latent structure. A Monte Carlo analysis showed that the mean comparison curve fit index (CCFI; Ruscio, Haslam, & Ruscio, 2006) obtained with 3…

  11. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
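
    One classical result behind "how many runs are necessary": if a requirement demands success probability p demonstrated with confidence C, and all N Monte Carlo runs succeed, the smallest demonstrating N satisfies

        $$N \;\ge\; \frac{\ln(1-C)}{\ln p}, \qquad \text{e.g.}\; p = 0.9973,\; C = 0.90:\; N \ge \frac{\ln 0.10}{\ln 0.9973} \approx 852.$$

    This is the standard zero-failure binomial bound, quoted here only for orientation; it is not necessarily the TP's exact derivation, which also treats consumer risk and failed-run analysis.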

  12. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  13. The Null Space Monte Carlo Uncertainty Analysis of Heterogeneity for Preferential Flow Simulation

    NASA Astrophysics Data System (ADS)

    Ghasemizade, M.; Radny, D.; Schirmer, M.

    2014-12-01

    Preferential flow paths can have a huge impact on the amount and timing of runoff generation, particularly in areas where subsurface flow dominates this process. Many different approaches have been suggested to simulate preferential flow mechanisms; however, the efficiency of such approaches is rarely investigated in a predictive sense. The main reason is that models which simulate preferential flows require many parameters, which can lead to a dramatic increase in model run times, especially for highly nonlinear models that are already demanding. In this research we attempted to simulate the daily recharge values of a weighing lysimeter, including preferential flows, with the 3-D physically based model HydroGeoSphere. To accomplish that, we used the matrix pore concept with varying hydraulic conductivities within the lysimeter to represent heterogeneity. It was assumed that spatially correlated heterogeneity is the main driver triggering preferential flow paths. In order to capture the spatial distribution of hydraulic conductivity values we used pilot points and geostatistical model structures. Since the hydraulic conductivity values at each pilot point function as parameters, the model is highly parameterized. For this reason, we used the robust and newly developed null-space Monte Carlo method to analyze the uncertainty of the model outputs. Results of the uncertainty analysis show that the pilot point method is reliable for representing preferential flow paths.

  14. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    NASA Astrophysics Data System (ADS)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the factors that influence power grid investment capacity, an investment capacity analysis model is built with depreciation cost, sales price and sales quantity, net profit, financing, and the GDP of the secondary industry as model variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.

  15. Monte Carlo verification of point kinetics for safety analysis of nuclear reactors

    SciTech Connect

    Valentine, T.E.; Mihalczo, J.T.

    1995-06-01

    Monte Carlo neutron transport methods can be used to verify the applicability of point kinetics for safety analysis of nuclear reactors. KENO-NR was used to obtain the transfer function of the Advanced Neutron Source reactor and the time delay between the core power production and the external detectors, a parameter of interest to the safety systems design. The good agreement between the Monte Carlo generated transfer function and the point kinetics transfer function validates that the uncommon ANS geometry does not preclude the use of point kinetics in the frequency range that was investigated. Various features of the power spectral densities also demonstrated the applicability of point kinetics. The time delay was obtained from the cross-power spectral density (CPSD) and is approximately 15 ms. These analyses show that frequency analysis can be used experimentally to investigate the validity of the use of point kinetics models in critical experiments or zero power testing of reactors.

  16. Stability analysis and time-step limits for a Monte Carlo Compton-scattering method

    SciTech Connect

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.

    2010-05-20

    A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.

  17. Monte Carlo analysis of voxel resolution of off-axially distributed image sensing system

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Piao, Yongri; Cho, Myungjin

    2016-03-01

    In this paper, we present a generalized framework to analyze the effect of voxel resolution in Monte Carlo simulation of an off-axially distributed image sensing (ODIS) system under fixed resource constraints. Our framework can evaluate the performance of ODIS systems for various sensing parameters, such as the slant angle between the optical axis and the moving direction, the number of cameras, the pixel size, and the distance between the optical axis and the point-source plane. We carry out Monte Carlo simulations based on this framework to evaluate ODIS system performance as a function of these sensing parameters. To the best of our knowledge, this is the first report on quantitative analysis of ODIS systems under fixed resource constraints.

  18. Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission

    NASA Technical Reports Server (NTRS)

    Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  19. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    SciTech Connect

    O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as it is running.

  20. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.

    2004-11-15

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binned Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically.
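
    A hedged sketch of the fitting step with a generic time-series library (statsmodels here; the paper's own estimation procedure may differ): fit a low-order ARMA model to the binned source series and read the dominance ratio off the dominant autoregressive root. The synthetic series below stands in for a real tally.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(8)

        # Stand-in for a binned fission source tallied over cycles: an AR(1)
        # series whose coefficient plays the role of the dominance ratio.
        true_dr, n_cycles = 0.8, 5000
        x = np.zeros(n_cycles)
        for i in range(1, n_cycles):
            x[i] = true_dr * x[i - 1] + rng.normal()

        # ARMA(p, p-1) with p = 2, i.e. ARIMA order (2, 0, 1).
        res = ARIMA(x, order=(2, 0, 1)).fit()
        phi = res.arparams

        # Dominance ratio ~ magnitude of the dominant AR characteristic root.
        roots = np.roots(np.concatenate(([1.0], -phi)))
        print("estimated dominance ratio:", np.abs(roots).max())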

  1. Sintering analysis of sub-micron-sized nickel powders: Kinetic Monte Carlo simulation verified by FIB-SEM reconstruction

    NASA Astrophysics Data System (ADS)

    Hara, Shotaro; Ohi, Akihiro; Shikazono, Naoki

    2015-02-01

    Since the sintering of sub-micron-sized particles is a critical phenomenon affecting the electrochemical performance and reliability of solid oxide fuel cell systems, a better understanding of this microstructure-related process is of great importance. In this study, we show that kinetic Potts Monte Carlo modeling is capable of quantitatively predicting the three-dimensional (3D) microstructure evolution over the entire course of nickel sintering at the sub-micron scale. This is achieved through direct comparison of simulation results with 3D microstructural analysis using focused ion beam-scanning electron microscopy. We show that grain boundary diffusion is the dominant mechanism for densification, while surface diffusion affects coarsening during sub-micron-scale sintering, acting as one of multiple concurrent mechanisms.
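
    For orientation, the elementary move in a Potts-model simulation of microstructure evolution is a local state flip accepted with a Metropolis probability; a toy 2D Python version with illustrative parameters (nothing nickel-specific, and no attempt to reproduce the paper's calibration):

        import numpy as np

        rng = np.random.default_rng(9)
        L, q, kT, steps = 32, 8, 0.7, 50_000  # lattice, states, temperature
        grid = rng.integers(0, q, (L, L))

        def unlike_bonds(g, i, j):
            # Energy = number of unlike nearest-neighbor bonds (periodic).
            s = g[i, j]
            nbrs = (g[(i + 1) % L, j], g[(i - 1) % L, j],
                    g[i, (j + 1) % L], g[i, (j - 1) % L])
            return sum(s != t for t in nbrs)

        for _ in range(steps):
            i, j = rng.integers(0, L, 2)
            old, new = grid[i, j], rng.integers(0, q)
            e0 = unlike_bonds(grid, i, j)
            grid[i, j] = new
            dE = unlike_bonds(grid, i, j) - e0
            # Metropolis acceptance; revert the flip if rejected.
            if dE > 0 and rng.random() >= np.exp(-dE / kT):
                grid[i, j] = old
        print("final unlike-bond fraction:",
              sum(unlike_bonds(grid, i, j) for i in range(L)
                  for j in range(L)) / (4 * L * L))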

  2. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images

    NASA Astrophysics Data System (ADS)

    Botta, F.; Mairani, A.; Hobbs, R. F.; Vergara Gil, A.; Pacilio, M.; Parodi, K.; Cremonesi, M.; Coca Pérez, M. A.; Di Dia, A.; Ferrari, M.; Guerriero, F.; Battistoni, G.; Pedroli, G.; Paganelli, G.; Torres Aroche, L. A.; Sgouros, G.

    2013-11-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10⁸ primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  3. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10⁸ primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  4. MC21 analysis of the nuclear energy agency Monte Carlo performance benchmark problem

    SciTech Connect

    Kelly, D. J.; Sutton, T. M.; Wilson, S. C.

    2012-07-01

    Due to the steadily decreasing cost and wider availability of large-scale computing platforms, there is growing interest in the prospects for the use of Monte Carlo for reactor design calculations that are currently performed using few-group diffusion theory or other low-order methods. To facilitate the monitoring of the progress being made toward the goal of practical full-core reactor design calculations using Monte Carlo, a performance benchmark has been developed and made available through the Nuclear Energy Agency. A first analysis of this benchmark using the MC21 Monte Carlo code was reported in 2010, and several practical difficulties were highlighted. In this paper, a newer version of MC21 that addresses some of these difficulties has been applied to the benchmark. In particular, the confidence-interval-determination method has been improved to eliminate source correlation bias, and a fission-source-weighting method has been implemented to provide a more uniform distribution of statistical uncertainties. In addition, the Forward-Weighted, Consistent-Adjoint-Driven Importance Sampling methodology has been applied to the benchmark problem. Results of several analyses using these methods are presented, as well as results from a very large calculation with statistical uncertainties that approach what is needed for design applications. (authors)

  5. Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories

    NASA Technical Reports Server (NTRS)

    Olds, John; Way, David

    2001-01-01

    Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approximately every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results. If the results are unacceptable, the current practice is to use an iterative, trial

  6. SU-C-201-06: Utility of Quantitative 3D SPECT/CT Imaging in Patient Specific Internal Dosimetry of 153-Samarium with GATE Monte Carlo Package

    SciTech Connect

    Fallahpoor, M; Abbasi, M; Sen, A; Parach, A; Kalantari, F

    2015-06-15

    Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: 1) generating quantitative 3D images of the radionuclide distribution and attenuation coefficients, and 2) using a reliable dose calculation method based on the activity and attenuation maps. In this research, internal dosimetry for 153-Samarium (153-Sm; gamma: 103 keV, beta: 0.81 MeV) was performed with SPECT-CT images coupled to the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment. A SPECT/CT scan was performed with a Siemens Symbia T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image-degrading factors, including body attenuation, Compton scattering and the collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to an attenuation map. Organ borders were defined by itk-SNAP segmentation of the CT image. GATE was then used for internal dose calculation. The specific absorbed fractions (SAFs) and S-values are reported following the MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs, as expected. The S-value for the lung is the highest after the spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT-CT images and Monte Carlo simulation for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based methods or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning

  7. A Monte Carlo based spent fuel analysis safeguards strategy assessment

    SciTech Connect

    Fensin, Michael L; Tobin, Stephen J; Swinhoe, Martyn T; Menlove, Howard O; Sandoval, Nathan P

    2009-01-01

    assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling-time-dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the assessment will yield adequate knowledge of spent fuel analysis strategies to help the down-select process for other reactor types.

  8. Monte Carlo Neutronics and Thermal Hydraulics Analysis of Reactor Cores with Multilevel Grids

    NASA Astrophysics Data System (ADS)

    Bernnat, W.; Mattes, M.; Guilliard, N.; Lapins, J.; Zwermann, W.; Pasichnyk, I.; Velkov, K.

    2014-06-01

    Power reactors are composed of assemblies with fuel pin lattices or other repeated structures with several grid levels, which can be modeled in detail by Monte Carlo neutronics codes such as MCNP6 using the corresponding lattice options, even for large cores. Except for fresh cores at the beginning of life, there is a varying material distribution due to burnup in the different fuel pins. Additionally, for power states the fuel and moderator temperatures and moderator densities vary according to the power distribution and cooling conditions. Therefore, a coupling of the neutronics code with a thermal hydraulics code is necessary. Depending on the level of detail of the analysis, a very large number of cells with different materials and temperatures must be considered. The assignment of different material properties to all elements of a multilevel grid is very elaborate and may exceed program limits if the standard input procedure is used; therefore, an internal assignment is used that overrides uniform input parameters. The temperature dependency of continuous-energy cross sections, probability tables for the unresolved resonance region, and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. The method is applied with MCNP6 and proven for several full-core reactor models. For the coupling of MCNP6 with thermal hydraulics, appropriate interfaces were developed for the GRS system code ATHLET for liquid coolant and for the IKE thermal hydraulics code ATTICA-3D for gaseous coolant. Examples are shown for different applications: PWRs with square and hexagonal lattices, fast reactors (SFR) with hexagonal lattices, and HTRs with pebble-bed and prismatic lattices.
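
    The interpolation step can be sketched as follows. This is a minimal illustration assuming linear interpolation in sqrt(T) between two tabulated temperatures (Doppler-broadened cross sections vary smoothly with sqrt(T)); production codes use more careful schemes, and all numerical values below are hypothetical:

    ```python
    import numpy as np

    def interp_xs(sigma_lo, sigma_hi, T_lo, T_hi, T):
        """Interpolate pointwise cross sections between two tabulated temperatures,
        linearly in sqrt(T), so that only a coarse temperature grid is needed."""
        w = (np.sqrt(T) - np.sqrt(T_lo)) / (np.sqrt(T_hi) - np.sqrt(T_lo))
        return (1.0 - w) * sigma_lo + w * sigma_hi

    # Cross sections at 750 K from data sets generated at 600 K and 900 K
    # (hypothetical values, barns):
    sigma_600 = np.array([10.2, 8.7, 5.1])
    sigma_900 = np.array([11.0, 9.3, 5.4])
    print(interp_xs(sigma_600, sigma_900, 600.0, 900.0, 750.0))
    ```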

  9. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for (99m)Tc-hynic-Tyr(3)-octreotide Imaging.

    PubMed

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of (99m)Tc-hydrazinonicotinamide (hynic)-Tyr(3)-octreotide as a SPECT radiotracer. (99m)Tc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of (99m)Tc-hynic-Tyr(3)-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed by 4.3% on average for self-irradiation, and by 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used, since it provides more reliable dosimetric results. PMID:27134562

  10. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for 99mTc-hynic-Tyr3-octreotide Imaging

    PubMed Central

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of 99mTc-hydrazinonicotinamide (hynic)-Tyr3-octreotide as a SPECT radiotracer. 99mTc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of 99mTc-hynic-Tyr3-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed by 4.3% on average for self-irradiation, and by 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used, since it provides more reliable dosimetric results. PMID:27134562

  11. 3D Direct Simulation Monte Carlo Modelling of the Inner Gas Coma of Comet 67P/Churyumov-Gerasimenko: A Parameter Study

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Su, C. C.; Marschall, R.; Wu, J. S.; Rubin, M.; Lai, I. L.; Ip, W. H.; Keller, H. U.; Knollenberg, J.; Kührt, E.; Skorov, Y. V.; Thomas, N.

    2016-03-01

    Direct Simulation Monte Carlo (DSMC) is a powerful numerical method to study rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, the investigation of the parameter space in simulations can be time consuming since 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified the extent to which modifications of several parameters influence the 3D flow and gas temperature fields, and have attempted to establish the reliability of inferences about the initial conditions from in situ and remote sensing measurements. A large number of DSMC runs have been completed with varying input parameters. In this work, we present the simulation results and draw conclusions on the sensitivity of the solutions to certain inputs. It is found that, among the cases of water outgassing, the surface production rate distribution is the most influential variable for the flow field.

  12. Monte-Carlo Analysis of the Flavour Changing Neutral Current b → sγ at BaBar

    SciTech Connect

    Smith, D.

    2001-09-01

    The main theme of this thesis is a Monte-Carlo analysis of the rare Flavour Changing Neutral Current (FCNC) decay b → sγ. The analysis develops techniques that could be applied to real data to discriminate between signal and background events, in order to measure the branching ratio of this rare decay using the BaBar detector. Also included in this thesis are a description of the BaBar detector and the work I have undertaken in the development of the electronic data acquisition system for the electromagnetic calorimeter (EMC), a subsystem of the BaBar detector.

  13. Nuclear spectroscopy for in situ soil elemental analysis: Monte Carlo simulations

    SciTech Connect

    Wielopolski L.; Doron, O.

    2012-07-01

    We developed a model to simulate a novel inelastic neutron scattering (INS) system for in situ non-destructive analysis of soil using the standard Monte Carlo Neutron Photon (MCNP5a) transport code. The volumes from which 90%, 95%, and 99% of the total signal are detected were estimated to be 0.23 m³, 0.37 m³, and 0.79 m³, respectively. Similarly, we assessed the instrument's sampling footprint and depths. In addition, we discuss the impact of the carbon depth distribution on the sampled depth.

  14. Markov chain Monte Carlo linkage analysis of a complex qualitative phenotype.

    PubMed

    Hinrichs, A; Lin, J H; Reich, T; Bierut, L; Suarez, B K

    1999-01-01

    We tested a new computer program, LOKI, that implements a reversible jump Markov chain Monte Carlo (MCMC) technique for segregation and linkage analysis. Our objective was to determine whether this software, designed for use with continuously distributed phenotypes, has any efficacy when applied to the discrete disease states of the simulated Mordor data from GAW Problem 1. Although we were able to identify the genomic location for two of the three quantitative trait loci by repeated application of the software, the MCMC sampler experienced significant mixing problems, indicating that the method, as currently formulated in LOKI, was not suitable for the discrete phenotypes in this data set. PMID:10597502

  15. Microlens assembly error analysis for light field camera based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping

    2016-08-01

    This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images affected by the coupling distance error, movement error, and rotation error that can appear during microlens installation. By examining these images, the sub-aperture images, and the refocused images, we found that the images present different degrees of fuzziness and deformation for different microlens assembly errors, while the sub-aperture images present aliasing, obscured regions, and other distortions that result in unclear refocused images.

  16. Bayesian Monte Carlo analysis applied to regional-scale inverse emission modeling for reactive trace gases

    NASA Astrophysics Data System (ADS)

    Deguillaume, L.; Beekmann, M.; Menut, L.

    2007-01-01

    The purpose of this article is the inverse modeling of emissions at regional scale for photochemical applications. The study is performed for the Ile-de-France region over a period of two summers (1998 and 1999). This area represents an ideal framework, since concentrated anthropogenic emissions in the Paris region frequently lead to the formation of urban plumes. The inversion method is based on Bayesian Monte Carlo analysis applied to a regional-scale chemistry transport model, CHIMERE. The method consists of performing a large number of successive simulations with the same model but with a distinct set of model input parameters each time. A posteriori weights are then attributed to individual Monte Carlo simulations by comparing them with observations from the AIRPARIF network: urban NO and O3 concentrations and rural O3 concentrations around the Paris area. For both NO and O3, the observations used to constrain the Monte Carlo simulations are additionally averaged over the time period considered for analysis. The observational constraints strongly reduce the a priori uncertainties in anthropogenic NOx and volatile organic compound (VOC) emissions: (1) the a posteriori probability density function (pdf) for NOx emissions is not modified in its average, but its standard deviation is decreased to around 20% (from 40% a priori); (2) VOC emissions are enhanced (+16%) in the a posteriori pdf, with a standard deviation around 30% (from 40% a priori). Uncertainties in the simulated urban NO, urban O3, and O3 production within the plume are reduced by factors of 3.2, 2.4, and 1.7, respectively.
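
    The Bayesian weighting step can be sketched as follows. This is a minimal illustration in which a one-line linear toy surrogate stands in for the chemistry transport model, and the observation values are invented; a real application would run CHIMERE once per Monte Carlo parameter set:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_runs = 5000

    # A priori: ~40% 1-sigma uncertainty on NOx and VOC emission scaling factors
    nox = rng.normal(1.0, 0.4, n_runs)
    voc = rng.normal(1.0, 0.4, n_runs)

    # Stand-in for the chemistry transport model; obs and sigma_obs are invented.
    o3_model = 60.0 + 15.0 * (voc - 1.0) - 5.0 * (nox - 1.0) + rng.normal(0.0, 1.0, n_runs)
    obs, sigma_obs = 64.0, 5.0

    # A posteriori weights from a Gaussian likelihood against the observation
    w = np.exp(-0.5 * ((o3_model - obs) / sigma_obs) ** 2)
    w /= w.sum()

    mean = np.sum(w * voc)
    std = np.sqrt(np.sum(w * (voc - mean) ** 2))
    print(f"VOC factor: prior 1.00 +/- 0.40 -> posterior {mean:.2f} +/- {std:.2f}")
    ```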

  17. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    NASA Technical Reports Server (NTRS)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the calculated 1σ model uncertainty of 46%, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring, the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs, which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation
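
    As an illustration of the sampling step, a minimal Latin Hypercube sampler is sketched below; the mapping of unit-cube samples to lognormal reaction-rate multipliers, and the 1-sigma factor of 1.2, are assumptions for illustration rather than the GSFC model's actual input distributions:

    ```python
    import numpy as np
    from scipy.stats import norm

    def latin_hypercube(n_samples, n_params, seed=None):
        """Latin Hypercube Sampling on the unit hypercube: each parameter range
        is split into n_samples equal-probability strata, each stratum is sampled
        exactly once, and strata are paired randomly across parameters."""
        rng = np.random.default_rng(seed)
        u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))) / n_samples
        for j in range(n_params):
            u[:, j] = rng.permutation(u[:, j])
        return u

    u = latin_hypercube(419, 3, seed=42)       # 419 parameter sets, as in the paper
    rates = np.exp(norm.ppf(u) * np.log(1.2))  # lognormal rate multipliers, ~20% 1-sigma
    print(rates.shape, rates.mean(axis=0))
    ```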

  18. Comparison between Monte Carlo simulation and measurement with a 3D polymer gel dosimeter for dose distributions in biological samples

    NASA Astrophysics Data System (ADS)

    Furuta, T.; Maeyama, T.; Ishikawa, K. L.; Fukunishi, N.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Hayashi, S.

    2015-08-01

    In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code, reconstructing the elemental composition of the biological sample from its computed tomography images and taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal-edge structure of the R2 distribution to within about 2 mm, which is approximately the same as the voxel size currently used in treatment planning.

  19. Comparison between Monte Carlo simulation and measurement with a 3D polymer gel dosimeter for dose distributions in biological samples.

    PubMed

    Furuta, T; Maeyama, T; Ishikawa, K L; Fukunishi, N; Fukasaku, K; Takagi, S; Noda, S; Himeno, R; Hayashi, S

    2015-08-21

    In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code, reconstructing the elemental composition of the biological sample from its computed tomography images and taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal-edge structure of the R2 distribution to within about 2 mm, which is approximately the same as the voxel size currently used in treatment planning. PMID:26266894

  20. Uncertainty analysis in environmental radioactivity measurements using the Monte Carlo code MCNP5

    NASA Astrophysics Data System (ADS)

    Gallardo, S.; Querol, A.; Ortiz, J.; Ródenas, J.; Verdú, G.; Villanueva, J. F.

    2015-11-01

    High Purity Germanium (HPGe) detectors are widely used for environmental radioactivity measurements due to their excellent energy resolution. Monte Carlo (MC) codes are a useful tool to complement experimental measurements in calibration procedures at the laboratory. However, the efficiency curve of the detector can vary due to uncertainties associated with the measurements. These uncertainties can be classified into several categories: geometrical parameters of the measurement (source-detector distance, volume of the source), properties of the radiation source (radionuclide activity, branching ratio), and detector characteristics (Ge dead layer, active volume, end-cap thickness). The Monte Carlo simulation can also be affected by other kinds of uncertainties, mainly related to cross sections and to the calculation itself. Normally, all these uncertainties are not well known, and a deep analysis is required to determine their effect on the detector efficiency. In this work, the Noether-Wilks formula is used to carry out the uncertainty analysis. A Probability Density Function (PDF) is assigned to each variable involved in the sampling process. The size of the sample is determined from the characteristics of the tolerance intervals by applying the Noether-Wilks formula. The analysis transforms the efficiency curve into a region of possible values within the tolerance intervals. Results show a good agreement between experimental measurements and simulations for two different matrices (water and sand).
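
    The sample-size side of this procedure can be sketched with the first-order Wilks formula for one- and two-sided tolerance intervals (a minimal sketch; the paper may use a higher-order variant):

    ```python
    def wilks_n(gamma=0.95, beta=0.95, two_sided=False):
        """Smallest sample size N such that, with confidence beta, the sample
        extremes bound at least a fraction gamma of the output population
        (first-order Wilks formula)."""
        n = 1
        while True:
            if two_sided:
                conf = 1.0 - gamma**n - n * (1.0 - gamma) * gamma ** (n - 1)
            else:
                conf = 1.0 - gamma**n
            if conf >= beta:
                return n
            n += 1

    print(wilks_n())                # 59, the classic one-sided 95%/95% size
    print(wilks_n(two_sided=True))  # 93 for a two-sided tolerance interval
    ```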

  1. Monte Carlo analysis of uncertainties in the Netherlands greenhouse gas emission inventory for 1990-2004

    NASA Astrophysics Data System (ADS)

    Ramírez, Andrea; de Keizer, Corry; Van der Sluijs, Jeroen P.; Olivier, Jos; Brandes, Laurens

    This paper presents an assessment of the added value of a Monte Carlo analysis of the uncertainties in the Netherlands inventory of greenhouse gases over a Tier 1 analysis. It also examines which parameters contributed the most to the total emission uncertainty and identifies areas of high priority for further improvement of the accuracy and quality of the inventory. The Monte Carlo analysis resulted in an uncertainty range in total GHG emissions of 5.4% in 1990 and 4.1% in 2004 with LUCF, and 5.3% in 1990 and 3.9% in 2004 without LUCF. Uncertainty in the trend was estimated at 4.5%. These values are of the same order of magnitude as those estimated in the Tier 1 analysis. The results show that accounting for correlation among parameters is important; for the Netherlands inventory it has a larger impact on the uncertainty in the trend than on the uncertainty in the total GHG emissions. The main contributors to the overall uncertainty are found to be N2O emissions from agricultural soils, the N2O implied emission factors of nitric acid production, CH4 from managed solid waste disposal on land, and the implied emission factor of CH4 from manure management of cattle.

  2. Wavelet-Monte Carlo Hybrid System for HLW Nuclide Migration Modeling and Sensitivity and Uncertainty Analysis

    SciTech Connect

    Nasif, Hesham; Neyama, Atsushi

    2003-02-26

    This paper presents the results of an uncertainty and sensitivity analysis of the performance of the different barriers of high-level radioactive waste repositories. SUA is a tool to perform uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System (WIRS) model, which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through a repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the values of the output variable, the maximum release rate, in the form of time series, together with the values of the input variables for a set of different simulations (runs), which are realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin Hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influences on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), the diffusion depth, and the water flow rate in the excavation-disturbed zone (EDZ).

  3. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be investigated meaningfully through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  4. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  5. An analysis method for evaluating gradient-index fibers based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Yoshida, S.; Horiuchi, S.; Ushiyama, Z.; Yamamoto, M.

    2011-05-01

    We propose a numerical analysis method for evaluating gradient-index (GRIN) optical fibers using the Monte Carlo method. GRIN optical fibers are widely used in optical information processing and communication applications, such as image scanners, fax machines, optical sensors, and so on. An important factor that determines the performance of a GRIN optical fiber is the modulation transfer function (MTF). The MTF of a fiber is affected by the conditions of the manufacturing process, such as temperature. Experimentally, the MTF is measured using a square-wave chart and is then calculated based on the distribution of output intensity on the chart. In contrast, the conventional computational method evaluates the MTF based on a spot diagram made by an incident point light source, but its results differ greatly from the experimental ones. In this paper, we explain the manufacturing process factors that affect the performance of GRIN optical fibers and a new evaluation method, similar to the experimental system, based on the Monte Carlo method. We verified that measurements of the MTF of a GRIN optical fiber using this method match the experimental results more closely than the conventional method does.

  6. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    SciTech Connect

    Pratama, Cecep; Meilano, Irwan; Nugraha, Andri Dian

    2015-04-24

    The slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the contribution of the slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the greatest influence on the hazard estimate comes from the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimates are sensitive to the fault slip rate, with a resulting seismic hazard uncertainty of about 0.25 g. For a specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 and 0.8465 g, with an uncertainty between 0.0847 and 0.2389 g and a COV between 17.7% and 29.8%.

  7. The timing resolution of scintillation-detector systems: Monte Carlo analysis.

    PubMed

    Choong, Woon-Seng

    2009-11-01

    Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and
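
    A minimal sketch of the photoelectron-based timing model is shown below, assuming a bi-exponential pulse shape sampled as the sum of two exponentials, a Gaussian transit-time spread, and ideal first-photoelectron timing; all numerical values are illustrative rather than taken from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def first_pe_times(n_events=20000, mean_pe=1500, tau_r=0.1, tau_d=40.0, tts=0.14):
        """Arrival time of the first photoelectron for n_events scintillation
        pulses. Emission follows a bi-exponential pulse shape (rise tau_r,
        decay tau_d, in ns), sampled as the sum of two exponentials; each time
        is smeared by the PMT transit-time spread (sigma tts, ns)."""
        t_first = np.empty(n_events)
        for i in range(n_events):
            n_pe = rng.poisson(mean_pe)
            t = rng.exponential(tau_r, n_pe) + rng.exponential(tau_d, n_pe)
            t += rng.normal(0.0, tts, n_pe)
            t_first[i] = t.min()          # ideal first-photoelectron timing
        return t_first

    sigma = first_pe_times().std()
    print(f"single detector: {sigma*1e3:.0f} ps rms, "
          f"coincidence FWHM: {2.355*np.sqrt(2)*sigma*1e3:.0f} ps")
    ```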

  8. Monte carlo uncertainty analysis for photothermal radiometry measurements using a curve fit process

    NASA Astrophysics Data System (ADS)

    Horne, Kyle; Fleming, Austin; Timmins, Ben; Ban, Heng

    2015-12-01

    Photothermal radiometry (PTR) has become a popular method to measure thermal properties of layered materials. Much research has been done to determine the capabilities of PTR, but with little uncertainty analysis. This study reports a Monte Carlo uncertainty analysis to quantify uncertainty of film diffusivity and effusivity measurements, presents a sensitivity study for each input parameter, compares linear and logarithmic spacing of data points on frequency scans, and investigates the validity of a one-dimensional heat transfer assumption. Logarithmic spacing of frequencies when taking data is found to be unequivocally superior to linear spacing, while the use of a higher-dimensional heat transfer model is only needed for certain measurement configurations. The sensitivity analysis supports the frequency spacing conclusion, as well as explains trends seen in the uncertainty data.
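
    The Monte Carlo curve-fit procedure can be sketched generically: perturb synthetic frequency-scan data within the measurement uncertainty, refit each realization, and read the parameter uncertainty off the spread of the fits. The model function below is a placeholder, not the actual layered PTR phase expression:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)

    def model(freq, diffusivity):
        """Placeholder frequency-scan model: the signal decays with the thermal
        diffusion length, which scales as sqrt(diffusivity/freq)."""
        return np.exp(-np.sqrt(freq / diffusivity))

    freqs = np.logspace(0, 4, 25)      # logarithmic spacing, per the paper's finding
    true_d = 50.0
    clean = model(freqs, true_d)

    fits = []
    for _ in range(1000):              # Monte Carlo loop: perturb data, refit
        noisy = clean * (1.0 + rng.normal(0.0, 0.02, freqs.size))
        popt, _ = curve_fit(model, freqs, noisy, p0=[30.0])
        fits.append(popt[0])

    fits = np.asarray(fits)
    print(f"diffusivity: {fits.mean():.1f} +/- {fits.std():.1f} (true {true_d})")
    ```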

  9. Noninvasive spectral imaging of skin chromophores based on multiple regression analysis aided by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2011-08-01

    In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation of diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments on the skin of the human hand during upper-limb occlusion and on the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
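
    The regression step can be sketched with ordinary least squares, using placeholder extinction coefficients and a synthetic absorbance spectrum (the Monte-Carlo-derived conversion vectors for melanin and blood concentrations are omitted):

    ```python
    import numpy as np

    # Placeholder extinction coefficients at 500, 520, 540, 560, 580, 600 nm for
    # melanin, oxygenated Hb, deoxygenated Hb (columns); real work uses tabulated
    # spectra.
    eps = np.array([
        [1.20, 0.32, 0.41],
        [1.05, 0.45, 0.38],
        [0.92, 0.58, 0.52],
        [0.81, 0.49, 0.55],
        [0.72, 0.61, 0.47],
        [0.65, 0.28, 0.33],
    ])
    X = np.column_stack([eps, np.ones(len(eps))])  # intercept absorbs baseline losses

    # Synthetic "measured" absorbance built from known coefficients
    # (melanin 0.5, HbO2 0.8, Hb 0.2, baseline 0.05) to verify the recovery:
    absorbance = X @ np.array([0.5, 0.8, 0.2, 0.05])

    coef, *_ = np.linalg.lstsq(X, absorbance, rcond=None)
    a_mel, a_hbo2, a_hb, _ = coef
    print(f"oxygen saturation: {a_hbo2 / (a_hbo2 + a_hb):.2f}")  # -> 0.80
    ```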

  10. Monte Carlo models and analysis of galactic disk gamma-ray burst distributions

    NASA Technical Reports Server (NTRS)

    Hakkila, Jon

    1989-01-01

    Gamma-ray bursts are transient astronomical phenomena which have no quiescent counterparts in any region of the electromagnetic spectrum. Although temporal and spectral properties indicate that these events are likely energetic, their unknown spatial distribution complicates astrophysical interpretation. Monte Carlo samples of gamma-ray burst sources belonging to Galactic disk populations are created. Spatial analysis techniques are used to compare these samples to the observed distribution. From this, both quantitative and qualitative conclusions are drawn concerning the allowed luminosity and spatial distributions of the actual sample. Although the Burst and Transient Source Experiment (BATSE) on the Gamma Ray Observatory (GRO) will significantly improve knowledge of the gamma-ray burst source spatial characteristics within only a few months of launch, the analysis techniques described herein will not be superseded. Rather, they may be used with BATSE results to obtain detailed information about both the luminosity and spatial distributions of the sources.

  11. A bottom collider vertex detector design, Monte-Carlo simulation and analysis package

    SciTech Connect

    Lebrun, P.

    1990-10-01

    A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design based on double-sided strip detectors is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the "golden" CP-violating mode B_d → π+π− is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques and related software rather than physics potential. 20 refs., 46 figs.

  12. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, in particular lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB
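
    The variance-reduction idea can be illustrated with the baseline it improves upon: finite-difference sensitivity estimated with common random numbers (CRN), i.e., driving the perturbed and unperturbed chains with the same random stream. The toy model below (a caricature of adsorption/desorption on a 1D lattice, with all names and numbers invented for illustration) shows CRN cutting the finite-difference error relative to independent sampling; the paper's goal-oriented couplings reduce the variance further:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def coverage(rate, u):
        """Toy adsorption/desorption chain on a 1D lattice driven by pre-drawn
        uniforms u (shape n_steps x 2); returns the final coverage. A stand-in
        for a KMC observable, invented for illustration."""
        lattice = np.zeros(100, dtype=bool)
        p_ads = rate / (rate + 1.0)
        for k in range(u.shape[0]):
            site = int(u[k, 0] * lattice.size)
            lattice[site] = u[k, 1] < p_ads      # adsorb, else desorb
        return lattice.mean()

    a, da, n = 2.0, 0.05, 200
    streams = [rng.random((2000, 2)) for _ in range(n)]

    # Finite-difference sensitivity with common vs independent random numbers:
    crn = np.array([(coverage(a + da, u) - coverage(a, u)) / da for u in streams])
    ind = np.array([(coverage(a + da, rng.random((2000, 2))) -
                     coverage(a, rng.random((2000, 2)))) / da for _ in range(n)])
    print(f"CRN: {crn.mean():.3f} +/- {crn.std(ddof=1)/np.sqrt(n):.3f}")
    print(f"independent: {ind.mean():.3f} +/- {ind.std(ddof=1)/np.sqrt(n):.3f}")
    # exact d<coverage>/d(rate) = 1/(rate+1)^2 ~ 0.111
    ```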

  13. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-01

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, in particular lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB

  14. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    SciTech Connect

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, in particular lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary

  15. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms.

    PubMed

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound obtained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442

  16. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    PubMed Central

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound obtained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442

  17. Use of Monte Carlo simulations for Cultural Heritage X-ray fluorescence analysis

    NASA Astrophysics Data System (ADS)

    Brunetti, Antonio; Golosio, Bruno; Schoonjans, Tom; Oliva, Piernicola

    2015-06-01

    The analytical study of Cultural Heritage objects often requires merely a qualitative determination of composition and manufacturing technology. However, sometimes a qualitative estimate is not sufficient, for example when dealing with multilayered metallic objects. Under such circumstances a quantitative estimate of the chemical contents of each layer is sometimes required in order to determine the technology that was used to produce the object. A quantitative analysis is often complicated by the surface state: roughness, corrosion, incrustations that remain even after restoration, due to efforts to preserve the patina. Furthermore, restorers will often add a protective layer on the surface. In all these cases standard quantitative methods such as the fundamental parameter based approaches are generally not applicable. An alternative approach is presented based on the use of Monte Carlo simulations for quantitative estimation.

  18. Contrast to Noise Ratio and Contrast Detail Analysis in Mammography: A Monte Carlo Study

    NASA Astrophysics Data System (ADS)

    Metaxas, V.; Delis, H.; Kalogeropoulou, C.; Zampakis, P.; Panayiotakis, G.

    2015-09-01

    The mammographic spectrum is one of the major factors affecting image quality in mammography. In this study, a Monte Carlo (MC) simulation model was used to evaluate the image quality characteristics of various mammographic spectra. The anode/filter combinations evaluated were those traditionally used in mammography, for tube voltages between 26 and 30 kVp. The imaging performance was investigated in terms of Contrast to Noise Ratio (CNR) and Contrast Detail (CD) analysis, involving human observers and utilizing a mathematical CD phantom. Soft spectra provided the best characteristics in terms of both CNR and CD scores, while the tube voltage had a limited effect. W-anode spectra filtered with k-edge filters demonstrated improved performance that was sometimes better than that of the softer x-ray spectra produced by Mo or Rh anodes. Regarding the filter material, k-edge filters showed superior performance compared to Al filters.
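
    For reference, the Contrast to Noise Ratio used in such image-quality studies is conventionally defined as (a standard definition; the paper's exact estimator may differ):

    \[
    \mathrm{CNR} = \frac{\bar{S}_{\mathrm{detail}} - \bar{S}_{\mathrm{background}}}{\sigma_{\mathrm{background}}},
    \]

    where \(\bar{S}\) denotes the mean signal in the detail or background region and \(\sigma_{\mathrm{background}}\) the standard deviation of the background.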

  19. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    SciTech Connect

    Slattery, S. R.; Wilson, P. P. H.; Evans, T. M.

    2013-07-01

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)
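
    The adjoint Neumann-Ulam scheme that the analysis concerns can be sketched for a small linear system x = Hx + b with nonnegative H and sub-unit column sums: walks start from the source, move through the transposed kernel, and tally into every state they visit, and the expected walk length grows as the spectral radius of H approaches 1. This is a minimal illustration, not the authors' production implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def adjoint_mc_solve(H, b, n_walks=100000):
        """Adjoint Neumann-Ulam estimate of x in x = Hx + b, for H >= 0 with
        column sums < 1. Each walk starts at a state drawn from b, survives
        with probability equal to the local column sum, moves through the
        transposed kernel, and tallies into every state it visits; the expected
        visits reproduce the Neumann series sum_n H^n b."""
        n = len(b)
        col_sum = H.sum(axis=0)
        start_p = b / b.sum()
        x = np.zeros(n)
        for _ in range(n_walks):
            s = rng.choice(n, p=start_p)
            while True:
                x[s] += 1.0
                if rng.random() >= col_sum[s]:      # absorption ends the walk
                    break
                s = rng.choice(n, p=H[:, s] / col_sum[s])
        return x * b.sum() / n_walks

    H = np.array([[0.0, 0.4], [0.4, 0.0]])
    b = np.array([1.0, 2.0])
    print(adjoint_mc_solve(H, b), "vs exact", np.linalg.solve(np.eye(2) - H, b))
    ```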

  20. Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount

    SciTech Connect

    Supriyadi; Srigutomo, Wahyu; Munandar, Arif

    2014-03-24

    Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic one to take into account the uncertainties in the input parameters. The method yields a probability distribution of the potential power stored beneath the Lawu Mount geothermal area. For 10,000 iterations, the capacity of the geothermal resources is in the range of 139.30-218.24 MWe, with a most likely value of 177.77 MWe. The risk that the resource capacity exceeds 196.19 MWe is less than 10%. The power density of the prospect area covering 17 km² is 9.41 MWe/km², with a probability of 80%.
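
    A stochastic volumetric estimate of this kind can be sketched as below, using a simplified heat-in-place formula with hypothetical input distributions (the SNI method prescribes a more detailed formulation and parameter set):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 10_000                                      # iterations, as in the study

    area      = rng.triangular(15.0, 17.0, 19.0, n) # km^2
    thickness = rng.triangular(1.5, 2.0, 2.5, n)    # km
    delta_t   = rng.normal(200.0, 15.0, n)          # reservoir minus cutoff temperature, K
    heat_cap  = 2.7e15                              # volumetric heat capacity, J/(km^3 K)
    recovery  = rng.uniform(0.05, 0.15, n)          # recovery factor
    eff       = 0.10                                # conversion efficiency
    life_s    = 30.0 * 3.156e7                      # 30-year project life, seconds

    power_mwe = area * thickness * delta_t * heat_cap * recovery * eff / life_s / 1e6

    p10, p50, p90 = np.percentile(power_mwe, [10, 50, 90])
    print(f"P10/P50/P90: {p10:.0f}/{p50:.0f}/{p90:.0f} MWe")
    print(f"P(capacity > 196 MWe) = {(power_mwe > 196.0).mean():.2f}")
    ```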

  1. Backward Monte Carlo analysis on stray radiation of an infrared optical system

    NASA Astrophysics Data System (ADS)

    Chen, Xue; Sun, Chuang; Xia, Xinlin

    2013-09-01

    In an infrared optical system, the thermal radiation of high-temperature components is the major source of stray radiation that degrades the system performance. A backward Monte Carlo method based on radiation distribution factors is proposed to perform the stray radiation calculation. The theoretical deduction and some implementation techniques are presented, treating semitransparent elements such as the IR window as radiation emitters. The radiation distribution factors are calculated by ray tracing from the detector to the radiation sources. The propagation of stray radiation and its distribution on the detector are obtained simultaneously. It is unnecessary to repeat the ray tracing to study the effect of different temperatures for a given system, except when the geometry or radiative properties are changed. An infrared system is simulated using this method. Two different situations are discussed, and the analysis shows that the stray radiation is mainly created by the IR window and the lens tube.

  2. STS-1 operational flight profile. Volume 5: Descent, cycle 3. Appendix C: Monte Carlo dispersion analysis

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of three nonlinear Monte Carlo dispersion analyses for the Space Transportation System 1 Flight (STS-1) Orbiter Descent Operational Flight Profile, Cycle 3, are presented. Fifty randomly selected simulations each for the end-of-mission (EOM) descent, the abort-once-around (AOA) descent targeted to the steep target line, and the AOA descent targeted to the shallow target line are analyzed. These analyses compare the flight environment with system and operational constraints on the flight environment and, in some cases, use simplified system models as an aid in assessing the STS-1 descent flight profile. In addition, descent flight envelopes are provided as a database for use by system specialists to determine flight readiness for STS-1. The results of these dispersion analyses supersede the results of the dispersion analysis previously documented.

  3. Jet-Track Correlation Analysis of Monte Carlo Simulated Data for the CMS Experiment

    NASA Astrophysics Data System (ADS)

    Tabb, William; Evdokimov, Olga

    2015-10-01

    Collisions of ultra-relativistic heavy ion beams delivered at the Large Hadron Collider (LHC) and the Relativistic Heavy Ion Collider (RHIC) are used to study the properties of a special form of nuclear matter, termed Quark Gluon Plasma (QGP). Among the experimental tools actively employed to explore the QGP properties are jets: collimated streams of particles produced from hard-scattered partons in the initial state of the collision. The hot and dense medium produced in heavy-ion collisions interacts strongly with the partons traversing it, resulting in energy loss and modifications to the momentum distributions of the forming jets. A jet-track correlation analysis was developed and tested with Monte Carlo (MC) data samples simulating jet data for the CMS experiment at the LHC, allowing the study of several jet-related properties in order to better understand the medium effects on penetrating probes.

  4. Monte Carlo Analysis of the Commissioning Phase Maneuvers of the Soil Moisture Active Passive (SMAP) Mission

    NASA Technical Reports Server (NTRS)

    Williams, Jessica L.; Bhat, Ramachandra S.; You, Tung-Han

    2012-01-01

    The Soil Moisture Active Passive (SMAP) mission will perform soil moisture content and freeze/thaw state observations from a low-Earth orbit. The observatory is scheduled to launch in October 2014 and will perform observations from a near-polar, frozen, sun-synchronous Science Orbit during a 3-year data collection mission. At launch, the observatory is delivered to an Injection Orbit that is biased below the Science Orbit; the spacecraft maneuvers to the Science Orbit during the mission Commissioning Phase. The delta V needed to maneuver from the Injection Orbit to the Science Orbit is computed statistically via a Monte Carlo simulation; the 99th percentile delta V (delta V99) is carried as a line item in the mission delta V budget. This paper details the simulation and analysis performed to compute this figure, and reports the delta V99 computed with the current mission parameters.
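
    The statistical delta V computation reduces to taking a high percentile over dispersed trajectory simulations. A toy sketch, with an invented one-line stand-in for the maneuver simulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 5000

    # Stand-in for the dispersed maneuver simulation: injection-altitude errors
    # translate into maneuver cost (all numbers invented for illustration).
    alt_error_km = rng.normal(0.0, 10.0, n)
    dv = 25.0 + 0.55 * np.abs(alt_error_km) + rng.normal(0.0, 0.5, n)  # m/s

    dv99 = np.percentile(dv, 99)       # the delta V99 budget line item
    print(f"delta V99 = {dv99:.1f} m/s (mean {dv.mean():.1f} m/s)")
    ```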

  5. Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI

    SciTech Connect

    Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A.; Cordero, Raul R.

    2008-04-15

    A method for simultaneously measuring whole-field in-plane displacements using optical fibers, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single-mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers illuminates the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. The other pair of optical fibers is set to be sensitive only to vertical in-plane displacement. The fibers in each pair differ in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (the Takeda method), we obtain quantitative data on the whole-field displacements. We found the uncertainty associated with the phases by means of a Monte Carlo-based technique.

  6. An analysis of the convergence of the direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Galitzine, Cyril; Boyd, Iain D.

    2015-05-01

    In this article, a rigorous framework for the analysis of the convergence of the direct simulation Monte Carlo (DSMC) method is presented. It is applied to the simulation of two test cases: an axisymmetric jet at a Knudsen number of 0.01 and a Mach number of 1, and a two-dimensional cylinder flow at a Knudsen number of 0.05 and a Mach number of 10. The rate of convergence of sampled quantities is found to be well predicted by an extended form of the Central Limit Theorem that takes into account the correlation of samples but requires the calculation of correlation spectra. A simplified analytical model that does not require correlation spectra is then constructed to model the effect of sample correlation. It is then used to obtain an a priori estimate of the convergence error.
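
    The extended form of the Central Limit Theorem referred to here is commonly written as

    \[
    \operatorname{Var}\left(\bar{Q}_N\right) \approx \frac{\sigma^2}{N}\left(1 + 2\sum_{k=1}^{\infty}\rho_k\right),
    \]

    where \(\rho_k\) is the autocorrelation between samples separated by k sampling intervals; the parenthesized factor is the statistical cost of sample correlation, which is why the correlation spectra must be computed.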

  7. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin

    2015-12-01

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  8. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    SciTech Connect

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  9. Monte Carlo Example Programs

    Energy Science and Technology Software Center (ESTSC)

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground-state energy of the hydrogen atom.
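
    For context, a variational Monte Carlo calculation of this kind fits in a few lines. The sketch below (in Python rather than the original FORTRAN, and not the VARHATOM code itself) uses the trial wavefunction ψ = exp(−αr), for which the local energy is E_L = −α²/2 + (α−1)/r in atomic units and the exact ground state, −0.5 hartree, is recovered at α = 1:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def vmc_hydrogen(alpha, n_steps=50000, step=0.5, burn=5000):
        """Variational Monte Carlo for the hydrogen ground state (atomic units)
        with trial wavefunction psi = exp(-alpha * r): Metropolis sampling of
        |psi|^2, averaging the local energy E_L = -alpha^2/2 + (alpha - 1)/r."""
        r = np.array([1.0, 0.0, 0.0])
        r_norm = 1.0
        energies = []
        for _ in range(n_steps):
            trial = r + rng.uniform(-step, step, 3)
            trial_norm = np.linalg.norm(trial)
            # Metropolis acceptance with ratio |psi(trial)|^2 / |psi(r)|^2
            if rng.random() < np.exp(-2.0 * alpha * (trial_norm - r_norm)):
                r, r_norm = trial, trial_norm
            energies.append(-0.5 * alpha**2 + (alpha - 1.0) / r_norm)
        return np.mean(energies[burn:])

    for a in (0.8, 0.9, 1.0):              # variational minimum at alpha = 1
        print(f"alpha = {a}: E = {vmc_hydrogen(a):+.4f} hartree")
    ```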

  10. Simulation of a complete triple turbo molecular pumping stage using direct simulation Monte Carlo in 3D

    NASA Astrophysics Data System (ADS)

    Rose, Martin

    2014-12-01

    A triple-stage turbomolecular pump is simulated using the DSMC method in 3D. A 90° sector of the complete pump is simulated, taking the symmetry of the pump into account. Simulations were performed for various foreline pressures in order to determine the compression ratio and the maximum pumping speed. Various features of the three-dimensional flow field are discussed, as well as the CPU time required to obtain the flow field. The simulations presented here are a powerful tool for the design and improvement of turbomolecular pumps.

  11. Generation of SFR few-group constants using the Monte Carlo code Serpent

    SciTech Connect

    Fridman, E.; Rachamin, R.; Shwageraus, E.

    2013-07-01

    In this study, the Serpent Monte Carlo code was used as a tool for the preparation of homogenized few-group cross sections for the nodal diffusion analysis of Sodium cooled Fast Reactor (SFR) cores. Few-group constants for two reference SFR cores were generated by Serpent and then employed by the nodal diffusion code DYN3D in 2D full-core calculations. The DYN3D results were verified against the reference full-core Serpent Monte Carlo solutions. A good agreement between the reference Monte Carlo and nodal diffusion results was observed, demonstrating the feasibility of using Serpent for the generation of few-group constants for deterministic SFR analysis. (authors)

  12. Patient-Specific 3D Pretreatment and Potential 3D Online Dose Verification of Monte Carlo-Calculated IMRT Prostate Treatment Plans

    SciTech Connect

    Boggula, Ramesh; Jahnke, Lennart; Wertz, Hansjoerg; Lohr, Frank; Wenz, Frederik

    2011-11-15

    Purpose: Fast and reliable comprehensive quality assurance tools are required to validate the safety and accuracy of complex intensity-modulated radiotherapy (IMRT) plans for prostate treatment. In this study, we evaluated the performance of the COMPASS system for both off-line and potential online procedures for the verification of IMRT treatment plans. Methods and Materials: COMPASS has a dedicated beam model and dose engine; it can reconstruct three-dimensional dose distributions on the patient anatomy based on measured fluences using either the MatriXX two-dimensional (2D) array (offline) or a 2D transmission detector (T2D) (online). For benchmarking the COMPASS dose calculation, various dose-volume indices were compared against Monte Carlo-calculated dose distributions for five prostate patient treatment plans. Gamma index evaluation and absolute point dose measurements were also performed in an inhomogeneous pelvis phantom using extended dose range films and an ion chamber for five additional treatment plans. Results: MatriXX-based dose reconstruction showed excellent agreement with the ion chamber (<0.5%, except for one treatment plan, which showed 1.5%), film ({approx}100% pixels passing gamma criteria 3%/3 mm) and mean dose-volume indices (<2%). The T2D-based dose reconstruction showed good agreement as well with ion chamber (<2%), film ({approx}99% pixels passing gamma criteria 3%/3 mm), and mean dose-volume indices (<5.5%). Conclusion: The COMPASS system qualifies for routine prostate IMRT pretreatment verification with the MatriXX detector and has the potential for on-line verification of treatment delivery using T2D.

  13. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator's mistakes, physical damage, and design problems. The analytical methods used are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment. PMID:25487461
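
    To illustrate the combination of minimal cut sets with Monte Carlo used here, the sketch below samples hypothetical basic events (probabilities and cut sets invented, not the plant's actual FTA data) and estimates the top-event probability with its standard error.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical basic-event probabilities (e.g., operator error,
        # physical damage, design problem) and minimal cut sets as index lists.
        p_basic = np.array([0.02, 0.01, 0.005, 0.03])
        cut_sets = [[0], [1, 2], [1, 3]]   # top event occurs if any cut set occurs

        n = 200_000
        events = rng.random((n, p_basic.size)) < p_basic   # sampled basic events
        top = np.zeros(n, dtype=bool)
        for cs in cut_sets:
            top |= events[:, cs].all(axis=1)               # whole cut set occurred

        p_top = top.mean()
        se = top.std(ddof=1) / np.sqrt(n)
        print(f"P(top event) ~ {p_top:.4f} +/- {se:.4f}")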

  14. An alternative approach to probabilistic seismic hazard analysis in the Aegean region using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Weatherill, Graeme; Burton, Paul W.

    2010-09-01

    The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region, and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility to the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and the Next Generation Attenuation (NGA) model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. Site condition and fault-type are also integrated into the hazard
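
    A stripped-down version of the Monte Carlo PSHA recipe (one source zone; the Gutenberg-Richter and attenuation coefficients below are invented placeholders, not the Aegean models of the paper) illustrates how simulated catalogs yield ground-motion exceedance levels directly.

        import numpy as np

        rng = np.random.default_rng(0)

        rate, b, m_min, m_max = 2.0, 1.0, 4.5, 7.5   # zone seismicity (invented)
        c0, c1, c2, sigma = -4.0, 1.0, 1.3, 0.6      # toy GMPE for ln PGA [g]

        def sample_magnitudes(n):
            """Inverse-CDF draws from a truncated Gutenberg-Richter law."""
            beta = b * np.log(10.0)
            span = 1.0 - np.exp(-beta * (m_max - m_min))
            return m_min - np.log(1.0 - rng.random(n) * span) / beta

        years = 100_000                               # long synthetic catalog
        n_ev = rng.poisson(rate * years)
        m = sample_magnitudes(n_ev)
        r = rng.uniform(10.0, 150.0, n_ev)            # source-site distance, km
        pga = np.exp(c0 + c1 * m - c2 * np.log(r) + sigma * rng.standard_normal(n_ev))

        # PGA with 10% probability of exceedance in 50 yr (475-yr return period):
        k = int(years / 475)                          # expected exceedances in catalog
        print("475-yr PGA ~ %.3f g" % np.sort(pga)[-k])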

  15. Analysis of polytype stability in PVT grown silicon carbide single crystal using competitive lattice model Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Guo, Hui-Jun; Huang, Wei; Liu, Xi; Gao, Pan; Zhuo, Shi-Yi; Xin, Jun; Yan, Cheng-Feng; Zheng, Yan-Qing; Yang, Jian-Hua; Shi, Er-Wei

    2014-09-01

    Polytype stability is very important for high-quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. Kinetic aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool for studying surface kinetics in crystal growth. However, present lattice models for kinetic Monte Carlo simulations cannot treat the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competitive growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment. Surface steps on seeds and large diffusion/deposition ratios have positive effects on 4H polytype stability. The 3D polytype distribution in a SiC ingot grown by the physical vapor transport method showed that the facet preserved the 4H polytype even when the 6H polytype dominated the growth surface. The theoretical and experimental results on polytype growth in SiC suggest that retaining the step growth mode is an important factor in maintaining a stable single 4H polytype during SiC growth.
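
    The elementary move of any such simulation is the rejection-free kinetic Monte Carlo step; the sketch below (generic Gillespie/BKL form with invented rates, not the paper's competitive 4H/6H lattice model) picks an event in proportion to its rate and advances the clock by an exponential waiting time.

        import numpy as np

        rng = np.random.default_rng(6)

        def kmc_step(rates, t):
            """One rejection-free KMC step: choose event ~ rate, advance time."""
            total = rates.sum()
            event = int(np.searchsorted(np.cumsum(rates), rng.random() * total))
            return event, t - np.log(rng.random()) / total

        rates = np.array([5.0, 1.0, 0.2])   # e.g., diffusion, 4H attach, 6H attach
        t, counts = 0.0, np.zeros(3, dtype=int)
        for _ in range(10_000):
            e, t = kmc_step(rates, t)
            counts[e] += 1
        print("event fractions:", counts / counts.sum(), "elapsed time:", t)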

  16. Derivation of landslide-triggering thresholds by Monte Carlo simulation and ROC analysis

    NASA Astrophysics Data System (ADS)

    Peres, David Johnny; Cancelliere, Antonino

    2015-04-01

    Rainfall thresholds for landslide triggering are useful in early warning systems to be implemented in prone areas. Direct statistical analysis of historical records of rainfall and landslide data presents several shortcomings, typically due to incompleteness of landslide historical archives, imprecise knowledge of the triggering instants, unavailability of a rain gauge located near the landslides, etc. In this work, a Monte Carlo approach to derive and evaluate landslide-triggering thresholds is presented. Such an approach helps overcome some of the above-mentioned shortcomings of direct empirical analysis of observed data. The proposed Monte Carlo framework combines a stochastic rainfall model with hydrological and slope-stability models. Specifically, 1000-year-long hourly synthetic rainfall and related slope-stability factor of safety data are generated by coupling the Neyman-Scott rectangular pulses model with the TRIGRS unsaturated model (Baum et al., 2008) and a linear-reservoir water table recession model. Triggering and non-triggering rainfall events are then distinguished and analyzed to derive stochastic-input physically based thresholds that optimize the trade-off between correct and wrong predictions. For this purpose, receiver operating characteristic (ROC) indices are used. An application of the method to the highly landslide-prone area of the Peloritani mountains in north-eastern Sicily (Italy) is carried out. A threshold for the area is derived and successfully validated by comparison with thresholds proposed by other researchers. Moreover, the uncertainty in threshold derivation due to variability of rainfall intensity within events and to antecedent rainfall is investigated. Results indicate that variability of intensity during rainfall events significantly influences the rainfall intensity and duration associated with landslide triggering. A representation of rainfall as constant-intensity hyetographs globally leads to
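
    The threshold-selection step can be sketched compactly: generate labeled events, scan candidate intensity-duration power laws, and keep the one maximizing a ROC index. Everything below (the synthetic event generator, the candidate grid, the use of the true skill statistic) is an invented stand-in for the paper's TRIGRS-based chain.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic events: duration D (h), mean intensity I (mm/h), noisy labels.
        n = 5000
        D = rng.uniform(1, 72, n)
        I = rng.lognormal(1.0, 0.8, n) / np.sqrt(D)
        triggered = I > 8.0 * D**-0.6 * rng.lognormal(0, 0.2, n)

        # Scan thresholds I = a * D**b, maximizing TSS = hit rate - false alarms.
        best_params, best_tss = None, -1.0
        for a in np.linspace(2, 20, 60):
            for bexp in np.linspace(-1.0, -0.2, 40):
                pred = I > a * D**bexp
                tss = pred[triggered].mean() - pred[~triggered].mean()
                if tss > best_tss:
                    best_params, best_tss = (a, bexp), tss
        print("I = %.2f * D^%.2f  (TSS = %.2f)" % (*best_params, best_tss))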

  17. 3D Monte Carlo simulation of solar radiance in the clear-sky and low-cloud atmosphere for retrieval of aerosol and cloud characteristics

    NASA Astrophysics Data System (ADS)

    Zhuravleva, Tatiana; Bedareva, Tatiana; Nasrtdinov, Ilmir

    As is well known, spectral measurements of direct and diffuse solar radiation can be used to retrieve the optical and microphysical characteristics of atmospheric aerosol and clouds. Most methods of radiation calculation used to solve the inverse problems are implemented under the assumption of horizontal homogeneity of the atmosphere (clear-sky and overcast conditions). However, it is recognized that the 3D effects of clouds have a significant impact on the transfer of solar radiation in the atmosphere, which can be a source of errors in the retrieval of aerosol and cloud properties. In this work, we present algorithms of the Monte Carlo method for calculating the angular structure of diffuse radiation in the molecular-aerosol atmosphere and in the presence of an isolated cloud. The simulation of radiative characteristics with a specified spectral resolution is performed in a spherical model of the atmosphere for the conditions of observations at the Earth's surface and at the top of the atmosphere. The cloud is approximated by an inverted paraboloid. Molecular absorption is accounted for on the basis of an approximation of the transmission function by short exponential series (the k-distribution method). The specific features of the radiative transfer caused by the 3D effects of clouds are considered as functions of the cloud's location in space and its size, the sensing scheme, and the illumination conditions. The simulated brightness fields for the clear sky and for the presence of an isolated cloud are compared. This work was supported in part by the Russian Foundation for Basic Research (grant no. 12-05-00169).

  18. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    PubMed

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2010-01-01

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations. PMID:20160682

  19. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    PubMed

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainties in many design parameters and measured data. A stability failure risk ratio described jointly by probability and possibility is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. Stability of a gravity dam is viewed as a hybrid event, considering both the fuzziness and the randomness of the failure criterion, design parameters, and measured data. A credibility distribution function is introduced as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The risk assessment obtained reflects the influence of both sorts of uncertainty and is suitable as an index value. PMID:27386264

  20. Statistical modification analysis of helical planetary gears based on response surface method and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Guo, Fan

    2015-11-01

    The tooth modification technique is widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of tooth modification amount variations on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation processes to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.

  1. Monte Carlo simulations of image analysis for flexible and high-resolution registration metrology

    NASA Astrophysics Data System (ADS)

    Arnz, M.; Klose, G.; Troll, G.; Beyer, D.; Mueller, A.

    2009-01-01

    The continuous progress of PROVE, the new photomask registration and overlay measurement tool currently under development at Carl Zeiss, has been reported at mask-related conferences since its first publication at EMLC 2008. In the past year the project has moved from a final design on paper to functional hardware in the lab. Major tool components such as the climate control unit, the automated mask handling system and the metrology stage have been assembled and successfully tested. The scope of this paper is to report on the current status of PROVE and furthermore to present results from simulations utilizing the image analysis routines of the tool. Monte-Carlo simulations were used to analyze the impact of several realistic tool limitations (camera noise, stage and focus noise, and imaging telecentricity) on the image analysis process. The evaluation itself was based on a conventional threshold approach to perform both registration and CD measurement simultaneously. The results show that the routines can deal with the tool imperfections and limit the contribution to the reproducibility error for standard registration markers to a negligible level. Even single contact holes suffer only small errors when camera noise is low and image averaging is increased. Employing a generally used test pattern, the CD test results also confirm a sufficiently small error contribution to the CD non-uniformity reproducibility.

  2. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    SciTech Connect

    Hall, Howard L

    2012-01-01

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation, or thermochromatography, has been used in the past for rapid separations in the study of newly created elements and as a basis for chemical classification of those elements. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.

  3. A Monte Carlo error analysis program for near-Mars, finite-burn, orbital transfer maneuvers

    NASA Technical Reports Server (NTRS)

    Green, R. N.; Hoffman, L. H.; Young, G. R.

    1972-01-01

    A computer program was developed which performs an error analysis of a minimum-fuel, finite-thrust, transfer maneuver between two Keplerian orbits in the vicinity of Mars. The method of analysis is the Monte Carlo approach where each off-nominal initial orbit is targeted to the desired final orbit. The errors in the initial orbit are described by two covariance matrices of state deviations and tracking errors. The function of the program is to relate these errors to the resulting errors in the final orbit. The equations of motion for the transfer trajectory are those of a spacecraft maneuvering with constant thrust and mass-flow rate in the neighborhood of a single body. The thrust vector is allowed to rotate in a plane with a constant pitch rate. The transfer trajectory is characterized by six control parameters and the final orbit is defined, or partially defined, by the desired target parameters. The program is applicable to the deboost maneuver (hyperbola to ellipse), orbital trim maneuver (ellipse to ellipse), fly-by maneuver (hyperbola to hyperbola), escape maneuvers (ellipse to hyperbola), and deorbit maneuver.

  4. Analysis of Monte Carlo methods applied to blackbody and lower emissivity cavities.

    PubMed

    Pahl, Robert J; Shannon, Mark A

    2002-02-01

    Monte Carlo methods are often applied to the calculation of the apparent emissivities of blackbody cavities. However, for cavities with complex as well as some commonly encountered geometries, the emission Monte Carlo method experiences problems of convergence. The emission and absorption Monte Carlo methods are compared on the basis of ease of implementation and convergence speed when applied to blackbody sources. A new method to determine solution convergence compatible with both methods is developed, and the convergence speeds of the two methods are compared through the application of both methods to a right-circular cylinder cavity. It is shown that the absorption method converges faster and is easier to implement than the emission method when applied to most blackbody and lower emissivity cavities. PMID:11993915
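
    The difference between the two estimators is easy to see in a geometry where the ray tracing collapses: in an idealized diffuse spherical cavity, every emitted or reflected ray lands uniformly over the interior surface, so both methods reduce to coin flips (this toy geometry, the aperture fraction and the emissivity are assumptions for illustration; the paper's right-circular cylinder requires real ray tracing).

        import numpy as np

        rng = np.random.default_rng(7)

        f, eps, N = 0.05, 0.7, 200_000   # aperture area fraction, wall emissivity

        # Absorption method: photons enter through the aperture; the fraction
        # absorbed equals the apparent emissivity by reciprocity.
        absorbed = 0
        for _ in range(N):
            while True:
                if rng.random() < f:        # lands on the aperture -> escapes
                    break
                if rng.random() < eps:      # absorbed at the wall
                    absorbed += 1
                    break                    # else diffusely reflected; loop again

        # Emission method: photons emitted by the wall; score escapes and
        # normalize by blackbody emission through the aperture.
        escaped = 0
        for _ in range(N):
            while True:
                if rng.random() < f:
                    escaped += 1
                    break
                if rng.random() < eps:
                    break

        print("absorption estimate:", absorbed / N)
        print("emission estimate  :", eps * (1 - f) / f * (escaped / N))
        print("analytic value     :", eps * (1 - f) / (eps * (1 - f) + f))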

  5. A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data

    SciTech Connect

    Garner, James R; Whitaker, J Michael

    2013-01-01

    As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring of the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations. Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.

  6. Markov chain Monte Carlo analysis to constrain dark matter properties with directional detection

    SciTech Connect

    Billard, J.; Mayet, F.; Santos, D.

    2011-04-01

    Directional detection is a promising dark matter search strategy. Indeed, weakly interacting massive particle (WIMP)-induced recoils would present a direction dependence toward the Cygnus constellation, while background-induced recoils exhibit an isotropic distribution in the Galactic rest frame. Taking advantage of these characteristic features, and even in the presence of a sizeable background, it has recently been shown that data from forthcoming directional detectors could lead either to a competitive exclusion or to a conclusive discovery, depending on the value of the WIMP-nucleon cross section. However, it is possible to further exploit these upcoming data by using the strong dependence of the WIMP signal on the WIMP mass and the local WIMP velocity distribution. Using a Markov chain Monte Carlo analysis of recoil events, we show for the first time the possibility of constraining the unknown WIMP parameters, both from particle physics (mass and cross section) and the Galactic halo (velocity dispersion along the three axes), leading to an identification of non-baryonic dark matter.

  7. Monte Carlo analysis of dissociation and recombination behind strong shock waves in nitrogen

    NASA Technical Reports Server (NTRS)

    Boyd, I. D.

    1991-01-01

    Computations are presented for the relaxation zone behind strong, 1D shock waves in nitrogen. The analysis is performed with the direct simulation Monte Carlo method (DSMC). The DSMC code is vectorized for efficient use on a supercomputer. The code simulates translational, rotational and vibrational energy exchange and dissociative and recombinative chemical reactions. A model is proposed for the treatment of three-body recombination collisions in the DSMC technique, which usually simulates binary collision events. The model improves on previous models because it can be employed with a large range of chemical-rate data, does not introduce into the flow field troublesome pairs of atoms which may recombine upon further collision (pseudoparticles), and is compatible with the vectorized code. The computational results are compared with existing experimental data. It is shown that the derivation of chemical-rate coefficients must account for the degree of vibrational nonequilibrium in the flow. A nonequilibrium-chemistry model is employed together with equilibrium-rate data to compute the flow in several different nitrogen shock waves.

  8. Improving Bayesian analysis for LISA Pathfinder using an efficient Markov Chain Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ferraioli, Luigi; Porter, Edward K.; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Gibert, Ferran; Hewitson, Martin; Hueller, Mauro; Karnesis, Nikolaos; Korsakova, Natalia; Nofrarias, Miquel; Plagnol, Eric; Vitale, Stefano

    2014-02-01

    We present a parameter estimation procedure based on a Bayesian framework, applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of the LISA Pathfinder satellite. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher information matrix. The algorithm proposing jumps in the eigen-space of the Fisher information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we return parameter values that are all within ~1σ of the injected values. When we analyse the accuracy of our parameter estimation in terms of the effect it has on the force-per-unit-mass noise, we find that the induced errors are three orders of magnitude less than the expected experimental uncertainty in the power spectral density.
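
    The eigen-space proposal is simple to write down; the sketch below (a toy 2-D correlated Gaussian target, where the Fisher matrix is just the inverse covariance; all tuning constants are assumptions) shows Metropolis-Hastings jumps built from the Fisher eigen-pairs.

        import numpy as np

        rng = np.random.default_rng(42)

        cov = np.array([[1.0, 0.95], [0.95, 1.0]])     # strongly correlated target
        prec = np.linalg.inv(cov)                      # Fisher matrix for a Gaussian
        log_post = lambda x: -0.5 * x @ prec @ x

        evals, evecs = np.linalg.eigh(prec)            # Fisher eigen-pairs
        scales = 2.4 / np.sqrt(evals * 2)              # per-direction jump sizes

        x, lp, accept, n = np.zeros(2), 0.0, 0, 50_000
        for _ in range(n):
            y = x + evecs @ (scales * rng.standard_normal(2))  # eigen-space jump
            lpy = log_post(y)
            if np.log(rng.random()) < lpy - lp:        # Metropolis accept/reject
                x, lp, accept = y, lpy, accept + 1
        print("acceptance rate:", accept / n)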

  9. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    SciTech Connect

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
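
    For readers unfamiliar with Neumann-Ulam schemes, the sketch below solves a small diagonally dominant system by random walks on the Jacobi splitting x = Hx + c (a forward-walk variant; the paper's adjoint method runs equivalent walks on the transpose and tallies all components at once; the matrix here is invented).

        import numpy as np

        rng = np.random.default_rng(5)

        A = np.array([[ 4.0, -1.0,  0.0],
                      [-1.0,  4.0, -1.0],
                      [ 0.0, -1.0,  4.0]])
        b = np.array([1.0, 2.0, 3.0])
        D = np.diag(A)
        H = np.eye(3) - A / D[:, None]     # iteration matrix, spectral radius < 1
        c = b / D

        P = np.abs(H)                      # transition probabilities (rows sum < 1)
        kill = 1.0 - P.sum(axis=1)         # termination probability per state

        def one_walk(i):
            """Collision estimator for x_i = c_i + sum_j H_ij x_j."""
            w, tally = 1.0, c[i]
            while True:
                r = rng.random()
                if r < kill[i]:
                    return tally
                j = int(np.searchsorted(np.cumsum(P[i]), r - kill[i]))
                w *= np.sign(H[i, j])      # weight H_ij / P_ij = sign(H_ij) here
                i = j
                tally += w * c[i]

        est = [np.mean([one_walk(i) for _ in range(20_000)]) for i in range(3)]
        print("MC estimate :", np.round(est, 4))
        print("direct solve:", np.linalg.solve(A, b))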

  11. The use of Monte Carlo analysis for exposure assessment of an estuarine food web

    SciTech Connect

    Iannuzzi, T.J.; Shear, N.M.; Harrington, N.W.; Henning, M.H.

    1995-12-31

    Despite apparent agreement within the scientific community that probabilistic methods of analysis offer substantially more informative exposure predictions than those offered by the traditional point estimate approach, few risk assessments conducted or approved by state and federal regulatory agencies have used probabilistic methods. Among the likely deterrents to application of probabilistic methods to ecological risk assessment is the absence of "standard" data distributions that are considered applicable to most conditions for a given ecological receptor. Indeed, point estimates of ecological exposure factor values for a limited number of wildlife receptors have only recently been published. The Monte Carlo method of probabilistic modeling has received increasing support as a promising technique for characterizing uncertainty and variation in estimates of exposure to environmental contaminants. An evaluation of literature on the behavior, physiology, and ecology of estuarine organisms was conducted in order to identify those variables that most strongly influence uptake of xenobiotic chemicals from sediments, water and food sources. The ranges, central tendencies, and distributions of several key parameter values for polychaetes (Nereis sp.), mummichog (Fundulus heteroclitus), blue crab (Callinectes sapidus), and striped bass (Morone saxatilis) in east coast estuaries were identified. Understanding the variation in such factors, which include feeding rate, growth rate, feeding range, excretion rate, respiration rate, body weight, lipid content, food assimilation efficiency, and chemical assimilation efficiency, is critical to understanding the mechanisms that control the uptake of xenobiotic chemicals in aquatic organisms, and to the ability to estimate bioaccumulation from chemical exposures in the aquatic environment.

  12. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    NASA Technical Reports Server (NTRS)

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D., Jr. (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
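
    The two stages of the strategy, screening by linear sensitivity coefficients and then a Monte Carlo response surface, are easy to see on a stand-in model (the toy function and parameter names below are invented; the actual crop model is far richer).

        import numpy as np

        rng = np.random.default_rng(11)

        def model(p):
            """Stand-in 'plant growth' response of three parameters."""
            photo, resp, alloc = p
            return 10 * photo - 3 * resp + 2 * alloc * photo

        nominal = np.array([1.0, 0.5, 0.8])

        # Stage 1: screening via central-difference sensitivity coefficients.
        grad = np.empty(3)
        for k in range(3):
            dp = np.zeros(3); dp[k] = 1e-4 * nominal[k]
            grad[k] = (model(nominal + dp) - model(nominal - dp)) / (2 * dp[k])
        print("normalized sensitivities:", grad * nominal / model(nominal))

        # Stage 2: Monte Carlo sampling plus a least-squares response surface.
        X = nominal * (1 + 0.1 * rng.standard_normal((500, 3)))   # +/-10% inputs
        y = np.array([model(x) for x in X])
        design = np.column_stack([np.ones(len(X)), X])            # linear surface
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        print("response-surface coefficients:", np.round(coef, 3))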

  13. Monte Carlo simulations of GeoPET experiments: 3D images of tracer distributions (18F, 124I and 58Co) in Opalinus clay, anhydrite and quartz

    NASA Astrophysics Data System (ADS)

    Zakhnini, Abdelhamid; Kulenkampff, Johannes; Sauerzapf, Sophie; Pietrzyk, Uwe; Lippmann-Pipke, Johanna

    2013-08-01

    Understanding conservative fluid flow and reactive tracer transport in soils and rock formations requires quantitative transport visualization methods in 3D+t. After a decade of research and development we established the GeoPET as a non-destructive method with unrivalled sensitivity and selectivity, with due spatial and temporal resolution, by applying Positron Emission Tomography (PET), a nuclear medicine imaging method, to dense rock material. Requirements for reaching the physical limit of image resolution of nearly 1 mm are (a) a high-resolution PET camera, like our ClearPET scanner (Raytest), and (b) appropriate correction methods for scatter and attenuation of 511 keV photons in the dense geological material. The latter are by far more significant in dense geological material than in human and small animal body tissue (water). Here we present data from Monte Carlo simulations (MCS) reflecting selected GeoPET experiments. The MCS consider all involved nuclear physical processes of the measurement with the ClearPET system and allow us to quantify the sensitivity of the method and the scatter fractions in geological media as a function of material (quartz, Opalinus clay and anhydrite compared to water), PET isotope (18F, 58Co and 124I), and geometric system parameters. The synthetic data sets obtained by MCS are the basis for detailed performance assessment studies allowing for image quality improvements. A scatter correction method is applied exemplarily by subtracting projections of simulated scattered coincidences from experimental data sets prior to image reconstruction with an iterative reconstruction process.

  14. TH-C-12A-08: New Compact 10 MV S-Band Linear Accelerator: 3D Finite-Element Design and Monte Carlo Dose Simulations

    SciTech Connect

    Baillie, D; St Aubin, J; Fallone, B; Steciw, S

    2014-06-15

    Purpose: To design a new compact S-band linac waveguide capable of producing a 10 MV x-ray beam, while maintaining the length (27.5 cm) of current 6 MV waveguides. This will allow higher x-ray energies to be used in our linac-MRI systems with the same footprint. Methods: The finite element software COMSOL Multiphysics was used to design an accelerator cavity matching one published in an experimental breakdown study, to ensure that our modeled cavities do not exceed the published threshold electric fields. This cavity was used as the basis for designing an accelerator waveguide, where each cavity of the full waveguide was tuned to resonate at 2.997 GHz by adjusting the cavity diameter. The RF field solution within the waveguide was calculated and, together with an electron-gun phase space generated using Opera3D/SCALA, was input into the electron tracking software PARMELA to compute the electron phase space striking the x-ray target. This target phase space was then used in BEAM Monte Carlo simulations to generate percent depth dose curves for this new linac, which were then used to re-optimize the waveguide geometry. Results: The shunt impedance, Q-factor, and peak-to-mean electric field ratio were matched to the published breakdown-study values to within 0.1% error. After tuning the full waveguide, the peak surface fields are calculated to be 207 MV/m, 13% below the breakdown threshold. The simulated beam has a d-max depth of 2.42 cm and a D10/20 value of 1.59, compared to 2.45 cm and 1.59, respectively, for a simulated Varian 10 MV linac, with a bremsstrahlung production efficiency 20% lower than that of the simulated Varian 10 MV linac. Conclusion: This work demonstrates the design of a functional 27.5 cm waveguide producing 10 MV photons with characteristics similar to a Varian 10 MV linac.

  15. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation implemented in the Monte Carlo code MCS is described. This method was applied to a calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows discrepancies between Monte Carlo calculations obtained in different ways: the MCS calculations with given experimental bucklings; the MCS calculations with bucklings evaluated from full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where this difference corresponded to a 0.5 percent increase in the Keff value.

  16. Effect of lag time distribution on the lag phase of bacterial growth - a Monte Carlo analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this study is to use Monte Carlo simulation to evaluate the effect of lag time distribution of individual bacterial cells incubated under isothermal conditions on the development of lag phase. The growth of bacterial cells of the same initial concentration and mean lag phase durati...

  17. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function of each prediction is then approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently
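
    The second stage, replacing MCMC draws with quasi-Monte Carlo points evaluated on a cheap surrogate, can be sketched as follows (a Halton sequence stands in for the quasi-MC generator and a quadratic stands in for the sparse-grid surrogate; the observation value and noise level are invented).

        import numpy as np

        def halton(n, dims, primes=(2, 3)):
            """Radical-inverse Halton points in [0,1)^dims (a quasi-MC set)."""
            out = np.empty((n, dims))
            for d in range(dims):
                base = primes[d]
                for i in range(n):
                    f, r, k = 1.0, 0.0, i + 1
                    while k > 0:
                        f /= base
                        r += f * (k % base)
                        k //= base
                    out[i, d] = r
            return out

        # Stand-in surrogate of the expensive forward model.
        surrogate = lambda p: 3.0 * p[..., 0] ** 2 + np.sin(3 * p[..., 1])
        obs, noise = 2.0, 0.3

        pts = halton(4096, 2)                       # quasi-MC parameter draws
        log_post = -0.5 * ((surrogate(pts) - obs) / noise) ** 2  # flat prior
        w = np.exp(log_post - log_post.max())
        w /= w.sum()
        print("posterior mean of the parameters:", w @ pts)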

  18. Shielding analysis of proton therapy accelerators: a demonstration using Monte Carlo-generated source terms and attenuation lengths.

    PubMed

    Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng

    2015-05-01

    Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead) were generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and a proper use of the data set were demonstrated through a practical case study, in which shielding analysis on a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates for the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to a quick shielding design and dose evaluation for proton therapy accelerators. PMID:25811254
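
    The point-source line-of-sight model being benchmarked has the simple form H = H0 * exp(-d/lambda) / r^2; a minimal sketch follows (the source term and attenuation length below are placeholders; the paper tabulates such values from Monte Carlo runs per target/shield material and proton energy).

        import numpy as np

        def transmitted_dose(H0, lam, slab_cm, theta_deg, r_m):
            """H = H0 * exp(-d/lam) / r^2, with d the slant path in the shield."""
            d = slab_cm / np.cos(np.radians(theta_deg))   # slant thickness
            return H0 * np.exp(-d / lam) / r_m**2

        # Example: a 1 m slab traversed 30 degrees off-normal, scored 5 m away
        # (H0 and lam are illustrative numbers, not values from the paper).
        print(transmitted_dose(H0=0.2, lam=45.0, slab_cm=100.0,
                               theta_deg=30.0, r_m=5.0))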

  19. Identification of Thyroid Receptor Ant/Agonists in Water Sources Using Mass Balance Analysis and Monte Carlo Simulation

    PubMed Central

    Shi, Wei; Wei, Si; Hu, Xin-xin; Hu, Guan-jiu; Chen, Cu-lan; Wang, Xin-ru; Giesy, John P.; Yu, Hong-xia

    2013-01-01

    Some synthetic chemicals, which have been shown to disrupt thyroid hormone (TH) function, have been detected in surface waters, and people have the potential to be exposed through drinking water. Here, the presence of thyroid-active chemicals and their toxic potential in drinking water sources in the Yangtze River Delta were investigated by use of instrumental analysis combined with a cell-based reporter gene assay. A novel approach using Monte Carlo simulation was developed to evaluate the potential risks of measured concentrations of TH agonists and antagonists and to determine the major contributors to the observed thyroid receptor (TR) antagonist potency. None of the extracts exhibited TR agonist potency, while 12 of 14 water samples exhibited TR antagonistic potency. The most probable observed antagonist equivalents ranged from 1.4 to 5.6 µg di-n-butyl phthalate (DNBP)/L, which posed a potential risk in water sources. Based on the Monte Carlo simulation-related mass balance analysis, DNBP accounted for 64.4% of the entire observed antagonist toxic unit in water sources, while diisobutyl phthalate (DIBP), di-n-octyl phthalate (DNOP) and di-2-ethylhexyl phthalate (DEHP) also contributed. The most probable observed equivalent and most probable relative potency (REP) derived from Monte Carlo simulation are useful for potency comparison and for screening of responsible chemicals. PMID:24204563

  20. Multi-wavelength simulations of atmospheric radiation from Io with a 3-D spherical-shell backward Monte Carlo radiative transfer model

    NASA Astrophysics Data System (ADS)

    Gratiy, Sergey L.; Walker, Andrew C.; Levin, Deborah A.; Goldstein, David B.; Varghese, Philip L.; Trafton, Laurence M.; Moore, Chris H.

    2010-05-01

    Conflicting observations regarding the dominance of either sublimation or volcanism as the source of the atmosphere on Io and disparate reports on the extent of its spatial distribution and the absolute column abundance invite the development of detailed computational models capable of improving our understanding of Io's unique atmospheric structure and origin. Improving upon previous models, Walker et al. (Walker, A.C., Gratiy, S.L., Levin, D.A., Goldstein, D.B., Varghese, P.L., Trafton, L.M., Moore, C.H., Stewart, B. [2009]. Icarus) developed a fully 3-D global rarefied gas dynamics model of Io's atmosphere including both sublimation and volcanic sources of SO2 gas. The fidelity of the model is tested by simulating remote observations at selected wavelength bands and comparing them to the corresponding astronomical observations of Io's atmosphere. The simulations are performed with a new 3-D spherical-shell radiative transfer code utilizing a backward Monte Carlo method. We present: (1) simulations of the mid-infrared disk-integrated spectra of Io's sunlit hemisphere at 19 μm, obtained with TEXES during 2001-2004; (2) simulations of disk-resolved images at Lyman-α obtained with the Hubble Space Telescope (HST), Space Telescope Imaging Spectrograph (STIS) during 1997-2001; and (3) disk-integrated simulations of emission line profiles in the millimeter wavelength range obtained with the IRAM-30 m telescope in October-November 1999. We found that the atmospheric model generally reproduces the longitudinal variation in band depth from the mid-infrared data; however, the best match is obtained when our simulation results are shifted ~30° toward lower orbital longitudes. The simulations of Lyman-α images do not reproduce the mid-to-high latitude bright patches seen in the observations, suggesting that the model atmosphere sustains columns that are too high at those latitudes. The simulations of emission line profiles in the millimeter spectral region support

  1. Monte Carlo simulations for analysis and design of nuclear isomer experiments

    NASA Astrophysics Data System (ADS)

    Winick, Tristan; Goddard, Brian; Carroll, James

    2014-09-01

    The well-established GEANT4 Monte Carlo code was used to analyze the results from a test of bremsstrahlung-induced nuclear isomer switching and to guide development of an experiment to test nuclear excitation by electron capture (NEEC). Bremsstrahlung-induced experiments have historically been analyzed with the assumption that the photon flux of the bremsstrahlung spectrum at a given energy varies linearly with the spectrum's endpoint. The results obtained with GEANT4 suggest that this assumption is not justified; the revised function differs enough to warrant a re-analysis of the experimental data. This re-analysis has been applied to the switching of the unusually long-lived isomer of 180Ta (T1/2 > 10^16 yr), showing that the energies of its switching states differ by about 30 keV compared to those previously identified. GEANT4 was also employed in the design of a NEEC experiment to test the isomer switching of 93Mo via coupled atomic-nuclear processes. Initial work involved modeling a beam of 93Mo ions incident on a volume of 4He gas and observing the charge exchange process and associated emitted fluorescence. The beam and 4He volume, the ionization trails of the electrons liberated from the 4He atoms, and the subsequent fluorescence were successfully simulated; however, it was found that GEANT4 does not currently support ion charge exchange. Future work will entail either the development of the requisite code for GEANT4, or the use of a different model that can accurately simulate ion charge exchange.

  2. Using Monte Carlo techniques and parallel processing for fragmentation analysis of explosive payloads

    SciTech Connect

    LaFarge, R.A.

    1992-01-01

    Sandia National Laboratories (SNL) launched the Los Alamos National Laboratory (LANL) sponsored ZEST flight test program from the SNL Kauai Test Facility (KTF) in the summer of 1991. The ZEST program had about 255 pounds of high explosive (HE) aboard a Talos-Castor launch vehicle. Naturally, such undertakings raise questions about the safety of personnel and the environment in the event of a premature detonation of the HE. These questions pertain not only to KTF and the island of Kauai but to the neighboring islands as well. The ability to determine realistically P{sub I}, the probability of an explosively generated fragment impacting a given exclusion area, is an important factor in the safety analysis of any flight test involving explosive payloads. Once P{sub I} is known, the casualty expectations C{sub E} can be computed based on local demographics. A set of two computer codes was developed to determine P{sub I} based on computed fragment impacts. One of these codes, SAFETIE1 (Sandia Analysis of FragmEnt TrajectorIEs), computes files of trajectory initial conditions generated in a Monte Carlo sense for a set of n explosions each containing m fragments. These initial condition files are then used to compute trajectories in a parallel processing environment using a local area network (LAN) of 40 Sun workstations. This approach saves the equivalent of 40 hours of Cray YMP time. The other code, SAFETIE2, is a postprocessor that uses an AMEER output file generated by SAFETIE1, to determine how many explosions have at least one fragment in a user defined exclusion area. The average number of fragments per explosion in the exclusion area ({bar N}) is also computed (for C{sub E} considerations). 12 refs.

  3. Investing in a robotic milking system: a Monte Carlo simulation analysis.

    PubMed

    Hyde, J; Engel, P

    2002-09-01

    This paper uses Monte Carlo simulation methods to estimate the breakeven value for a robotic milking system (RMS) on a dairy farm in the United States. The breakeven value indicates the maximum amount that could be paid for the robots given the costs of alternative milking equipment and other important factors (e.g., milk yields, prices, length of useful life of technologies). The analysis simulates several scenarios under three herd sizes, 60, 120, and 180 cows. The base-case results indicate that the mean breakeven values are $192,056, $374,538, and $553,671 for each of the three progressively larger herd sizes. These must be compared to the per-unit RMS cost (about $125,000 to $150,000) and the cost of any construction or installation of other equipment that accompanies the RMS. Sensitivity analysis shows that each additional dollar spent on milking labor in the parlor increases the breakeven value by $4.10 to $4.30. Each dollar increase in parlor costs increases the breakeven value by $0.45 to $0.56. Also, each additional kilogram of initial milk production (under a 2x system in the parlor) decreases the breakeven by $9.91 to $10.64. Finally, each additional year of useful life for the RMS increases the per-unit breakeven by about $16,000 while increasing the life of the parlor by 1 yr decreases the breakeven value by between $5,000 and $6,000. PMID:12362453

  4. Hierarchical Monte Carlo modeling with S-distributions: Concepts and illustrative analysis of mercury contamination in King Mackerel

    SciTech Connect

    Voit, E.O.; Balthis, W.L.; Holser, R.A.

    1995-12-31

    The quantitative assessment of environmental contaminants is a complex process. It involves nonlinear models and the characterization of variables, factors, and parameters that are distributed and dependent on each other. Assessments based on point estimates are easy to perform, but since they are unreliable, Monte Carlo simulations have become a standard procedure. Simulations pose two challenges: They require the numerical characterization of parameter distributions and they do not account for dependencies between parameters. This paper offers strategies for dealing with both challenges. The first part discusses the characterization of data with the S-distribution. This distribution offers several advantages, which include simplicity of numerical analysis, flexibility in shape, and easy computation of quantiles. The second part outlines how the S-distribution can be used for hierarchical Monte Carlo simulations. In these simulations the selection of parameter values occurs sequentially, and each choice depends on the parameter values selected before. The method is illustrated with preliminary simulation analyses that are concerned with mercury contamination in king mackerel (Scomberomorus cavalla). It is demonstrated that the results of such hierarchical simulations are generally different from those of traditional Monte Carlo simulations.

  5. Regeneration and Fixed-Width Analysis of Markov Chain Monte Carlo Algorithms

    NASA Astrophysics Data System (ADS)

    Latuszynski, Krzysztof

    2009-07-01

    In the thesis we take the split chain approach to analyzing Markov chains and use it to establish fixed-width results for estimators obtained via Markov chain Monte Carlo procedures (MCMC). Theoretical results include necessary and sufficient conditions in terms of regeneration for central limit theorems for ergodic Markov chains and a regenerative proof of a CLT version for uniformly ergodic Markov chains with E_π f^2 < ∞. To obtain asymptotic confidence intervals for MCMC estimators, strongly consistent estimators of the asymptotic variance are essential. We relax the assumptions required to obtain such estimators. Moreover, under a drift condition, nonasymptotic fixed-width results for MCMC estimators are obtained for a general state space setting (not necessarily compact) and a not necessarily bounded target function f. The last chapter is devoted to the idea of adaptive Monte Carlo simulation and provides convergence results and a law of large numbers for adaptive procedures under a path-stability condition for transition kernels.
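
    One widely used strongly consistent estimator of the asymptotic variance, and the fixed-width interval it yields, is the batch-means construction sketched below on an AR(1) chain standing in for MCMC output (the chain, batch rule and confidence level are illustrative choices, not the thesis' constructions).

        import numpy as np

        rng = np.random.default_rng(9)

        n, rho = 100_000, 0.9
        x = np.empty(n)
        x[0] = 0.0
        for t in range(1, n):                    # correlated chain with mean 0
            x[t] = rho * x[t - 1] + rng.standard_normal()

        nb = int(np.sqrt(n))                     # ~sqrt(n) batches of ~sqrt(n)
        bs = n // nb
        means = x[:nb * bs].reshape(nb, bs).mean(axis=1)
        sigma2 = bs * means.var(ddof=1)          # asymptotic variance estimate
        half_width = 1.96 * np.sqrt(sigma2 / n)  # fixed-width CI half-width
        print("mean %.4f +/- %.4f  (true asymptotic variance %.1f)"
              % (x.mean(), half_width, (1 + rho) / ((1 - rho) * (1 - rho**2))))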

  6. Analysis of single Monte Carlo methods for prediction of reflectance from turbid media

    PubMed Central

    Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan

    2011-01-01

    Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
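
    A reduced version of the rescaling idea, often called "white Monte Carlo", is shown below: one absorption-free simulation stores each escaping photon's total path length, and the reflectance for any absorption coefficient then follows by reweighting each biography with exp(-mua * path). The toy 1-D half-space geometry and all coefficients are assumptions; the full sMC of the paper also rescales for scattering and bins in space and time.

        import numpy as np

        rng = np.random.default_rng(2)

        mus, n_photons = 10.0, 20_000     # scattering coefficient (1/mm), photons
        paths = []
        for _ in range(n_photons):
            z, uz, path = 0.0, 1.0, 0.0   # launched into the medium (z > 0)
            for _ in range(10_000):        # collision cap for this toy geometry
                s = -np.log(rng.random()) / mus
                z += uz * s
                path += s
                if z < 0.0:                # crossed the surface: photon escapes
                    paths.append(path - z / uz)   # trim overshoot past z = 0
                    break
                uz = rng.uniform(-1.0, 1.0)       # crude isotropic rescatter
        paths = np.asarray(paths)

        for mua in (0.0, 0.01, 0.1):       # 1/mm, rescaled after the fact
            R = np.exp(-mua * paths).sum() / n_photons
            print(f"mua = {mua:4.2f} -> diffuse reflectance ~ {R:.3f}")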

  8. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2013-04-01

    The stochastic volatility (SV) model is a volatility model that infers the latent volatility of asset returns. Bayesian inference of the SV model is performed with the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling the volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those returns. We then assess the accuracy of the volatility measurement, using the realized volatility as a proxy for the true volatility, and compare the SV model with the GARCH model, another widely used volatility model. By this accuracy measure, the SV model empirically performs better than the GARCH model.
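
    For readers unfamiliar with the sampler, the following sketch shows one HMC update; a factorized Gaussian stands in for the SV model's posterior over latent volatilities, and the step size and trajectory length are illustrative:

```python
# Sketch of one hybrid (Hamiltonian) Monte Carlo update. A factorized
# Gaussian log-posterior stands in for the SV model's joint posterior over
# the latent volatility vector; step size and path length are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def log_post(x):                    # stand-in for the SV log-posterior
    return -0.5 * np.sum(x * x)

def grad_log_post(x):
    return -x

def hmc_step(x, eps=0.1, n_leapfrog=20):
    p = rng.normal(size=x.shape)                  # fresh momenta
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_post(x_new)     # leapfrog: half kick
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new                      # drift
        p_new += eps * grad_log_post(x_new)       # full kick
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(x_new)     # closing half kick
    # accept/reject on the joint "energy" of (x, p)
    log_alpha = (log_post(x_new) - 0.5 * p_new @ p_new) \
              - (log_post(x) - 0.5 * p @ p)
    return x_new if np.log(rng.random()) < log_alpha else x

x = np.zeros(100)                   # e.g., 100 latent log-volatilities
for _ in range(500):
    x = hmc_step(x)
print(f"sample mean over latents after 500 updates: {x.mean():.3f}")
```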

  9. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    SciTech Connect

    Ivanov, A.; Sanchez, V.; Hoogenboom, J. E.

    2012-07-01

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  10. Monte Carlo investigation of the increased radiation deposition due to gold nanoparticles using kilovoltage and megavoltage photons in a 3D randomized cell model

    SciTech Connect

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2013-07-15

    Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4; a single 400 nm diameter gold cluster randomly positioned in the cytoplasm and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell including cytoplasm, membrane, and nucleus was simulated. Two Geant4 physics lists were tested: the default Livermore and a custom-built Livermore/DNA hybrid physics list. 10⁶ particles were simulated for the 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 μm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra of 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced statistically significant (p < 0.05) changes in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum. Based on simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p

  11. Final Technical Report - Large Deviation Methods for the Analysis and Design of Monte Carlo Schemes in Physics and Chemistry - DE-SC0002413

    SciTech Connect

    Dupuis, Paul

    2014-03-14

    This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.

  12. Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope

    NASA Astrophysics Data System (ADS)

    Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao

    2015-10-01

    The X-ray pulsar telescope (XPT) is a complex optical payload involving optical, mechanical, electrical and thermal disciplines, and multiphysics coupling analysis (MCA) plays an important role in improving its in-orbit performance. However, conventional MCA methods encounter two serious problems when applied to the XPT. First, the energy and reflectivity information of the X-rays cannot be taken into consideration, which misrepresents the physical essence of the XPT. Second, the coupling data cannot be transferred automatically among the different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed based on the Monte Carlo method and total-reflection theory. The main idea, procedures and operational steps of the proposed method are addressed in detail. First, the method takes both the energy and reflectivity information of the X-rays into consideration simultaneously; the thermal-structural coupling equation and the multiphysics coupling analysis model are formulated with the finite element method, and thermal-structural coupling analyses under different working conditions are carried out. Second, the mirror deformations are obtained using a construction geometry function, a polynomial function is fitted to the deformed mirror, and the fitting error is evaluated. Third, the focusing performance of the XPT is evaluated by the RMS of the dispersion spot. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is larger than the others, and the effect of each deformation on the focusing performance has been quantified: the focusing performance under thermal-structural, thermal, and structural deformations degrades by 30.01%, 14.35% and 7.85%, respectively, with RMS dispersion-spot values of 2.9143 mm, 2.2038 mm and 2.1311 mm. As a result, the validity of the proposed method is verified through

  13. Monte Carlo homogenized limit analysis model for randomly assembled blocks in-plane loaded

    NASA Astrophysics Data System (ADS)

    Milani, Gabriele; Lourenço, Paulo B.

    2010-11-01

    A simple rigid-plastic homogenization model is proposed for the limit analysis of in-plane loaded masonry walls constituted by a random assemblage of blocks with variable dimensions. In the model, the blocks constituting a masonry wall are assumed infinitely resistant with a Gaussian distribution of height and length, whereas joints are reduced to interfaces with frictional behavior and limited tensile and compressive strength. Block by block, a representative element of volume (REV) is considered, constituted by a central block interconnected with its neighbors by means of rigid-plastic interfaces. The model is characterized by a few material parameters, is numerically inexpensive and very stable. A sub-class of elementary deformation modes is chosen a priori in the REV, mimicking typical failures due to joint cracking and crushing. Masonry strength domains are obtained by equating the power dissipated in the heterogeneous model with the power dissipated by a fictitious homogeneous macroscopic plate. Owing to the inexpensiveness of the proposed approach, Monte Carlo simulations can be repeated on the REV in order to obtain a stochastic estimate of in-plane masonry strength at different orientations of the bed joints with respect to the external loads, accounting for the statistical variability of block dimensions. Two cases are discussed: the former consists of fully stochastic REV assemblages (obtained by considering random variability of both block height and length), whereas the latter assumes the presence of a horizontal alignment along the bed joints, i.e., allows block height variability only row by row. The case of deterministic block height (quasi-periodic texture) can be obtained as a subclass of this latter case. The homogenized masonry failure surfaces are finally implemented in an upper bound FE limit analysis code for the analysis at collapse of entire in-plane loaded walls. Two cases of engineering practice, consisting of the prediction of the failure

  14. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    USGS Publications Warehouse

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of
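
    The core of the NSMC recipe, reduced to linear algebra, might be sketched as follows; the toy Jacobian, its decaying spectrum, and the retained dimension k are assumptions for illustration:

```python
# Sketch of the null-space Monte Carlo recipe: take the SVD of the
# calibration Jacobian, keep a low-dimensional "solution space", and give
# every random parameter field the calibrated field's solution-space
# component. The toy Jacobian with a decaying spectrum is an assumption.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_par, k = 40, 100, 15      # k = retained solution-space dimension

# Stand-in Jacobian of observations w.r.t. parameters with rapidly decaying
# singular values (the regime in which NSMC truncation is justified).
U, _ = np.linalg.qr(rng.normal(size=(n_obs, n_obs)))
W, _ = np.linalg.qr(rng.normal(size=(n_par, n_par)))
J = U @ np.diag(10.0 ** -np.linspace(0, 8, n_obs)) @ W[:, :n_obs].T

_, _, Vt = np.linalg.svd(J)
V_sol = Vt[:k].T                   # n_par x k solution-space basis

p_cal = rng.normal(size=n_par)     # stand-in calibrated parameter field

def nsmc_realization():
    p_rand = rng.normal(size=n_par)          # field honoring the prior
    proj = lambda p: V_sol @ (V_sol.T @ p)   # solution-space projector
    return p_rand - proj(p_rand) + proj(p_cal)

p = nsmc_realization()
print("obs shift, NSMC field:", np.linalg.norm(J @ (p - p_cal)))
print("obs shift, raw field :", np.linalg.norm(J @ (rng.normal(size=n_par) - p_cal)))
```

    The NSMC field perturbs the simulated observations far less than a raw random field, which is why each realization needs at most a cheap recalibration.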

  15. Source convergence diagnostics using Boltzmann entropy criterion application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    SciTech Connect

    Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.

    2006-07-01

    The measurement of the stationarity of Monte Carlo fission source distributions in k_eff calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We first recall those results, and then generalize them using the expression of Boltzmann entropy, highlighting the gain in terms of the variety of physical problems that can be treated. Finally, we present the results of several OECD/NEA benchmarks obtained with the Tripoli-4 Monte Carlo code, enhanced with this new criterion. (authors)
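
    The Shannon-entropy half of such a diagnostic is easy to sketch; the "transport" update below is a stand-in for a real power-iteration cycle, and the one-dimensional slab binning is an assumption:

```python
# Sketch of an entropy-based source convergence diagnostic: bin the fission
# source each cycle and watch the Shannon entropy of the binned distribution;
# a stationary entropy trace suggests source convergence. The random-walk
# "transport" update is a stand-in for a real power-iteration cycle.
import numpy as np

rng = np.random.default_rng(5)
n_sites, n_bins, n_cycles = 5000, 20, 60

def shannon_entropy(positions):
    counts, _ = np.histogram(positions, bins=n_bins, range=(0.0, 1.0))
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

source = np.full(n_sites, 0.05)          # deliberately bad point-source guess
for cycle in range(n_cycles):
    # stand-in for one fission cycle: jitter sites, clamp to the slab
    source = np.clip(source + 0.08 * rng.normal(size=n_sites), 0.0, 1.0)
    if cycle % 10 == 0:
        print(f"cycle {cycle:3d}   H_src = {shannon_entropy(source):.3f} bits")
```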

  16. Monte Carlo-based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2014-04-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures - for example, by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow for a more detailed analysis of the dynamic behaviour of the soil-plant interface. We coupled two such high-process-oriented independent models and calibrated both models simultaneously. The catchment modelling framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem model of the soil hydraulic properties. CMF was coupled with the plant growth modelling framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo-based generalized likelihood uncertainty estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniform distribution. The model was applied to three sites with different management in Müncheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storages, stems and leaves. The shape parameter of the retention curve n was highly constrained, whereas other parameters of the retention curve showed a large equifinality. We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the
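
    The GLUE loop itself is compact. In the sketch below a two-parameter exponential decay model stands in for the coupled CMF-PMF model, and the 0.7 behavioral threshold is an illustrative assumption:

```python
# Sketch of the GLUE procedure: uniform random parameter draws, a
# likelihood score per run (here Nash-Sutcliffe efficiency), retention of
# "behavioral" sets above a threshold, and percentile prediction bounds.
# The two-parameter decay model stands in for the coupled CMF-PMF model.
import numpy as np

rng = np.random.default_rng(11)
t = np.linspace(0.0, 1.0, 50)
obs = np.exp(-3.0 * t) + 0.02 * rng.normal(size=t.size)   # synthetic data

def model(k, a):                    # toy stand-in for the coupled model
    return a * np.exp(-k * t)

def nse(sim):                       # Nash-Sutcliffe efficiency
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

n_runs = 20_000
params = rng.uniform([0.1, 0.5], [10.0, 1.5], size=(n_runs, 2))
scores = np.array([nse(model(k, a)) for k, a in params])

behavioral = params[scores > 0.7]   # behavioral threshold (an assumption)
sims = np.array([model(k, a) for k, a in behavioral])
lo, hi = np.percentile(sims, [5, 95], axis=0)
print(f"{len(behavioral)} behavioral sets; 90% band width at t=0: {hi[0] - lo[0]:.3f}")
```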

  17. Performance analysis of the Monte Carlo code MCNP4A for photon-based radiotherapy applications

    SciTech Connect

    DeMarco, J.J.; Solberg, T.D.; Wallace, R.E.; Smathers, J.B.

    1995-12-31

    The Los Alamos code MCNP4A (Monte Carlo N-Particle version 4A) is currently used to simulate a variety of problems ranging from nuclear reactor analysis to boron neutron capture therapy. This study is designed to evaluate MCNP4A as the dose calculation system for photon-based radiotherapy applications. A graphical user interface (MCNP Radiation Therapy) has been developed which automatically sets up the geometry and photon source requirements for three-dimensional simulations using Computed Tomography (CT) data. Preliminary results suggest the code is capable of calculating satisfactory dose distributions in a variety of simulated homogeneous and heterogeneous phantoms. The major drawback for this dosimetry system is the amount of time to obtain a statistically significant answer. MCNPRT allows the user to analyze the performance of MCNP4A as a function of material, geometry resolution and MCNP4A photon and electron physics parameters. A typical simulation geometry consists of a 10 MV photon point source incident on a 15 × 15 × 15 cm³ phantom composed of water voxels ranging in size from 10 × 10 × 10 mm³ to 2 × 2 × 2 mm³. As the voxel size is decreased, a larger percentage of time is spent tracking photons through the voxelized geometry as opposed to the secondary electrons. A PRPR Patch file is under development that will optimize photon transport within the simulation phantom specifically for radiotherapy applications. MCNP4A also supports parallel processing capabilities via the Parallel Virtual Machine (PVM) message passing system. A dedicated network of five SUN SPARC2 processors produced a wall-clock speedup of 4.4 based on a simulation phantom containing 5 × 5 × 5 mm³ water voxels. The code was also tested on the 80 node IBM RS/6000 cluster at the Maui High Performance Computing Center (MHPCC). A non-dedicated system of 75 processors produced a wall-clock speedup of 29 relative to one SUN SPARC2 computer.

  18. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  19. Image guided radiation therapy applications for head and neck, prostate, and breast cancers using 3D ultrasound imaging and Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Fraser, Danielle

    In radiation therapy an uncertainty in the delivered dose always exists because anatomic changes are unpredictable and patient specific. Image guided radiation therapy (IGRT) relies on imaging in the treatment room to monitor the tumour and surrounding tissue to ensure their prescribed position in the radiation beam. The goal of this thesis was to determine the dosimetric impact on the misaligned radiation therapy target for three cancer sites due to common setup errors: organ motion, tumour tissue deformation, changes in body habitus, and treatment planning errors. For this purpose, a novel 3D ultrasound system (Restitu, Resonant Medical, Inc.) was used to acquire a reference image of the target in the computed tomography simulation room at the time of treatment planning, to acquire daily images in the treatment room at the time of treatment delivery, and to compare the daily images to the reference image. The measured differences in position and volume between daily and reference geometries were incorporated into Monte Carlo (MC) dose calculations. The EGSnrc (National Research Council, Canada) family of codes was used to model Varian linear accelerators and patient specific beam parameters, as well as to estimate the dose to the target and organs at risk under several different scenarios. After validating the necessity of MC dose calculations in the pelvic region, the impact of interfraction prostate motion, and subsequent patient realignment under the treatment beams, on the delivered dose was investigated. For 32 patients it is demonstrated that using 3D conformal radiation therapy techniques and a 7 mm margin, the prescribed dose to the prostate, rectum, and bladder is recovered within 0.5% of that planned when patient setup is corrected for prostate motion, despite the beams interacting with a new external surface and internal tissue boundaries. In collaboration with the manufacturer, the ultrasound system was adapted from transabdominal imaging to neck

  20. The applicability of certain Monte Carlo methods to the analysis of interacting polymers

    SciTech Connect

    Krapp, D.M. Jr.

    1998-05-01

    The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivot algorithm of Madras and Sokal with Metropolis rejection to locate the phase transition, which is known to occur at β_crit ≈ 0.99, and to recalculate the known value of the critical exponent ν ≈ 0.58 of the system for β = β_crit. Although the pivot-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of β_crit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions can in practice fail to sample phase space accurately and thus not allow accurate estimation of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
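
    The move being diagnosed can be sketched compactly; here a square lattice replaces the paper's hexagonal one, and the contact energy and β value are illustrative:

```python
# Sketch of a pivot move with Metropolis rejection for an interacting
# self-avoiding walk. A square lattice replaces the hexagonal one for
# brevity; the energy is -beta per non-bonded nearest-neighbor contact.
import numpy as np

rng = np.random.default_rng(2)
ROTATIONS = [np.array([[0, -1], [1, 0]]),     # 90, 180, 270 degree rotations
             np.array([[-1, 0], [0, -1]]),
             np.array([[0, 1], [-1, 0]])]

def energy(walk, beta):
    occupied = {tuple(p) for p in walk}
    bonds = set()
    for a, b in zip(walk, walk[1:]):
        bonds.add((tuple(a), tuple(b)))
        bonds.add((tuple(b), tuple(a)))
    contacts = 0
    for p in walk:
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + d[0], p[1] + d[1])
            if q in occupied and (tuple(p), q) not in bonds:
                contacts += 1
    return -beta * contacts / 2               # each contact counted twice

def pivot_step(walk, beta):
    i = rng.integers(1, len(walk) - 1)        # pivot site
    R = ROTATIONS[rng.integers(3)]
    new_tail = walk[i] + (walk[i + 1:] - walk[i]) @ R.T
    new_walk = np.vstack([walk[: i + 1], new_tail])
    if len({tuple(p) for p in new_walk}) < len(new_walk):
        return walk                           # self-intersection: reject
    d_e = energy(new_walk, beta) - energy(walk, beta)
    return new_walk if np.log(rng.random()) < -d_e else walk

walk = np.column_stack([np.arange(50), np.zeros(50, dtype=int)])  # rod start
for _ in range(2000):
    walk = pivot_step(walk, beta=0.5)
print(f"end-to-end distance: {np.linalg.norm(walk[-1] - walk[0]):.1f}")
```

    Counting how often pivot_step returns its argument unchanged exposes the shrinking acceptance fraction the authors report as N grows and β increases.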

  1. Monte Carlo analysis of germanium detector performance in slow positron beam experiments

    NASA Astrophysics Data System (ADS)

    Heikinheimo, J.; Tuominen, R.; Tuomisto, F.

    2016-01-01

    Positron annihilation Doppler broadening spectroscopy is one of the most popular positron annihilation vacancy characterization techniques in experimental materials research. The measurements are often carried out with a slow positron beam setup, which enables depth profiling of the samples. The key measurement devices of Doppler broadening spectroscopy setups are high-purity germanium detectors. Since Doppler broadening spectroscopy is one of the standard techniques in defect characterization, there is a demand to evaluate different kinds of factors that might have an effect on the results. Here we report the results of Monte Carlo simulations of detector response in different geometries and compare the data to experiments.

  2. Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.

    PubMed

    Piels, Molly; Zibar, Darko

    2016-02-01

    The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters measured using this data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation. PMID:26906783
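
    The approach might be sketched as follows for a hypothetical series-RC one-port; the circuit, noise level, flat priors, and proposal scales are all assumptions, not the devices or sampler settings of the paper:

```python
# Sketch of random-walk Metropolis sampling of equivalent-circuit
# parameters from a noisy microwave reflection coefficient. The series-RC
# one-port, noise level, priors, and proposal scales are all assumptions.
import numpy as np

rng = np.random.default_rng(6)
f = np.linspace(0.1e9, 20e9, 40)             # measurement frequencies (Hz)
w = 2.0 * np.pi * f
Z0 = 50.0

def gamma(R, C):                             # series-RC reflection coefficient
    Z = R + 1.0 / (1j * w * C)
    return (Z - Z0) / (Z + Z0)

sigma = 0.01                                 # per-quadrature noise level
data = gamma(30.0, 0.5e-12) + sigma * (rng.normal(size=f.size)
                                       + 1j * rng.normal(size=f.size))

def log_post(theta):
    R, C = theta
    if R <= 0.0 or C <= 0.0:                 # flat priors on the positive axis
        return -np.inf
    return -np.sum(np.abs(gamma(R, C) - data) ** 2) / (2.0 * sigma**2)

theta = np.array([50.0, 1.0e-12])            # deliberately poor start
step = np.array([1.0, 0.02e-12])             # symmetric proposal scales
samples = []
for _ in range(20_000):
    prop = theta + step * rng.normal(size=2)
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
samples = np.array(samples)[5_000:]          # discard burn-in
print(f"R = {samples[:, 0].mean():.2f} +/- {samples[:, 0].std():.2f} ohm")
```

    Unlike a linearized least-squares covariance, the posterior sample captures the skewed, nonlinear dependence of the reflection coefficient on the circuit parameters, which is the failure mode the abstract describes.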

  3. Determination of combined measurement uncertainty via Monte Carlo analysis for the imaging spectrometer ROSIS.

    PubMed

    Lenhard, Karim

    2012-06-20

    To enable traceability of imaging spectrometer data, the associated measurement uncertainties have to be provided reliably. Here a new tool for a Monte-Carlo-type measurement uncertainty propagation for the uncertainties that originate from the spectrometer itself is described. For this, an instrument model of the imaging spectrometer ROSIS is used. Combined uncertainties are then derived for radiometrically and spectrally calibrated data using a synthetic at-sensor radiance spectrum as input. By coupling this new software tool with an inverse modeling program, the measurement uncertainties are propagated for an exemplary water data product. PMID:22722281

  4. Monte Carlo analysis of lobular gas-surface scattering in tubes applied to thermal transpiration

    NASA Technical Reports Server (NTRS)

    Smith, J. D.; Raquet, C. A.

    1972-01-01

    A model of rarefied gas flow in tubes was developed which combines a lobular distribution with diffuse reflection at the wall. The model with Monte Carlo techniques was used to explain previously observed deviations in the free molecular thermal transpiration ratio which suggest molecules can have a greater tube transmission probability in a hot-to-cold direction than in a cold-to-hot direction. The model yields correct magnitudes of transmission probability ratios for helium in Pyrex tubing (1.09 to 1.14), and some effects of wall-temperature distribution, tube surface roughness, tube dimensions, gas temperature, and gas molecular mass.

  5. Analysis and Monte Carlo simulation of near-terminal aircraft flight paths

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1982-01-01

    The flight paths of arriving and departing aircraft at an airport are stochastically represented. Radar data of the aircraft movements are used to decompose the flight paths into linear and curvilinear segments. Variables which describe the segments are derived, and the best fitting probability distributions of the variables, based on a sample of flight paths, are found. Conversely, given information on the probability distribution of the variables, generation of a random sample of flight paths in a Monte Carlo simulation is discussed. Actual flight paths at Dulles International Airport are analyzed and simulated.

  6. Monte Carlo variance reduction

    NASA Technical Reports Server (NTRS)

    Byrn, N. R.

    1980-01-01

    Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.

  7. Analysis and modeling of localized heat generation by tumor-targeted nanoparticles (Monte Carlo methods)

    NASA Astrophysics Data System (ADS)

    Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan

    2016-04-01

    We investigated tumor-targeted nanoparticles that influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (cosine of the scattering angle), the heat generated in the tumor area reaches a critical state that can burn the targeted tumor. The amount of heat generated by inserting smart agents, owing to the surface plasmon resonance, will be remarkably high. The light-matter interactions and the trajectories of incident photons upon targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical approach with which photon trajectories through the simulation region can be tracked accurately.
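
    The photon-trajectory half of such a simulation can be sketched as below; the 2D slab, the coefficients, and isotropic scattering in place of the Mie phase function are simplifying assumptions:

```python
# Sketch of the photon-trajectory Monte Carlo: photons random-walk through
# a 2D tissue slab whose central disc (the "targeted tumor") has elevated
# absorption, and absorbed weight is tallied as locally generated heat.
# Coefficients are illustrative; isotropic scattering replaces the Mie
# phase function for brevity.
import numpy as np

rng = np.random.default_rng(9)
MU_S = 10.0                                   # scattering coefficient (1/mm)

def mu_a(pos):                                # absorption: higher in target
    return 1.0 if np.hypot(pos[0] - 2.5, pos[1] - 2.5) < 0.5 else 0.1

heat = np.zeros((50, 50))                     # 5 mm x 5 mm tally grid
for _ in range(5000):
    pos, ang, weight = np.array([2.5, 0.0]), np.pi / 2, 1.0
    while weight > 1e-3:
        mu_t = MU_S + mu_a(pos)
        pos = pos + rng.exponential(1.0 / mu_t) * np.array([np.cos(ang), np.sin(ang)])
        if not (0.0 <= pos[0] < 5.0 and 0.0 <= pos[1] < 5.0):
            break                             # photon left the tissue
        absorbed = weight * mu_a(pos) / mu_t  # deposit part of the weight
        heat[int(pos[1] * 10), int(pos[0] * 10)] += absorbed
        weight -= absorbed
        ang = rng.uniform(0.0, 2.0 * np.pi)   # isotropic rescatter

ratio = heat[20:30, 20:30].mean() / heat.mean()
print(f"mean heat in target region / mean over slab: {ratio:.1f}")
```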

  8. Analysis of Correlated Coupling of Monte Carlo Forward and Adjoint Histories

    SciTech Connect

    Ueki, Taro; Hoogenboom, J.E.; Kloosterman, J. L.

    2001-02-15

    In Monte Carlo correlated coupling, forward and adjoint particle histories are initiated in exactly opposite directions at an arbitrarily placed surface between a physical source and a physical detector. It is shown that this coupling calculation can become more efficient than standard forward calculations. In many cases, the basic form of correlated coupling is less efficient than standard forward calculations. This inherent inefficiency can be overcome by applying a black absorber perturbation to either the forward or the adjoint problem and by processing the product of batch averages as one statistical entity. The usage of the black absorber is based on the invariance of the response flow integral with a material perturbation in either the physical detector side volume in the forward problem or the physical source side volume in the adjoint problem. The batch-average product processing makes use of a quadratic increase of the nonzero coupled-score probability. All the developments have been done in such a way that improved efficiency schemes available in widely distributed Monte Carlo codes can be applied to both the forward and adjoint simulations. Also, the physical meaning of the black absorber perturbation is interpreted based on surface crossing and is numerically validated. In addition, the immediate reflection at the intermediate surface with a controlled direction change is investigated within the invariance framework. This approach can be advantageous for a void streaming problem.

  9. Sensitivity analysis of an asymmetric Monte Carlo beam model of a Siemens Primus accelerator.

    PubMed

    Schreiber, Eric C; Sawkey, Daren L; Faddegon, Bruce A

    2012-01-01

    The assumption of cylindrical symmetry in radiotherapy accelerator models can pose a challenge for precise Monte Carlo modeling. This assumption makes it difficult to account for measured asymmetries in clinical dose distributions. We have performed a sensitivity study examining the effect of varying symmetric and asymmetric beam and geometric parameters of a Monte Carlo model for a Siemens PRIMUS accelerator. The accelerator and dose output were simulated using modified versions of BEAMnrc and DOSXYZnrc that allow lateral offsets of accelerator components and lateral and angular offsets for the incident electron beam. Dose distributions were studied for 40 × 40 cm² fields. The resulting dose distributions were analyzed for changes in flatness, symmetry, and off-axis ratio (OAR). The electron beam parameters having the greatest effect on the resulting dose distributions were found to be electron energy and angle of incidence, as high as 5% for a 0.25° deflection. Electron spot size and lateral offset of the electron beam were found to have a smaller impact. Variations in photon target thickness were found to have a small effect. Small lateral offsets of the flattening filter caused significant variation to the OAR. In general, the greatest sensitivity to accelerator parameters could be observed for higher energies and off-axis ratios closer to the central axis. Lateral and angular offsets of beam and accelerator components have strong effects on dose distributions, and should be included in any high-accuracy beam model. PMID:22402376

  10. GUINEVERE experiment: Kinetic analysis of some reactivity measurement methods by deterministic and Monte Carlo codes

    SciTech Connect

    Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.

    2012-07-01

    The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)

  11. Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, S.K.; Battnig, S.R.; Norbury, J.W.; Singleterry, R.C.

    2009-01-01

    Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.

  12. Monte Carlo based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2013-12-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such high-process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. the coefficient of determination (R2), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matters of roots, storages, stems and leaves. Best parameter sets resulted in NSE of 0.57 for the simulation of soil moisture across all three sites. The

  13. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such high-process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. the coefficient of determination (R2), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matters of roots, storages, stems and leaves. Best parameter sets resulted in NSE of 0.57 for the simulation of soil moisture across all three sites. The shape

  14. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    SciTech Connect

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-09-15

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO₂)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO₂), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO₂ were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO₂ distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the

  15. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. PMID:25159436
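
    The mechanics of the simulation are simple to sketch; the score scale and the three pmfs below are illustrative stand-ins for the elicited ones:

```python
# Sketch of the elicitation simulation: each expert behavior is encoded as
# a probability mass function over a discrete score scale, scores are drawn
# repeatedly, and the spread measures robustness. The pmfs below are
# illustrative stand-ins for the elicited ones in the paper.
import numpy as np

rng = np.random.default_rng(4)
scores = np.array([1, 2, 3, 4, 5])            # e.g., a severity scale

pmfs = {
    "confident":           [0.02, 0.03, 0.05, 0.30, 0.60],
    "reasonably doubting": [0.05, 0.10, 0.20, 0.35, 0.30],
    "irresolute":          [0.20, 0.20, 0.20, 0.20, 0.20],
}

for behavior, pmf in pmfs.items():
    draws = rng.choice(scores, size=100_000, p=pmf)
    print(f"{behavior:20s} mean = {draws.mean():.2f}  sd = {draws.std():.2f}")
```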

  16. Asymptotic analysis of the spatial discretization of radiation absorption and re-emission in Implicit Monte Carlo

    SciTech Connect

    Densmore, Jeffery D.

    2011-02-20

    We perform an asymptotic analysis of the spatial discretization of radiation absorption and re-emission in Implicit Monte Carlo (IMC), a Monte Carlo technique for simulating nonlinear radiative transfer. Specifically, we examine the approximation of absorption and re-emission by a spatially continuous artificial-scattering process and either a piecewise-constant or piecewise-linear emission source within each spatial cell. We consider three asymptotic scalings representing (i) a time step that resolves the mean-free time, (ii) a Courant limit on the time-step size, and (iii) a fixed time step that does not depend on any asymptotic scaling. For the piecewise-constant approximation, we show that only the third scaling results in a valid discretization of the proper diffusion equation, which implies that IMC may generate inaccurate solutions with optically large spatial cells if time steps are refined. However, we also demonstrate that, for a certain class of problems, the piecewise-linear approximation yields an appropriate discretized diffusion equation under all three scalings. We therefore expect IMC to produce accurate solutions for a wider range of time-step sizes when the piecewise-linear instead of piecewise-constant discretization is employed. We demonstrate the validity of our analysis with a set of numerical examples.

  17. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES Beta

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  18. Qualitative analysis of irregular fields delivered with dual electron multileaf collimator: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Inyang, Samuel Okon; Chamberlain, Alan

    2016-03-01

    The use of a dual electron multileaf collimator (eMLC) to collimate therapeutic electron beams without the use of cutouts has previously been shown to be feasible. Further Monte Carlo simulations were performed in this study to verify the nature and appearance of the isodose distributions in a water phantom for irregular electron beams delivered by the eMLC. The electron fields used in this study were selected to reflect those used in electron beam therapy. The results show that the isodose distribution in a water phantom obtained from the simulation of irregular electron beams through the eMLC conforms to the pattern of the eMLC used in the delivery of the beam. It is therefore concluded that the dual eMLC can deliver isodose distributions reflecting the pattern of the eMLC field used in the delivery of the beam.

  19. Combined EPMA, FIB and Monte Carlo simulation: a versatile tool for quantitative analysis of multilayered structures

    NASA Astrophysics Data System (ADS)

    Richter, S.; Pinard, P. T.

    2016-02-01

    Electron probe microanalysis and focussed ion beam milling are combined to improve the sensitivity and applicability of depth profiling quantification. With the nanoscale milling capabilities of the ion beam, very shallow bevels are milled by using a special preparation procedure to reduce any curtaining effect and minimize Ga ions implantation. A Ni/Cr multilayered specimen is used to evaluate the depth resolution. The best results are obtained by a well-focussed electron beam offered by a field-emission microprobe. A new evaluation algorithm is presented to quantify the structure in terms of mass thicknesses or if the density is known in terms of real thicknesses. The quantification procedure is based on Monte Carlo simulations where calculated k-ratios (calibrated X-ray intensities) are compared to the experimental ones to find the optimal structure. In comparison with an ion milled cross-section, the proposed bevel technique is more sensitive and provides more information about the material's structure.

  20. Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation

    PubMed Central

    Ding, Chizhu; Shi, Shuning; Chen, Jianjun; Wei, Wei; Tan, Zuojun

    2015-01-01

    The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be separately considered with different parameters to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity were discussed. Accurate MC simulations may result in better insight into light propagation in stone fruit and aid in achieving the optimal fruit quality inspection without extensive experimental measurements. PMID:26469695

  1. A Monte Carlo analysis of the liquid xenon TPC as gamma ray telescope

    NASA Technical Reports Server (NTRS)

    Aprile, E.; Bolotnikov, A.; Chen, D.; Mukherjee, R.

    1992-01-01

    Extensive Monte Carlo modeling of a coded aperture x ray telescope based on a high resolution liquid xenon TPC has been performed. Results on efficiency, background reduction capability and source flux sensitivity are presented. We discuss in particular the development of a reconstruction algorithm for events with multiple interaction points. From the energy and spatial information, the kinematics of Compton scattering is used to identify and reduce background events, as well as to improve the detector response in the few MeV region. Assuming a spatial resolution of 1 mm RMS and an energy resolution of 4.5 percent FWHM at 1 MeV, the algorithm is capable of reducing by an order of magnitude the background rate expected at balloon altitude, thus significantly improving the telescope sensitivity.

  2. Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Boyd, Iain D.

    1991-01-01

    A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator and the rate transition is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.

  3. Detecting Sudden Gains during Treatment of Major Depressive Disorder: Cautions from a Monte Carlo Analysis

    PubMed Central

    Vittengl, Jeffrey R.; Clark, Lee Anna; Thase, Michael E.; Jarrett, Robin B.

    2015-01-01

    Sudden gains are relatively large, quick, stable drops in symptom scores during treatment of depression that may (or may not) signal important therapeutic events. We review what is known and unknown currently about the prevalence, causes, and outcomes of sudden gains. We argue that valid identification of sudden gains (vs. random fluctuations in symptoms and gradual gains) is prerequisite to their understanding. In Monte Carlo simulations, three popular criterion sets showed inadequate power to detect sudden gains and many false positives due to (a) testing multiple intervals for sudden gains, (b) finite retest reliability of symptom measures, and (c) failure to account for gradual gains. Sudden gains in published clinical datasets appear similar in form and frequency to false positives in the simulations. We discuss the need to develop psychometrically sound methods to detect sudden gains and to differentiate sudden from random and gradual gains. PMID:26478724
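
    The multiple-interval false-positive mechanism can be demonstrated directly; the trajectory model, noise level, and single-criterion cutoff below are illustrative assumptions rather than the published sudden-gains criteria:

```python
# Sketch of the false-positive concern: simulate purely gradual, noisy
# symptom improvement (no true sudden gains), then count how often a
# simplified "drop of at least 7 points between adjacent sessions"
# criterion fires anyway. Trajectory model and cutoff are illustrative
# assumptions, not the published criteria.
import numpy as np

rng = np.random.default_rng(8)
n_patients, n_sessions = 10_000, 16
sessions = np.arange(n_sessions)

true_score = 40.0 - 1.5 * sessions            # gradual improvement only
noise_sd = 4.0                                # finite retest reliability
observed = true_score + noise_sd * rng.normal(size=(n_patients, n_sessions))

drops = observed[:, :-1] - observed[:, 1:]    # between-session changes
flagged = (drops >= 7.0).any(axis=1)          # "sudden gain" detected?
print(f"false-positive rate across patients: {flagged.mean():.1%}")
```

    Because the criterion is tested at every one of the fifteen between-session intervals, even modest measurement noise on top of a smooth trend produces a large share of spurious detections.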

  4. Analysis of aerial survey data on Florida manatee using Markov chain Monte Carlo.

    PubMed

    Craig, B A; Newton, M A; Garrott, R A; Reynolds, J E; Wilcox, J R

    1997-06-01

    We assess population trends of the Atlantic coast population of Florida manatee, Trichechus manatus latirostris, by reanalyzing aerial survey data collected between 1982 and 1992. To do so, we develop an explicit biological model that accounts for the method by which the manatees are counted, the mammals' movement between surveys, and the behavior of the population total over time. Bayesian inference, enabled by Markov chain Monte Carlo, is used to combine the survey data with the biological model. We compute marginal posterior distributions for all model parameters and predictive distributions for future counts. Several conclusions, such as a decreasing population growth rate and low sighting probabilities, are consistent across different prior specifications. PMID:9192449

  5. Photoelectric Franck-Hertz experiment and its kinetic analysis by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Magyar, Péter; Korolov, Ihor; Donkó, Zoltán

    2012-05-01

    The electrical characteristics of a photoelectric Franck-Hertz cell are measured in argon gas over a wide range of pressure, covering conditions where elastic collisions play an important role, as well as conditions where ionization becomes significant. Photoelectron pulses are induced by the fourth harmonic UV light of a diode-pumped Nd:YAG laser. The electron kinetics, which is far more complex compared to the naive picture of the Franck-Hertz experiment, is analyzed via Monte Carlo simulation. The computations provide the electrical characteristics of the cell, the energy and velocity distribution functions, and the transport parameters of the electrons, as well as the rate coefficients of different elementary processes. A good agreement is obtained between the cell's measured and calculated electrical characteristics, the peculiarities of which are understood by the simulation studies.

  6. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    PubMed

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a very multi-typed and common sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, types 6 and 11, are causing about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data. PMID

  7. Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT

    NASA Astrophysics Data System (ADS)

    Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang

    2015-03-01

    In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ±0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (<2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was 4 cm longer than the true value, dose errors of up to 160% were found. The results indicate the level of accuracy to which each input parameter must be determined in order to obtain accurate organ dose results.

  8. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  9. Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    SciTech Connect

    Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V

    2006-12-31

    Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)

  10. Monte Carlo optimization of sample dimensions of an 241Am-Be source-based PGNAA setup for water rejects analysis

    NASA Astrophysics Data System (ADS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.

    2007-07-01

    The present paper describes the optimization of sample dimensions of a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ analysis of environmental water rejects. The optimal dimensions have been achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process has been performed for the proposed preliminary setup with measurements of the thermal neutron flux by the activation technique, using indium foils, bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples as chlorine and organic matter concentrations change. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.

  11. Effects of flight instrumentation errors on the estimation of aircraft stability and control derivatives. [including Monte Carlo analysis

    NASA Technical Reports Server (NTRS)

    Bryant, W. H.; Hodge, W. F.

    1974-01-01

    An error analysis program based on an output error estimation method was used to evaluate the effects of sensor and instrumentation errors on the estimation of aircraft stability and control derivatives. A Monte Carlo analysis was performed using simulated flight data for a high-performance military aircraft, a large commercial transport, and a small general aviation aircraft for typical cruise flight conditions. The effects of varying the input sequence and combinations of the sensor and instrumentation errors were investigated. The results indicate that both the parameter accuracy and the corresponding measurement trajectory fit error can be significantly affected. Of the error sources considered, instrumentation lags and control measurement errors were found to be most significant.

  12. Assessment of bioethanol yield by S. cerevisiae grown on oil palm residues: Monte Carlo simulation and sensitivity analysis.

    PubMed

    Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah

    2015-01-01

    Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with palm oil mill effluent (POME) added as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in their properties, and fermentation performance might change when the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was therefore assessed using Monte Carlo simulation (stochastic variables) to determine the probability distributions arising from fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. A sensitivity analysis was also performed to evaluate the impact of each kinetic parameter on fermentation performance. Bioethanol fermentation was found to depend strongly on the growth of the tested yeast. PMID:25459850
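
    A minimal sketch of this kind of Monte Carlo uncertainty propagation follows; the input distributions and the toy yield response are invented for illustration (only the nominal 0.464 g/g yield and the 100,000-sample count come from the abstract).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000   # same Monte Carlo sample count as the study

# Hypothetical input distributions (illustrative assumptions).
mu_max = rng.normal(0.30, 0.02, n)     # 1/h, maximum specific growth rate
Y_xs = rng.normal(0.45, 0.03, n)       # g biomass / g glucose
Y_ps = rng.normal(0.464, 0.02, n)      # g ethanol / g glucose

# Toy response linking the kinetic inputs to the realized yield.
yield_ps = Y_ps * np.clip(mu_max / 0.30, 0.9, 1.1) * np.clip(Y_xs / 0.45, 0.95, 1.05)

lo, hi = np.percentile(yield_ps, [2.5, 97.5])
print(f"Y_P/S: mean = {yield_ps.mean():.3f} g/g, 95% interval = ({lo:.3f}, {hi:.3f})")

# Crude sensitivity screen: correlation of each input with the output.
for name, p in (("mu_max", mu_max), ("Y_xs", Y_xs), ("Y_ps", Y_ps)):
    print(f"corr({name}, yield) = {np.corrcoef(p, yield_ps)[0, 1]:.2f}")
```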

  13. Markov chain Monte Carlo based analysis of post-translationally modified VDAC gating kinetics

    PubMed Central

    Tewari, Shivendra G.; Zhou, Yifan; Otto, Bradley J.; Dash, Ranjan K.; Kwok, Wai-Meng; Beard, Daniel A.

    2015-01-01

    The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs). Yet the nature and effect of these modifications is not understood. Herein, single channel currents of wild-type, nitrosated, and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. The developed method describes three distinct conducting states (open, half-open, and closed) of VDAC activity. Lipid bilayer experiments are also performed to record single-channel VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from a relatively closed state to an open state. Model analyses of the nitrosated data suggest that faster reaction of nitric oxide with the Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance. PMID:25628567
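
    The gating model underlying such an analysis is a continuous-time Markov chain; the sketch below simulates a three-state (closed/half-open/open) channel with a Gillespie-style algorithm and recovers the stationary occupancy. The rate matrix is an illustrative assumption, not the fitted VDAC parameters.

```python
import numpy as np

# Illustrative generator matrix (1/ms) for closed <-> half-open <-> open;
# these rates are assumptions, not the fitted VDAC values.
Q = np.array([[-0.5,  0.5,  0.0],    # from closed
              [ 0.3, -0.7,  0.4],    # from half-open
              [ 0.0,  0.2, -0.2]])   # from open

def simulate(t_end=2000.0, state=2, seed=0):
    """Gillespie simulation: exponential dwell in the current state, then a
    jump chosen with probability proportional to the outgoing rates."""
    rng = np.random.default_rng(seed)
    t, times, states = 0.0, [0.0], [state]
    while t < t_end:
        t += rng.exponential(1.0 / -Q[state, state])   # dwell time
        out = Q[state].copy()
        out[state] = 0.0
        state = rng.choice(3, p=out / out.sum())
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

times, states = simulate()
dwell = np.diff(times)
occupancy = np.bincount(states[:-1], weights=dwell, minlength=3) / dwell.sum()
print("occupancy (closed, half-open, open):", occupancy.round(3))
```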

  14. Time series analysis and Monte Carlo methods for eigenvalue separation in neutron multiplication problems

    SciTech Connect

    Nease, Brian R. Ueki, Taro

    2009-12-10

    A time series approach has been applied to the nuclear fission source distribution generated by Monte Carlo (MC) particle transport in order to calculate the non-fundamental mode eigenvalues of the system. The novel aspect is the combination of the general technical principle of projection pursuit for multivariate data with the neutron multiplication eigenvalue problem in the nuclear engineering discipline. Proof is thoroughly provided that the stationary MC process is linear to a first-order approximation and that it transforms into one-dimensional autoregressive processes of order one (AR(1)) via the automated choice of projection vectors. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern MC codes for nuclear criticality calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. This time series approach was tested for a variety of problems, including multi-dimensional ones. Numerical results show that the time series approach has strong potential for three-dimensional whole-reactor-core problems. The eigenvalue ratio can be updated in an on-the-fly manner without storing the nuclear fission source distributions from all previous iteration cycles for the mean subtraction. Lastly, the effects of degenerate eigenvalues are investigated and solutions are provided.
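
    The operational core of the method is simple: the projected fission-source series behaves as an AR(1) process whose lag-1 autocorrelation equals the eigenvalue ratio k1/k0. Below is a toy reconstruction with a synthetic AR(1) series standing in for the Monte Carlo source iterates; the coefficient and series length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the projected fission-source time series: an AR(1)
# process whose coefficient plays the role of the dominance ratio k1/k0.
true_ratio = 0.85
x = np.zeros(5000)
for i in range(1, x.size):
    x[i] = true_ratio * x[i - 1] + rng.normal()

# Lag-1 autocorrelation of the mean-subtracted series estimates k1/k0.
xc = x - x.mean()
rho1 = (xc[1:] @ xc[:-1]) / (xc @ xc)
k0 = 1.0  # fundamental eigenvalue, available from the ordinary MC criticality run
print(f"estimated k1 = rho1 * k0 = {rho1 * k0:.3f} (true value {true_ratio})")
```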

  15. Monte Carlo analysis of a monolithic interconnected module with a back surface reflector

    SciTech Connect

    Ballinger, C.T.; Charache, G.W.; Murray, C.S.

    1998-10-01

    Recently, the photon Monte Carlo code RACER-X was modified to include wavelength-dependent absorption coefficients and indices of refraction. This work was done to extend the code's applicability to a wider range of problems. These new features make RACER-X useful for analyzing devices like monolithic interconnected modules (MIMs), which have etched surface features and incorporate a back surface reflector (BSR) for spectral control. A series of calculations was performed on various MIM structures to determine the impact that surface features and component reflectivities have on spectral utilization. The traditional concern of cavity photonics is replaced with intra-cell photonics in the MIM design. As in the cavity photonic problems previously discussed, small changes in optical properties and/or geometry can lead to large changes in spectral utilization. The calculations show that seemingly innocuous surface features (e.g., trenches and grid lines) can significantly reduce the spectral utilization due to the non-normal incident photon flux. Photons that enter the device through a trench edge are refracted onto a trajectory from which they will not escape. This leads to a reduction in the number of reflected below-bandgap photons that return to the radiator, and reduces the spectral utilization. In addition, trenches expose a lateral conduction layer in this particular series of calculations, which increases the absorption of above-bandgap photons in inactive material.

  16. Gamma-ray spectrometry analysis of pebble bed reactor fuel using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Chen, Jianwei; Hawari, Ayman I.; Zhao, Zhongxiang; Su, Bingjing

    2003-06-01

    Monte Carlo simulations were used to study the gamma-ray spectra of pebble bed reactor fuel at various levels of burnup. A fuel depletion calculation was performed using the ORIGEN2.1 code, which yielded the gamma-ray source term that was introduced into the input of an MCNP4C simulation. The simulation assumed the use of a 100% efficient high-purity coaxial germanium (HPGe) detector, a pebble placed at a distance of 100 cm from the detector, and accounted for Gaussian broadening of the gamma-ray peaks. Previously, it was shown that 137Cs, 60Co (introduced as a dopant), and 134Cs are the relevant burnup indicators. The results show that the 662 keV line of 137Cs lies in close proximity to the intense 658 keV line of 97Nb, which results in spectral interference between the lines. However, the 1333 keV line of 60Co and selected 134Cs lines (e.g., at 605 keV) are free from spectral interference, which enhances the possibility of their utilization as relative burnup indicators.

  17. Monte Carlo analysis on probe performance for endoscopic diffuse optical spectroscopy of tubular organ

    NASA Astrophysics Data System (ADS)

    Zhang, Yunyao; Zhu, Jingping; Cui, Weiwen; Nie, Wei; Li, Jie; Xu, Zhenghong

    2015-03-01

    We investigated the performance of endoscopic diffuse optical spectroscopy probes with circular or linear fiber arrangements for tubular organ cancer detection. Probe performance was measured by penetration depth. A Monte Carlo model was employed to simulate light transport in a hollow cylinder that both emits and receives light at the inner boundary of the sample. The influence of fiber configurations and tissue optical properties on penetration depth was simulated. The results show that under the same conditions, probes with a circular fiber arrangement penetrate deeper than probes with a linear fiber arrangement, and the difference between the two probes' penetration depths decreases as the source-detector (SD) separation and the radius of the probe increase. The penetration depths and their differences both decrease with an increase in the absorption coefficient and the reduced scattering coefficient, but remain constant with changes in the anisotropy factor. Moreover, the penetration depth is more affected by the absorption coefficient than by the reduced scattering coefficient. It follows that in the NIR band, probes with linear fiber arrangements are more appropriate for diagnosing superficial cancers, whereas probes with circular fiber arrangements should be chosen for diagnosing adenocarcinoma; in the UV-VIS band, however, the two probe configurations perform nearly the same. These results are useful in guiding endoscopic diffuse optical spectroscopy-based diagnosis of esophageal, cervical, colorectal and other cancers.
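
    A stripped-down version of such a photon transport simulation is sketched below for a semi-infinite planar medium rather than the paper's hollow-cylinder geometry; the optical coefficients, weight cutoff, and depth metric are illustrative assumptions, but it reproduces the qualitative trend that penetration shrinks as absorption grows.

```python
import numpy as np

def mean_max_depth(mu_a=0.1, mu_s=10.0, g=0.9, n_photons=2000, seed=0):
    """Mean maximum depth reached by surviving photon weight, launched
    normally into a semi-infinite medium (coefficients in 1/cm)."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    total = 0.0
    for _ in range(n_photons):
        z, uz, w, zmax = 0.0, 1.0, 1.0, 0.0
        while w > 1e-2:                               # crude weight cutoff
            z += uz * rng.exponential(1.0 / mu_t)     # free flight to next event
            if z < 0.0:                               # escaped back out
                break
            zmax = max(zmax, z)
            w *= mu_s / mu_t                          # absorption weighting
            # Henyey-Greenstein sampling of the scattering angle cosine.
            xi = rng.uniform()
            if g == 0.0:
                cos_t = 2.0 * xi - 1.0
            else:
                tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
                cos_t = (1.0 + g * g - tmp * tmp) / (2.0 * g)
            sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
            phi = 2.0 * np.pi * rng.uniform()
            uz = uz * cos_t - np.sqrt(max(0.0, 1.0 - uz * uz)) * sin_t * np.cos(phi)
        total += zmax
    return total / n_photons

for mu_a in (0.05, 0.1, 0.5):   # penetration shrinks as absorption grows
    print(f"mu_a = {mu_a}/cm: mean max depth = {mean_max_depth(mu_a=mu_a):.2f} cm")
```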

  18. Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods

    SciTech Connect

    Hehr, Brian Douglas

    2014-11-25

    The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
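
    The KMC machinery itself is compact; below is a minimal residence-time (n-fold way) sketch with a static, invented event table. A real defect-annealing KMC would recompute the rate catalogue after every event; the attempt frequencies and barriers here are placeholders, not the code's actual defect parameters.

```python
import numpy as np

kB = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0       # temperature, K

# Illustrative thermally activated events: (attempt frequency 1/s, barrier eV).
events = {
    "vacancy hop":           (1e13, 0.45),
    "interstitial hop":      (1e13, 0.30),
    "vacancy-dopant pairing": (1e12, 0.60),
}

def kmc(n_steps=10, seed=0):
    """Residence-time KMC: pick an event with probability proportional to its
    rate, then advance the clock by an exponential draw at the total rate."""
    rng = np.random.default_rng(seed)
    names = list(events)
    rates = np.array([nu * np.exp(-Ea / (kB * T)) for nu, Ea in events.values()])
    total = rates.sum()
    t = 0.0
    for _ in range(n_steps):
        i = rng.choice(len(names), p=rates / total)
        t += rng.exponential(1.0 / total)
        print(f"t = {t:.3e} s: {names[i]}")

kmc()
```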

  19. IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng

    2014-11-01

    The IR radiation characteristics of an aeroengine are an important basis for the IR stealth design and anti-stealth detection of aircraft, and the importance of aircraft IR stealth increases with the development of IR imaging sensor technology. An effort is presented to explore target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM) combined with commercial CFD software. Flow and IR radiation characteristics of an aeroengine exhaust system are investigated by developing a full-size geometry model based on the actual parameters, using a flow-IR integrated structured mesh, obtaining the engine performance parameters as the inlet boundary conditions of the mixer section, and constructing a numerical simulation model of the IR radiation characteristics of the engine exhaust system based on RMCM. With the above models, the IR radiation characteristics of the aeroengine exhaust system are given, focusing on IR spectral radiance imaging in the typical detection band at an azimuth of 20°. The results show that: (1) at small azimuth angles, the IR radiation comes mainly from the center cone among all hot parts; near azimuth 15°, the mixer makes the biggest radiation contribution, while the center cone, turbine and flame stabilizer contribute roughly equally; (2) the main radiation components and their spatial distribution differ across the spectrum, with CO2 absorbing and emitting strongly at 4.18, 4.33 and 4.45 microns, and H2O at 3.0 and 5.0 microns.

  20. Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods

    DOE PAGESBeta

    Hehr, Brian Douglas

    2014-11-25

    The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.

  1. Dose Modification Factor Analysis of Multi-Lumen Brachytherapy Applicator with Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Williams, Eric Alan

    Multi-lumen applicators like the Contura (SenoRx, Inc.) are used in partial breast irradiation (PBI) brachytherapy in instances where asymmetric dose distributions are desired, for example, when the applicator surface-to-skin thickness is small (<7 mm). In these instances, the air outside the patient and the lung act as a poor scattering medium, scattering less dose back into the breast tissue, which affects the dose distribution. Many commercial treatment planning systems do not correct for tissue heterogeneity, which results in inaccuracies in the planned dose distribution. This deviation has been quantified as the dose modification factor (DMF), equal to the ratio of the dose rate at 1 cm beyond the applicator surface in homogeneous medium to the dose rate at 1 cm in heterogeneous medium. This investigation models the Contura applicator with the Monte Carlo N-Particle code (MCNP, Los Alamos National Labs), determines a DMF through simulation, and correlates it with previous measurements. Taking all geometrical considerations into account, an accurate model of the Contura balloon applicator was created in MCNP. This model was used to run simulations of symmetric and asymmetric plans. The dose modification factor was found to depend on the simulated water phantom geometry, with a cuboid geometry yielding a maximum DMF of 1.0664. The same measurements taken using a spherical water phantom gave a DMF of 1.1221. It was also seen that the difference in DMF between symmetric and asymmetric plans using the Contura applicator is minimal.

  2. Monte Carlo and deterministic analysis of a University of Virginia BNCT facility

    NASA Astrophysics Data System (ADS)

    Burns, Thomas D.; Hubbard, Thomas R.; Rydin, R. A.; Reynolds, A. B.

    1997-02-01

    A comprehensive effort is underway to design a high-performance BNCT facility at the 2 MW University of Virginia research reactor. This endeavor includes detailed core criticality and leakage calculations, coupled neutron/photon transport analyses, and dosimetry computations. Detailed geometries are modeled with MCNP for both the core and filter, as well as for phantom dosimetry studies, whereas the symmetric, deep-penetration problem of the filter/collimator design is solved with the DORT code. Final filter configurations are evaluated with both stochastic and deterministic methods, and the results are compared and synthesized. The complementary use of these two computational methods yields a broader insight into the problem than can be achieved by using either method alone. Calculations show that certain adjustments to the core configuration increase the leakage to the filter, thereby improving beam performance. Increased performance is also achieved by strategic shaping, placement, and optimization of neutron reflectors and filtering materials in the beam tube. Results of numerous optimization studies, which led to the final beam design, are presented. Ongoing work includes integration of recently developed treatment planning codes from INEL into the dosimetry analyses. New methods of coupling discrete ordinates and adjoint Monte Carlo calculations for medical applications are also under development.

  3. MicroExposure Monte Carlo analysis modeling PCB exposures through fish ingestion from the Upper Hudson River

    SciTech Connect

    Ebert, E.S.; Price, P.S.; McCrodden, J.L.; Ducey, J.S.; Keenan, R.E.

    1995-12-31

    The risks associated with exposures to mixtures of polychlorinated biphenyls (PCBs) from the consumption of fish in the vicinity of Superfund sites traditionally have been evaluated by using simple algebraic equations to calculate the dose received by a highly successful angler. A Lifetime Average Daily Dose (LADD) is estimated using default assumptions concerning the quantity of fish consumed, an angler's body weight, an angler's exposure duration, and a static measure of PCB levels in fish. Recent changes in EPA's policies and guidelines, however, have focused on improving the management of environmental risks by providing decision-makers with a distribution of possible risks rather than a single point estimate. MicroExposure Monte Carlo analysis is a recent development in probabilistic exposure assessment in which a LADD for a given angler is calculated as the sum of many individual doses received over the course of a lifetime from individual exposure events. Data on concentrations of PCBs in individual fish are thereby incorporated into the analysis as are other temporal changes in the various exposure parameters. In this paper, the MicroExposure Monte Carlo model is applied to characterize the distribution of PCB dose rates in a hypothetical population of recreational anglers who might potentially consume fish from the Upper Hudson River in the absence of a fishing ban. The analysis uses probabilistic techniques to account for temporal and age-related changes in exposure parameters and as a means of properly considering meal-to-meal variation in fish concentrations, cooking practices, and fish species.

  4. Comparative analysis of discrete and continuous absorption weighting estimators used in Monte Carlo simulations of radiative transport in turbid media.

    PubMed

    Hayakawa, Carole K; Spanier, Jerome; Venugopalan, Vasan

    2014-02-01

    We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights. PMID:24562029

  5. Comparative analysis of discrete and continuous absorption weighting estimators used in Monte Carlo simulations of radiative transport in turbid media

    PubMed Central

    Hayakawa, Carole K.; Spanier, Jerome; Venugopalan, Vasan

    2014-01-01

    We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights. PMID:24562029

  6. Development of synthetic velocity - depth damage curves using a Weighted Monte Carlo method and Logistic Regression analysis

    NASA Astrophysics Data System (ADS)

    Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.

    2014-05-01

    Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the respondents' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution
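
    A rough sketch of the Weighted Monte Carlo plus logistic-regression pipeline described above, using invented questionnaire records, response weights, and jitter scales, and a least-squares logistic fit standing in for the authors' WMCLR implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Invented "questionnaire" records: flood depth (m), mean expert damage
# fraction, and number of expert responses per scenario (the weights).
depth = np.array([0.2, 0.5, 0.8, 1.0, 1.5, 2.0, 3.0])
damage = np.array([0.05, 0.15, 0.30, 0.45, 0.70, 0.85, 0.95])
weights = np.array([3.0, 5.0, 4.0, 6.0, 4.0, 3.0, 2.0])

# Weighted Monte Carlo augmentation: resample scenarios in proportion to
# their weight and jitter within an assumed response scatter.
idx = rng.choice(depth.size, size=2000, p=weights / weights.sum())
d_mc = depth[idx] + rng.normal(0.0, 0.1, idx.size)
y_mc = np.clip(damage[idx] + rng.normal(0.0, 0.05, idx.size), 0.0, 1.0)

def logistic(x, x50, scale):
    # Two-parameter logistic damage curve.
    return 1.0 / (1.0 + np.exp(-(x - x50) / scale))

(x50, scale), _ = curve_fit(logistic, d_mc, y_mc, p0=(1.0, 0.5))
print(f"damage(depth) = 1 / (1 + exp(-(depth - {x50:.2f}) / {scale:.2f}))")
```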

  7. PDF Weaving - Linking Inventory Data and Monte Carlo Uncertainty Analysis in the Study of how Disturbance Affects Forest Carbon Storage

    NASA Astrophysics Data System (ADS)

    Healey, S. P.; Patterson, P.; Garrard, C.

    2014-12-01

    Altered disturbance regimes are likely a primary mechanism by which a changing climate will affect storage of carbon in forested ecosystems. Accordingly, the National Forest System (NFS) has been mandated to assess the role of disturbance (harvests, fires, insects, etc.) on carbon storage in each of its planning units. We have developed a process which combines 1990-era maps of forest structure and composition with high-quality maps of subsequent disturbance type and magnitude to track the impact of disturbance on carbon storage. This process, called the Forest Carbon Management Framework (ForCaMF), uses the maps to apply empirically calibrated carbon dynamics built into a widely used management tool, the Forest Vegetation Simulator (FVS). While ForCaMF offers locally specific insights into the effect of historical or hypothetical disturbance trends on carbon storage, its dependence upon the interaction of several maps and a carbon model poses a complex challenge in terms of tracking uncertainty. Monte Carlo analysis is an attractive option for tracking the combined effects of error in several constituent inputs as they impact overall uncertainty. Monte Carlo methods iteratively simulate alternative values for each input and quantify how much outputs vary as a result. Variation of each input is controlled by a Probability Density Function (PDF). We introduce a technique called "PDF Weaving," which constructs PDFs that ensure that simulated uncertainty precisely aligns with uncertainty estimates that can be derived from inventory data. This hard link with inventory data (derived in this case from FIA - the US Forest Service Forest Inventory and Analysis program) both provides empirical calibration and establishes consistency with other types of assessments (e.g., habitat and water) for which NFS depends upon FIA data. Results from the NFS Northern Region will be used to illustrate PDF weaving and insights gained from ForCaMF about the role of disturbance in carbon

  8. A Monte Carlo model system for core analysis and epithermal neutron beam design at the Washington State University Radiation Center

    SciTech Connect

    Burns, T.D. Jr.

    1996-05-01

    The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment, and is run as a criticality problem yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of a given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.

  9. Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.

    SciTech Connect

    PADOVANI, ENRICO

    2012-04-15

    Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated-particle emissions and the subsequent interactions as closely as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however, subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0, which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.

  10. Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.

    Energy Science and Technology Software Center (ESTSC)

    2012-04-15

    Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated-particle emissions and the subsequent interactions as closely as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however, subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0, which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.

  11. Monte Carlo analysis of obstructed diffusion in three dimensions: application to molecular diffusion in organelles.

    PubMed

    Olveczky, B P; Verkman, A S

    1998-05-01

    Molecular transport in the aqueous lumen of organelles involves diffusion in a confined compartment with complex geometry. Monte Carlo simulations of particle diffusion in three dimensions were carried out to evaluate the influence of organelle structure on diffusive transport and to relate experimental photobleaching data to intrinsic diffusion coefficients. Two organelle structures were modeled: a mitochondria-like long closed cylinder containing fixed luminal obstructions of variable number and size, and an endoplasmic reticulum-like network of interconnected cylinders of variable diameter and density. Trajectories were computed in each simulation for >10⁵ particles, generally for >10⁵ time steps. Computed time-dependent concentration profiles agreed quantitatively with analytical solutions of the diffusion equation for simple geometries. For mitochondria-like cylinders, significant slowing of diffusion required large or wide single obstacles, or multiple obstacles. In simulated spot photobleaching experiments, an approximately 25% decrease in apparent diffusive transport rate (defined by the time to 75% fluorescence recovery) was found for a single thin transverse obstacle occluding 93% of lumen area, a single 53%-occluding obstacle of width 16 lattice points (8% of cylinder length), 10 equally spaced 53% obstacles alternately occluding opposite halves of the cylinder lumen, or particle binding to walls (with mean residence time = 10 time steps). Recovery curve shape with obstacles showed long tails indicating anomalous diffusion. Simulations also demonstrated the utility of measurement of fluorescence depletion at a spot distant from the bleach zone. For a reticulum-like network, particle diffusive transport was mildly reduced from that in unobstructed three-dimensional space. In simulated photobleaching experiments, apparent diffusive transport was decreased by 39-60% in reticular structures in which 90-97% of space was occluded. These computations provide
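
    A compact version of this kind of obstructed-diffusion simulation follows: a random walk on a 3D periodic lattice with randomly blocked sites, tracking the mean-square displacement. The lattice size, obstacle fraction, and particle count are illustrative, and the geometry is a generic obstacle field rather than the organelle models of the paper.

```python
import numpy as np

def msd_obstructed(fill=0.3, n_particles=2000, n_steps=2000, L=64, seed=0):
    """Random walk on a 3D periodic lattice with randomly blocked sites;
    blocked moves are rejected. Returns mean-square displacement per step."""
    rng = np.random.default_rng(seed)
    blocked = rng.random((L, L, L)) < fill
    pos = rng.integers(0, L, (n_particles, 3))
    for i in range(n_particles):            # start every walker on a free site
        while blocked[tuple(pos[i])]:
            pos[i] = rng.integers(0, L, 3)
    start = pos.copy()
    steps = np.vstack([np.eye(3, dtype=int), -np.eye(3, dtype=int)])
    msd = np.empty(n_steps)
    for t in range(n_steps):
        trial = pos + steps[rng.integers(0, 6, n_particles)]
        free = ~blocked[trial[:, 0] % L, trial[:, 1] % L, trial[:, 2] % L]
        pos[free] = trial[free]
        msd[t] = ((pos - start) ** 2).sum(axis=1).mean()
    return msd

msd = msd_obstructed()
# For an unobstructed 3D lattice walk, MSD = t in lattice units; obstacles
# reduce the slope, i.e. the apparent diffusion coefficient.
print("apparent D ratio vs free diffusion:", round(msd[-1] / len(msd), 3))
```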

  12. Three-dimensional quantitative dose reduction analysis in MammoSite balloon by Monte Carlo calculations.

    PubMed

    Zhang, Zhengdong; Parsai, E Ishmael; Feldmeier, John J

    2007-01-01

    Current treatment planning systems (TPSs) for partial breast irradiation using the MammoSite brachytherapy applicator (Cytyc Corporation, Marlborough, MA) often neglect the effect of inhomogeneity, leading to potential inaccuracies in dose distributions. Previous publications either have studied only a planar dose perturbation along the bisector of the source or have paid little attention to the anisotropy effect of the system. In the present study, we investigated the attenuation-corrected radial dose and anisotropy functions in a form parallel to the updated American Association of Physicists in Medicine TG-43 formalism. This work quantitatively delineates the inaccuracies in dose distributions in three-dimensional space. Monte Carlo N-particle transport code simulations in coupled photon-electron transport were used to quantify the changes in dose deposition and distribution caused by the increased attenuation coefficient of iodine-based contrast solution. The source geometry was that of the VariSource wire model VS2000 (Varian Medical Systems, Palo Alto, CA). The concentration of the iodine-based solution was varied from 5% to 25% by volume, a range recommended by the balloon's manufacturer. Balloon diameters of 4, 5, and 6 cm were simulated. Dose rates at the typical prescription line (1 cm away from the balloon surface) were determined for various polar angles. The computations showed that the dose rate reduction throughout the entire region of interest ranged from 0.64% for the smallest balloon diameter and contrast concentration to 6.17% for the largest balloon diameter and contrast concentration. The corrected radial dose function has a predominant influence on dose reduction, but the corrected anisotropy functions explain only the effect at the MammoSite system poles. By applying the corrected radial dose and anisotropy functions to TPSs, the attenuation effect can be reduced to the minimum. PMID:18449153
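
    For reference, the two-dimensional dose-rate equation of the updated TG-43 formalism, into which the corrected radial dose and anisotropy functions enter, is (standard notation, not specific to this study):

    $$\dot{D}(r,\theta) = S_K \,\Lambda\, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta)$$

    where $S_K$ is the air-kerma strength, $\Lambda$ the dose-rate constant, $G_L$ the line-source geometry function, $g_L(r)$ the radial dose function, $F(r,\theta)$ the anisotropy function, and $(r_0,\theta_0) = (1\,\mathrm{cm},\, \pi/2)$ the reference point.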

  13. Recovering the inflationary potential: An analysis using flow methods and Markov chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Powell, Brian A.

    Since its inception in 1980 by Guth [1], inflation has emerged as the dominant paradigm for describing the physics of the early universe. While inflation has matured theoretically over two decades, it has only recently begun to be rigorously tested observationally. Measurements of the cosmic microwave background (CMB) and large-scale structure surveys (LSS) have begun to unravel the mysteries of the inflationary epoch with exquisite and unprecedented accuracy. This thesis is a contribution to the effort of reconstructing the physics of inflation. This information is largely encoded in the potential energy function of the inflaton, the field that drives the inflationary expansion. With little theoretical guidance as to the probable form of this potential, reconstruction is a predominantly data-driven endeavor. This thesis presents an investigation of the constrainability of the inflaton potential given current CMB and LSS data. We develop a methodology based on the inflationary flow formalism that provides an assessment of our current ability to resolve the form of the inflaton potential in the face of experimental and statistical error. We find that there is uncertainty regarding the initial dynamics of the inflaton field, related to the poor constraints that can be drawn on the primordial power spectrum on large scales. We also investigate the future prospects of potential reconstruction, as might be expected when data from ESA's Planck Surveyor becomes available. We develop an approach that utilizes Markov chain Monte Carlo to analyze the statistical properties of the inflaton potential. Besides providing constraints on the parameters of the potential, this method makes it possible to perform model selection on the inflationary model space. While future data will likely determine the general features of the inflaton, there will likely be many different models that remain good fits to the data. Bayesian model selection will then be needed to draw comparisons

  14. Estimation of water distribution and degradation mechanisms in polymer electrolyte membrane fuel cell gas diffusion layers using a 3D Monte Carlo model

    NASA Astrophysics Data System (ADS)

    Seidenberger, K.; Wilhelm, F.; Schmitt, T.; Lehnert, W.; Scholta, J.

    Understanding of water management in PEM fuel cells, of degradation mechanisms in the gas diffusion layer (GDL), and of their mutual impact is still incomplete. Different modelling approaches contribute to a deeper insight into the processes occurring during fuel cell operation. Considering the GDL, models can help to obtain information about the distribution of liquid water within the material. In particular, flooded regions can be identified, and the water distribution can be linked to the system geometry. Employed in material development, this information can help to increase the lifetime of the GDL as a fuel cell component and of the fuel cell as the entire system. The Monte Carlo (MC) model presented here helps to simulate and analyse the water household in PEM fuel cell GDLs. The model comprises a three-dimensional, voxel-based representation of the GDL substrate, a section of the flowfield channel and the corresponding rib. The water distribution within the substrate part of the GDL can thus be estimated.

  15. Monte Carlo Event Generators

    NASA Astrophysics Data System (ADS)

    Dytman, Steven

    2011-10-01

    Every neutrino experiment requires a Monte Carlo event generator for various purposes. Historically, each series of experiments developed its own code tuned to its needs. Modern experiments would benefit from a universal code (e.g., PYTHIA) that would allow more direct comparison between experiments. GENIE attempts to be that code. This paper compares the most commonly used codes and provides some details of GENIE.

  16. Monte Carlo-based interval transformation analysis for multi-criteria decision analysis of groundwater management strategies under uncertain naphthalene concentrations and health risks

    NASA Astrophysics Data System (ADS)

    Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong

    2016-08-01

    A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) mitigating computational time through the introduction of Monte Carlo sampling, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
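
    A toy rendering of the interval-plus-Monte-Carlo idea: criteria for each alternative are given as intervals, uniform samples are drawn within them, and the alternative that most often scores best is reported. The actions, intervals, weights, and normalization below are all invented for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical criteria intervals [low, high] per action:
# (total cost, residual concentration, health risk); lower is better.
actions = {
    "action A": [(4.0, 6.0), (0.2, 0.5), (1e-6, 5e-6)],
    "action B": [(3.0, 7.0), (0.3, 0.6), (2e-6, 4e-6)],
    "action C": [(5.0, 5.5), (0.1, 0.4), (1e-6, 3e-6)],
}
weights = np.array([0.4, 0.3, 0.3])          # assumed criterion weights
norm = np.array([7.0, 0.6, 5e-6])            # assumed normalization scale

wins = dict.fromkeys(actions, 0)
for _ in range(10_000):
    scores = {}
    for name, intervals in actions.items():
        vals = np.array([rng.uniform(lo, hi) for lo, hi in intervals])
        scores[name] = weights @ (vals / norm)   # weighted normalized score
    wins[min(scores, key=scores.get)] += 1       # lowest score wins this draw
print({k: v / 10_000 for k, v in wins.items()})
```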

  17. Clinical implementation of the Peregrine Monte Carlo dose calculations system for photon beam therapy

    SciTech Connect

    Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D

    1999-07-01

    PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However, the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.

  18. Monte Carlo Error Analysis Applied to Core Formation: The Single-stage Model Revived

    NASA Astrophysics Data System (ADS)

    Cottrell, E.; Walter, M. J.

    2009-12-01

    The last decade has witnessed an explosion of studies that scrutinize whether or not the siderophile element budget of the modern mantle can plausibly be explained by metal-silicate equilibration in a deep magma ocean during core formation. The single-stage equilibrium scenario is seductive because experiments that equilibrate metal and silicate can then serve as a proxy for the early earth, and the physical and chemical conditions of core formation can be identified. Recently, models have become more complex as they try to accommodate the proliferation of element partitioning data sets, each of which sets its own limits on the pressure, temperature, and chemistry of equilibration. The ability of single-stage models to explain mantle chemistry has subsequently been challenged, resulting in the development of complex multi-stage core formation models. Here we show that the extent to which extant partitioning data are consistent with single-stage core formation depends heavily upon (1) the assumptions made when regressing experimental partitioning data, (2) the certainty with which regression coefficients are known, and (3) the certainty with which the core/mantle concentration ratios of the siderophile elements are known. We introduce a Monte Carlo algorithm coded in MATLAB that samples parameter space in pressure and oxygen fugacity for a given mantle composition (nbo/t) and liquidus, and returns the number of equilibrium single-stage liquidus “solutions” that are permissible, taking into account the uncertainty in regression parameters and the range of acceptable core/mantle ratios. Here we explore the consequences of regression parameter uncertainty and the impact of regression construction on model outcomes. We find that the form of the partition coefficient (Kd with enforced valence state, or D) and the handling of the temperature effect (based on 1-atm free energy data or high P-T experimental observations) critically affects model outcomes. We consider the most
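
    A skeletal Python analogue of the MATLAB algorithm described (sample pressure and oxygen fugacity, draw regression coefficients from their stated uncertainties, and count admissible single-stage solutions); every coefficient, range, and target window below is a made-up placeholder, not a real siderophile-element regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Sample candidate equilibration conditions over a broad magma-ocean range.
P = rng.uniform(10, 60, n)        # pressure, GPa
dIW = rng.uniform(-4, -1, n)      # log fO2 relative to the iron-wustite buffer

# Hypothetical partitioning regression log D = a + b*P + c*dIW, with the
# regression-coefficient uncertainty folded in by drawing a, b, c per sample.
a = rng.normal(1.5, 0.2, n)
b = rng.normal(-0.02, 0.005, n)
c = rng.normal(-0.5, 0.1, n)
logD = a + b * P + c * dIW

# Count draws whose partitioning lands in an assumed core/mantle target window.
lo, hi = 1.3, 1.6                 # illustrative acceptable log(core/mantle)
ok = (logD > lo) & (logD < hi)
print(f"{ok.mean():.1%} of sampled (P, fO2) space admits single-stage solutions")
print(f"admissible P range: {P[ok].min():.1f}-{P[ok].max():.1f} GPa")
```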

  19. Monte Carlo analysis of the Titan III/Transfer Orbit Stage guidance system for the Mars Observer mission

    NASA Technical Reports Server (NTRS)

    Bell, Stephen C.; Ginsburg, Marc A.; Rao, Prabhakara P.

    1993-01-01

    An important part of space launch vehicle mission planning for a planetary mission is the integrated analysis of guidance and performance dispersions for both booster and upper stage vehicles. For the Mars Observer mission, an integrated trajectory analysis was used to maximize the scientific payload and to minimize injection errors by optimizing the energy management of both vehicles. This was accomplished by designing the Titan III booster vehicle to inject into a hyperbolic departure plane, and the Transfer Orbit Stage (TOS) to correct any booster dispersions. An integrated Monte Carlo analysis of the performance and guidance dispersions of both vehicles provided sensitivities, an evaluation of their guidance schemes and an injection error covariance matrix. The polynomial guidance schemes used for the Titan III variable flight azimuth computations and the TOS solid rocket motor ignition time and burn direction derivations accounted for a wide variation of launch times, performance dispersions, and target conditions. The Mars Observer spacecraft was launched on 25 September 1992 on the Titan III/TOS vehicle. The post-flight analysis indicated that a near perfect park orbit injection was achieved, followed by a trans-Mars injection with less than 2σ errors.

  20. Monte Carlo portal dosimetry

    SciTech Connect

    Chin, P.W. (E-mail: mary.chin@physics.org)

    2005-10-15

    This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verifications. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes were used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm × 5 cm to 20 cm × 20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.

  1. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. PMID:21764476
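
    The same Monte Carlo idea translates directly out of Excel; below is a sketch in Python under an assumed logistic growth model and synthetic data (the model form, noise level, and replicate count are illustrative, not the paper's data sets).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def growth(t, ymax, rate, lag):
    # Illustrative 3-parameter logistic growth model (not the paper's exact model).
    return ymax / (1.0 + np.exp(-rate * (t - lag)))

# Synthetic "experimental" data standing in for a measured growth curve.
t = np.linspace(0, 24, 25)
y = growth(t, 9.0, 0.5, 8.0) + rng.normal(0, 0.2, t.size)

p_fit, _ = curve_fit(growth, t, y, p0=(8.0, 0.4, 6.0))
resid_sd = np.std(y - growth(t, *p_fit), ddof=3)

# Monte Carlo: refit many 'virtual' data sets (fitted model + residual noise),
# mirroring what SOLVER does for each simulated data set in the spreadsheet.
boot = np.array([
    curve_fit(growth, t,
              growth(t, *p_fit) + rng.normal(0, resid_sd, t.size),
              p0=p_fit)[0]
    for _ in range(200)
])
for name, est, col in zip(("ymax", "rate", "lag"), p_fit, boot.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"{name}: {est:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```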

  2. Monte Carlo simulation of Li+ motion in polyethylene based on polarization energy calculations and informed by data compression analysis

    NASA Astrophysics Data System (ADS)

    Scarle, S.; Sterzel, M.; Eilmes, A.; Munn, R. W.

    2005-10-01

    We present an n-fold way kinetic Monte Carlo simulation of the hopping motion of Li+ ions in polyethylene on a grid of mesh 0.36 Å superimposed on the voids of the rigid polymer. The structure of the polymer is derived from a higher-order simulation, and the energy of the ion at each site is derived by the self-consistent polarization field method. The ion motion evolves in time from free flight through anomalous diffusion to normal diffusion, with the average energy tending to decrease with increasing temperature through thermal annealing. We compare the results with those of hopping models with probabilistic energy distributions of increasing complexity by analyzing the mean-square displacement and the average energy of an ensemble of ions. The Gumbel distribution describes the ion energy statistics in this system better than the usual Gaussian distribution does; including energy correlation greatly affects the ion dynamics. The analysis uses the standard data compression program GZIP, which proves to be a powerful tool for data analysis by giving a measure of recurrences in the ion path.
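
    The compression trick is easy to reproduce: recurrent (cage-like) trajectories compress better than free flight. A minimal sketch with synthetic 1D paths follows; the quantization and the paths are illustrative, and zlib's DEFLATE stands in for GZIP.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

def compression_ratio(path):
    """Compressed/raw size of a quantized trajectory; lower values mean more
    recurrence (revisited sites), echoing the paper's use of GZIP."""
    raw = np.asarray(path, dtype=np.int32).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

free_flight = np.cumsum(rng.integers(-1, 2, 10_000))   # weakly recurrent walk
trapped = np.tile([0, 1, 0, -1], 2500)                 # highly recurrent cage
print("free flight:", round(compression_ratio(free_flight), 3))
print("trapped    :", round(compression_ratio(trapped), 3))
```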

  3. Monte Carlo modelling of TRIGA research reactor

    NASA Astrophysics Data System (ADS)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) from Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detailed all components of the core with literally no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S( α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rods worth as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  4. A MARKOV CHAIN MONTE CARLO ALGORITHM FOR ANALYSIS OF LOW SIGNAL-TO-NOISE COSMIC MICROWAVE BACKGROUND DATA

    SciTech Connect

    Jewell, J. B.; O'Dwyer, I. J.; Huey, Greg; Gorski, K. M.; Eriksen, H. K.; Wandelt, B. D.

    2009-05-20

    We present a new Markov Chain Monte Carlo (MCMC) algorithm for cosmic microwave background (CMB) analysis in the low signal-to-noise regime. This method builds on and complements the previously described CMB Gibbs sampler, and effectively solves the low signal-to-noise inefficiency problem of the direct Gibbs sampler. The new algorithm is a simple Metropolis-Hastings sampler with a general proposal rule for the power spectrum, C_l, followed by a particular deterministic rescaling operation of the sky signal, s. The acceptance probability for this joint move depends on the sky map only through the difference in χ² between the original and proposed sky samples, and is close to unity in the low signal-to-noise regime. The algorithm is completed by alternating this move with a standard Gibbs move. Together, these two proposals constitute a computationally efficient algorithm for mapping out the full joint CMB posterior, both in the high and low signal-to-noise regimes.
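
    As a concrete illustration of a Metropolis-Hastings move whose acceptance depends only on a χ² difference, here is a minimal toy sampler in Python; the one-parameter Gaussian target and proposal width are invented for illustration and have nothing to do with CMB data.

```python
# Minimal Metropolis-Hastings sampler: accept with probability
# exp(-0.5 * (chi2_proposed - chi2_current)) for unit-variance Gaussian noise.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(3.0, 1.0, size=50)  # toy data with unit noise

def chi2(mu):
    return np.sum((data - mu) ** 2)

mu, chain = 0.0, []
for _ in range(5000):
    mu_prop = mu + rng.normal(0.0, 0.5)              # symmetric proposal
    log_accept = -0.5 * (chi2(mu_prop) - chi2(mu))   # depends only on delta chi^2
    if np.log(rng.random()) < log_accept:
        mu = mu_prop
    chain.append(mu)

burned = np.array(chain[1000:])
print("posterior mean ~", burned.mean(), " sd ~", burned.std())
```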

  5. Markov Chain Monte Carlo Joint Analysis of Chandra X-Ray Imaging Spectroscopy and Sunyaev-Zel'dovich Effect Data

    NASA Technical Reports Server (NTRS)

    Bonamente, Massimillano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.

    2004-01-01

    X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are its high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, as well as derivative quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.

  6. A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF)

    NASA Astrophysics Data System (ADS)

    Hansson, Marie; Isaksson, Mats

    2007-04-01

    X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in cases where the measurement situation differs greatly from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to estimate the expected detector signal had the thyroid been measured outside the neck. An attempt was made to increase the simulation speed and reduce the variance by excluding electrons and implementing interaction forcing. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients, while the simulation time involved in an individual calibration was low enough to be clinically feasible.

  7. Predicting morphological changes DS New Naga-Hammadi Barrage for extreme Nile flood flows: A Monte Carlo analysis

    PubMed Central

    Sattar, Ahmed M.A.; Raslan, Yasser M.

    2013-01-01

    While construction of the Aswan High Dam (AHD) has stopped recurrent flooding events, the River Nile is still subject to low-intensity flood waves resulting from controlled release of water from the dam reservoir. Analysis of flow released from New Naga-Hammadi Barrage, located 3460 km downstream of AHD, indicated an increase in the magnitude of floods released from the barrage in the past 10 years. A 2D numerical mobile-bed model is utilized to investigate the possible morphological changes downstream of Naga-Hammadi Barrage from possible higher flood releases. Monte Carlo simulation (MCS) analyses are applied to the deterministic results of the 2D model to account for and assess the uncertainty of sediment parameters and formulations, in addition to the scarcity of field measurements. Results showed that the predicted volume of erosion yielded the highest uncertainty and variation from the deterministic run, while navigation velocity yielded the least uncertainty. Furthermore, the error budget method is used to rank various sediment parameters for their contribution to the total prediction uncertainty. It is found that suspended sediment contributed to output uncertainty more than the other sediment parameters, followed by bed load with a contribution roughly an order of magnitude lower. PMID:25685476

  8. Quantum Monte Carlo analysis of a charge ordered insulating antiferromagnet: The Ti4O7 Magneli phase

    DOE PAGES

    Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T.; Zhong, Xiaoling; Kent, Paul R. C.; Heinonen, Olle

    2016-06-07

    The Magneli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of its interplay between charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. We have examined three low-lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate Quantum Monte Carlo methods. We compare our results to those obtained from density functional theory-based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density-functional theory methods. However, the energy differences and spin distributions differ. Here, a detailed analysis suggests that non-local exchange-correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates for spin moments, distributions, energy differences and energy gaps.

  9. Application of Markov chain Monte Carlo analysis to biomathematical modeling of respirable dust in US and UK coal miners

    PubMed Central

    Sweeney, Lisa M.; Parker, Ann; Haber, Lynne T.; Tran, C. Lang; Kuempel, Eileen D.

    2015-01-01

    A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated using two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were derived separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model. PMID:23454101

  10. Quantum Monte Carlo analysis of a charge ordered insulating antiferromagnet: the Ti4O7 Magnéli phase.

    PubMed

    Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T; Zhong, Xiaoliang; Kent, Paul R C; Heinonen, Olle

    2016-07-21

    The Magnéli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of its interplay between charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. We have examined three low-lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate quantum Monte Carlo methods. We compare our results to those obtained from density functional theory-based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density-functional theory methods. However, the energy differences and spin distributions differ. A detailed analysis suggests that non-local exchange-correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates for spin moments, distributions, energy differences and energy gaps. PMID:27334262

  11. Predicting morphological changes DS New Naga-Hammadi Barrage for extreme Nile flood flows: A Monte Carlo analysis.

    PubMed

    Sattar, Ahmed M A; Raslan, Yasser M

    2014-01-01

    While construction of the Aswan High Dam (AHD) has stopped recurrent flooding events, the River Nile is still subject to low-intensity flood waves resulting from controlled release of water from the dam reservoir. Analysis of flow released from New Naga-Hammadi Barrage, located 3460 km downstream of AHD, indicated an increase in the magnitude of floods released from the barrage in the past 10 years. A 2D numerical mobile-bed model is utilized to investigate the possible morphological changes downstream of Naga-Hammadi Barrage from possible higher flood releases. Monte Carlo simulation (MCS) analyses are applied to the deterministic results of the 2D model to account for and assess the uncertainty of sediment parameters and formulations, in addition to the scarcity of field measurements. Results showed that the predicted volume of erosion yielded the highest uncertainty and variation from the deterministic run, while navigation velocity yielded the least uncertainty. Furthermore, the error budget method is used to rank various sediment parameters for their contribution to the total prediction uncertainty. It is found that suspended sediment contributed to output uncertainty more than the other sediment parameters, followed by bed load with a contribution roughly an order of magnitude lower. PMID:25685476

  12. Monte Carlo analysis of a lateral IBIC experiment on a 4H-SiC Schottky diode

    NASA Astrophysics Data System (ADS)

    Olivero, P.; Forneris, J.; Gamarra, P.; Jakšić, M.; Giudice, A. Lo; Manfredotti, C.; Pastuović, Ž.; Skukan, N.; Vittone, E.

    2011-10-01

    The transport properties of a 4H-SiC Schottky diode have been investigated by the ion beam induced charge (IBIC) technique in lateral geometry through the analysis of the charge collection efficiency (CCE) profile at a fixed applied reverse bias voltage. The cross section of the sample orthogonal to the electrodes was irradiated by a rarefied 4 MeV proton microbeam, and the charge pulses were recorded as a function of incident proton position with a spatial resolution of 2 μm. The CCE profile shows a broad plateau with CCE values close to 100% occurring at the depletion layer, whereas in the neutral region the exponentially decreasing profile indicates the dominant role played by the diffusion transport mechanism. Mapping of charge pulses was accomplished by a novel computational approach, which consists of mapping Gunn's weighting potential by solving the electrostatic problem with the finite element method and then evaluating the induced charge at the sensing electrode by a Monte Carlo method. The combination of these two computational methods enabled an exhaustive interpretation of the experimental profiles and allowed an accurate evaluation of both the electrical characteristics of the active region (e.g. electric field profiles) and the basic transport parameters (i.e. diffusion length and minority carrier lifetime).

  13. Monte Carlo simulation of x-ray fluorescence analysis of gold in kidney using 99mTc radiopharmaceutical

    NASA Astrophysics Data System (ADS)

    Mahdavi, Naser; Shamsaei, Mojtaba; Shafaei, Mostafa; Rabiei, Ali

    2013-10-01

    The objective of this study was to design a system for analyzing gold and other heavy elements in internal organs using in vivo x-ray fluorescence (XRF) analysis. The Monte Carlo N-Particle code MCNP was used to simulate phantoms and sources. A source of 99mTc was simulated in the kidney to excite the gold x-rays. Changes in the K XRF response due to variations in tissue thickness overlying the kidney at the measurement site were investigated. Different simulations having tissue thicknesses of 20, 30, 40, 50 and 60 mm were performed. Kα1 and Kα2 responses for all depths were measured. The linearity of the XRF system was also studied by increasing the gold concentration in the kidney phantom from 0 to 500 µg g-1 kidney tissue. The results show that gold concentrations between 3 and 10 µg g-1 kidney tissue can be detected for distances between the skin and the kidney surface of 20-60 mm. The study also compared the skin doses for the source outside and inside the phantom.

  14. Monte Carlo and quasi-Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Caflisch, Russel E.

    Monte Carlo is one of the most versatile and widely used numerical methods. Its convergence rate, O(N^(-1/2)), is independent of dimension, which shows Monte Carlo to be very robust but also slow. This article presents an introduction to Monte Carlo methods for integration problems, including convergence theory, sampling methods and variance reduction techniques. Accelerated convergence for Monte Carlo quadrature is attained using quasi-random (also called low-discrepancy) sequences, which are a deterministic alternative to random or pseudo-random sequences. The points in a quasi-random sequence are correlated to provide greater uniformity. The resulting quadrature method, called quasi-Monte Carlo, has a convergence rate of approximately O((log N)^k N^(-1)). For quasi-Monte Carlo, both theoretical error estimates and practical limitations are presented. Although the emphasis in this article is on integration, Monte Carlo simulation of rarefied gas dynamics is also discussed. In the limit of small mean free path (that is, the fluid dynamic limit), Monte Carlo loses its effectiveness because the collisional distance is much less than the fluid dynamic length scale. Computational examples are presented throughout the text to illustrate the theory. A number of open problems are described.
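
    The convergence contrast is easy to observe numerically. The short Python sketch below compares plain Monte Carlo with a scrambled Sobol low-discrepancy sequence on a smooth 5-D test integrand with a known value; the integrand and sample size are arbitrary choices for illustration, and scipy.stats.qmc is assumed available (SciPy 1.7+).

```python
# MC vs quasi-MC integration of f(x) = prod_i 4*x_i*(1-x_i) over [0,1]^5,
# whose exact integral is (2/3)^5.
import numpy as np
from scipy.stats import qmc

def f(x):
    return np.prod(4.0 * x * (1.0 - x), axis=1)

d, n = 5, 2**12
exact = (2.0 / 3.0) ** d

rng = np.random.default_rng(3)
mc_est = f(rng.random((n, d))).mean()        # pseudo-random points
sobol = qmc.Sobol(d, scramble=True, seed=3)
qmc_est = f(sobol.random(n)).mean()          # low-discrepancy points

print(f"exact {exact:.6f}  MC err {abs(mc_est - exact):.2e}  "
      f"QMC err {abs(qmc_est - exact):.2e}")
```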

  15. Revised methods for few-group cross sections generation in the Serpent Monte Carlo code

    SciTech Connect

    Fridman, E.; Leppaenen, J.

    2012-07-01

    This paper presents new calculation methods, recently implemented in the Serpent Monte Carlo code, related to the production of homogenized few-group constants for deterministic 3D core analysis. The new methods fall under three topics: 1) improved treatment of neutron-multiplying scattering reactions, 2) group constant generation in reflectors and other non-fissile regions, and 3) homogenization in a leakage-corrected criticality spectrum. The methodology is demonstrated by a numerical example comparing a deterministic nodal diffusion calculation using Serpent-generated cross sections to a reference full-core Monte Carlo simulation. It is concluded that the new methodology improves the results of the deterministic calculation and paves the way for Monte Carlo-based group constant generation. (authors)

  16. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence.

    PubMed

    Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung

    2016-01-01

    Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and traditional family studies. The "missing heritability" has been suggested to be due to lack of studies focused on epistasis, also called gene-gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power. However, sufficiently detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called "Epistasis Test in Meta-Analysis" (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates in ETMA, individual data analysis and the conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded acceptable type I error and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene-gene interactions in the renin-angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We concluded that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html]. PMID:27045371

  17. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence

    PubMed Central

    Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung

    2016-01-01

    Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and traditional family studies. The “missing heritability” has been suggested to be due to lack of studies focused on epistasis, also called gene–gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power. However, sufficiently detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called “Epistasis Test in Meta-Analysis” (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates in ETMA, individual data analysis and the conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded acceptable type I error and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene–gene interactions in the renin–angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We concluded that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html]. PMID:27045371

  18. Meta-Analysis of Single-Case Data: A Monte Carlo Investigation of a Three Level Model

    ERIC Educational Resources Information Center

    Owens, Corina M.

    2011-01-01

    Numerous ways to meta-analyze single-case data have been proposed in the literature, however, consensus on the most appropriate method has not been reached. One method that has been proposed involves multilevel modeling. This study used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw data multilevel…

  19. Modelling of dissolved oxygen in the Danube River using artificial neural networks and Monte Carlo Simulation uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Antanasijević, Davor; Pocajt, Viktor; Perić-Grujić, Aleksandra; Ristić, Mirjana

    2014-11-01

    This paper describes the training, validation, testing and uncertainty analysis of general regression neural network (GRNN) models for the forecasting of dissolved oxygen (DO) in the Danube River. The main objectives of this work were to determine the optimum data normalization and input selection techniques, the relative importance of uncertainty in different input variables, and the uncertainty of model results using the Monte Carlo Simulation (MCS) technique. Min-max, median, z-score, sigmoid and tanh were validated as normalization techniques, whilst the variance inflation factor, correlation analysis and a genetic algorithm were tested as input selection techniques. As inputs, the GRNN models used 19 water quality variables, measured in the river water each month at 17 different sites over a period of 9 years. The best results were obtained using min-max normalized data and input selection based on the correlation between DO and the dependent variables, which provided the most accurate GRNN model with the smallest number of inputs: temperature, pH, HCO3-, SO42-, NO3-N, hardness, Na, Cl-, conductivity and alkalinity. The results show that the correlation coefficient between measured and predicted DO values is 0.85. The inputs with the greatest effect on the GRNN model (arranged in descending order) were T, pH, HCO3-, SO42- and NO3-N. Of all inputs, the variability of temperature had the greatest influence on the variability of DO content in the river body, with DO decreasing at a rate similar to the theoretical temperature-related DO decrease rate. The uncertainty analysis of the model results demonstrates that the GRNN can effectively forecast the DO content, since the distribution of the model results is very similar to the corresponding distribution of real data.

  20. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  1. Water and tissue equivalence of a new PRESAGE{sup Registered-Sign} formulation for 3D proton beam dosimetry: A Monte Carlo study

    SciTech Connect

    Gorjiara, Tina; Kuncic, Zdenka; Doran, Simon; Adamovics, John; Baldock, Clive

    2012-11-15

    Purpose: To evaluate the water and tissue equivalence of a new PRESAGE® 3D dosimeter for proton therapy. Methods: The GEANT4 software toolkit was used to calculate and compare the total dose delivered by a proton beam with mean energy 62 MeV in a PRESAGE® dosimeter, water, and soft tissue. The dose delivered by primary protons and secondary particles was calculated. Depth-dose profiles and isodose contours of deposited energy were compared for the materials of interest. Results: The proton beam range was found to be ≈27 mm for PRESAGE®, 29.9 mm for soft tissue, and 30.5 mm for water. This can be attributed to the lower collisional stopping power of water compared to soft tissue and PRESAGE®. The difference between the total dose delivered in PRESAGE® and that delivered in water or tissue is less than 2% across the entire water/tissue-equivalent range of the proton beam. The largest difference between the total dose in PRESAGE® and in water is 1.4%, while for soft tissue it is 1.8%. In both cases, this occurs at the distal end of the beam. Nevertheless, the authors find that the PRESAGE® dosimeter is overall more tissue-equivalent than water-equivalent before the Bragg peak. After the Bragg peak, the differences in the depth doses are found to be due to differences in primary proton energy deposition; PRESAGE® and soft tissue stop protons more rapidly than water. The dose delivered by secondary electrons in the PRESAGE® differs by less than 1% from that in soft tissue and water. The contribution of secondary particles to the total dose is less than 4% for electrons and ≈1% for protons in all the materials of interest. Conclusions: These results demonstrate that the new PRESAGE® formulation may be considered both a tissue- and water-equivalent dosimeter for 3D proton beam dosimetry.

  2. Sensitivity Analysis of the Sheet Metal Stamping Processes Based on Inverse Finite Element Modeling and Monte Carlo Simulation

    SciTech Connect

    Yu Maolin; Du, R.

    2005-08-05

    Sheet metal stamping is one of the most commonly used manufacturing processes, and hence, much research has been carried out in pursuit of economic gain. A search of the literature, however, shows that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, product quality may vary owing to a number of factors, such as inhomogeneity of the workpiece material, loading error, and lubrication. At present, few methods can predict the quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires much less computation than the usual incremental FEM and hence can be used to predict quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented: drawing a rectangular box and drawing a two-step rectangular box.

  3. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    NASA Astrophysics Data System (ADS)

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water input to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses; this implies that the processes of percolation and evaporation would impact the hydrological process in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model

  4. A Monte Carlo Investigation of the Analysis of Variance Applied to Non-Independent Bernoulli Variates.

    ERIC Educational Resources Information Center

    Draper, John F., Jr.

    The applicability of the Analysis of Variance, ANOVA, procedures to the analysis of dichotomous repeated measure data is described. The design models for which data were simulated in this investigation were chosen to represent simple cases of two experimental situations: situation one, in which subjects' responses to a single randomly selected set…

  5. Leasing policy and the rate of petroleum development: analysis with a Monte Carlo simulation model

    SciTech Connect

    Abbey, D; Bivins, R

    1982-03-01

    The study has two objectives: first, to consider whether alternative leasing systems are desirable to speed the rate of oil and gas exploration and development in frontier basins; second, to evaluate the Petroleum Activity and Decision Simulation model developed by the US Department of the Interior for economic and land use planning and for policy analysis. Analysis of the model involved structural variation of the geology, exploration, and discovery submodels and also involved a formal sensitivity analysis using the Latin Hypercube Sampling Method. We report the rate of exploration, discovery, and petroleum output under a variety of price, leasing policy, and tax regimes.

  6. BOOTSTRAPPING AND MONTE CARLO METHODS OF POWER ANALYSIS USED TO ESTABLISH CONDITION CATEGORIES FOR BIOTIC INDICES

    EPA Science Inventory

    Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically the number of categories has been based on professional judgement. Alternatively, statistical methods such as power analysis can be used to determine the ...

  7. Monte Carlo Criticality Analysis of Simple Geometries Containing Tungsten Rhenium Alloys Engrained with Uranium Dioxide and Uranium Mononitride

    SciTech Connect

    Jonathan A. Webb; Indrajit Charit

    2011-08-01

    The critical mass and dimensions of simple geometries containing highly enriched uranium dioxide (UO2) and uranium mononitride (UN) encapsulated in tungsten-rhenium alloys are determined using MCNP5 criticality calculations. Spheres, as well as cylinders with length-to-radius ratios of 1.82, are computationally built to consist of 60 vol.% fuel and 40 vol.% metal matrix. Within the geometries, the uranium is enriched to 93 wt.% uranium-235, and the rhenium content within the metal alloy is modeled over a range of 0 to 30 at.%. The spheres containing UO2 were determined to have a critical radius of 18.29 cm to 19.11 cm and a critical mass ranging from 366 kg to 424 kg. The cylinders containing UO2 were found to have a critical radius ranging from 17.07 cm to 17.844 cm with a corresponding critical mass of 406 kg to 471 kg. Spheres engrained with UN were determined to have a critical radius ranging from 14.82 cm to 15.19 cm and a critical mass between 222 kg and 242 kg. Cylinders engrained with UN were determined to have a critical radius ranging from 13.811 cm to 14.155 cm with a corresponding critical mass of 245 kg to 267 kg. The critical geometries were also computationally submerged in a neutronically infinite medium of fresh water to determine the effects of rhenium addition on criticality accidents due to water submersion. The Monte Carlo analysis demonstrated that rhenium addition of up to 30 at.% can reduce the excess reactivity due to water submersion by up to $5.07 for UO2-fueled cylinders, $3.87 for UO2-fueled spheres, and approximately $3.00 for UN-fueled spheres and cylinders.

  8. A Comparison of Alternatives to Conducting Monte Carlo Analyses for Determining Parallel Analysis Criteria.

    ERIC Educational Resources Information Center

    Lautenschlager, Gary J.

    1989-01-01

    Procedures for implementing parallel analysis (PA) criteria in practice were compared, examining regression equation methods that can be used to estimate random data eigenvalues from known values of the sample size and number of variables. More internally accurate methods for determining PA criteria are presented. (SLD)

  9. Factor Analysis with Ordinal Indicators: A Monte Carlo Study Comparing DWLS and ULS Estimation

    ERIC Educational Resources Information Center

    Forero, Carlos G.; Maydeu-Olivares, Alberto; Gallardo-Pujol, David

    2009-01-01

    Factor analysis models with ordinal indicators are often estimated using a 3-stage procedure where the last stage involves obtaining parameter estimates by least squares from the sample polychoric correlations. A simulation study involving 324 conditions (1,000 replications per condition) was performed to compare the performance of diagonally…

  10. Monte Carlo Analysis For The Inverse Problem For Motor PWM Control

    NASA Astrophysics Data System (ADS)

    Zhan, Wei

    2009-05-01

    The sensitivity of the steady-state average motor speed for Pulse Width Modulation (PWM) control of DC permanent magnet motors is analyzed in this paper. Using Maple's symbolic computation capability, an analytic form for the steady-state average motor speed is derived based on a first-principles model. Compared to a previous result based on a MATLAB/Simulink model, this analytic form greatly reduces the time required to conduct statistical analysis. As a result, five hundred sets of randomly generated motor parameters are used to determine the standard deviation in the steady-state average motor speed as a function of the standard deviation in the motor parameters. This new approach allows for a detailed sensitivity analysis and a design feasibility study for motor PWM control.
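
    A minimal Python sketch of this Monte Carlo approach appears below, assuming the standard first-principles steady-state form for a PWM-driven DC motor with viscous friction, w = D*V*Kt / (Ra*B + Kt*Ke); the nominal parameter values and spreads are invented for illustration and are not taken from the paper.

```python
# Propagate random motor-parameter variation through the analytic
# steady-state speed and report the resulting standard deviation.
import numpy as np

rng = np.random.default_rng(4)
N = 500  # the paper uses five hundred random parameter sets

D, V = 0.7, 12.0                    # duty cycle and supply voltage (fixed)
Kt = rng.normal(0.05, 0.0005, N)    # torque constant    [N*m/A]   (assumed)
Ke = rng.normal(0.05, 0.0005, N)    # back-EMF constant  [V*s/rad] (assumed)
Ra = rng.normal(1.0, 0.01, N)       # armature resistance [ohm]    (assumed)
B  = rng.normal(1e-4, 1e-6, N)      # viscous friction [N*m*s/rad] (assumed)

w = D * V * Kt / (Ra * B + Kt * Ke)  # steady-state average speed [rad/s]
print(f"mean speed {w.mean():.1f} rad/s, std {w.std():.2f} rad/s")
```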

  11. Monte Carlo Analysis of Airport Throughput and Traffic Delays Using Self Separation Procedures

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Sturdy, James L.

    2006-01-01

    This paper presents the results of three simulation studies of throughput and delay times of arrival and departure operations performed at non-towered, non-radar airports using self-separation procedures. The studies were conducted as part of the validation process of the Small Aircraft Transportation System Higher Volume Operations (SATS HVO) concept and include an analysis of the predicted airport capacity for different traffic conditions and system constraints under increasing levels of demand. Results show that SATS HVO procedures can dramatically increase capacity at non-towered, non-radar airports and that the concept offers the potential for increasing the capacity of the overall air transportation system.

  12. Boltzmann equation and Monte Carlo analysis of electron-electron interactions on electron distributions in nonthermal cold plasmas

    SciTech Connect

    Yousfi, M.; Himoudi, A.; Gaouar, A.

    1992-12-15

    Electron distribution functions in nonthermal cold plasmas generated by classical electrical discharges have been calculated from a powerful Boltzmann equation solution and an original Monte Carlo simulation. In these two methods both classical (i.e., elastic, inelastic, and superelastic) electron-atom (or molecule) collisions and electron-electron interactions are taken into account. The approximations considered to include long-range (electron-electron) and short-range (electron-atom) interactions in the same Monte Carlo algorithm are first validated by comparison with Boltzmann equation results. Then, the influence of electron-electron interactions on electron distribution functions, swarm parameters, and reaction rates under nonthermal cold plasma conditions is analyzed and discussed as a function of the reduced electric field E/N and the ionization degree n_e/N.

  13. Acoustic effects analysis utilizing speckle pattern with fixed-particle Monte Carlo

    NASA Astrophysics Data System (ADS)

    Vakili, Ali; Hollmann, Joseph A.; Holt, R. Glynn; DiMarzio, Charles A.

    2016-03-01

    Optical imaging in a turbid medium is limited because of the multiple scattering a photon undergoes while traveling through the medium. Therefore, optical imaging is unable to provide high-resolution information deep in the medium. In the case of soft tissue, acoustic waves, unlike light, can travel through the medium with negligible scattering. However, acoustic waves cannot provide medically relevant contrast as good as that of light. Hybrid solutions have been applied to exploit the benefits of both imaging methods. A focused acoustic wave generates a force inside an acoustically absorbing medium known as acoustic radiation force (ARF). ARF induces particle displacement within the medium. The amount of displacement is a function of the mechanical properties of the medium and the applied force. To monitor the displacement induced by the ARF, speckle pattern analysis can be used. The speckle pattern is the result of interfering optical waves with different phases. As light travels through the medium, it undergoes several scattering events. Hence, it generates different scattering paths, which depend on the locations of the particles. Light waves that travel along these paths have different phases (different optical path lengths). ARF induces displacement of scatterers within the acoustic focal volume and changes the optical path lengths. In addition, the temperature rise due to conversion of absorbed acoustic energy to heat changes the index of refraction and, therefore, the optical path lengths of the scattering paths. The result is a change in the speckle pattern. Results suggest that the average change in the speckle pattern measures the displacement of particles and the temperature rise within the acoustic focal area, and hence can provide the mechanical and thermal properties of the medium.

  14. Parallelizing Monte Carlo with PMC

    SciTech Connect

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.

  15. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high-dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  16. SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON

    SciTech Connect

    Goluoglu, Sedat; Bekar, Kursat B; Wiarda, Dorothea

    2012-01-01

    The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.

  17. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
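
    The two initial-design choices compared above differ only in where each stratum's point is placed. The short Python sketch below generates both (the optimization stage of OLHS is beyond this sketch); the implementation is a generic textbook LHS, not the authors' code.

```python
# Random LHS places a uniform point inside each stratum;
# midpoint LHS uses the stratum centre.
import numpy as np

def lhs(n, d, midpoint=False, seed=0):
    rng = np.random.default_rng(seed)
    # One random permutation of the n strata per dimension.
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    offset = 0.5 if midpoint else rng.random((n, d))
    return (perms + offset) / n  # points in [0, 1)^d

print(lhs(5, 2, midpoint=True))   # stratum centres
print(lhs(5, 2, midpoint=False))  # random points within strata
```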

  18. Behavioral Analysis of Visitors to a Medical Institution’s Website Using Markov Chain Monte Carlo Methods

    PubMed Central

    Tani, Yuji

    2016-01-01

    Background: Consistent with the "attention, interest, desire, memory, action" (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet to select information for their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs for visitor search behavior. At this time, research applying the patient searching behavior model to medical institution website visitors is lacking. Objective: We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. Methods: We used the website access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 30, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explanatory variable, we built a binomial probit model that allows inspection of the contents of each purpose variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformative prior distribution for this model and determined the visit probability classified by keyword for each category. Results: In the case of the keyword "clinic name," the visit probability to the website, repeated visits to the website, and the contents page for medical examination was positive. In the case of the

  19. Isotropic Monte Carlo Grain Growth

    Energy Science and Technology Software Center (ESTSC)

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.

  20. Predicting hourly air pollutant levels using artificial neural networks coupled with uncertainty analysis by Monte Carlo simulations.

    PubMed

    Arhami, Mohammad; Kamali, Nima; Rajabi, Mohammad Mahdi

    2013-07-01

    Recent progress in developing artificial neural network (ANN) metamodels has paved the way for reliable use of these models in the prediction of air pollutant concentrations in urban atmosphere. However, improvement of prediction performance, proper selection of input parameters and model architecture, and quantification of model uncertainties remain key challenges to their practical use. This study has three main objectives: to select an ensemble of input parameters for ANN metamodels consisting of meteorological variables that are predictable by conventional weather forecast models and variables that properly describe the complex nature of pollutant source conditions in a major city, to optimize the ANN models to achieve the most accurate hourly prediction for a case study (city of Tehran), and to examine a methodology to analyze uncertainties based on ANN and Monte Carlo simulations (MCS). In the current study, the ANNs were constructed to predict criteria pollutants of nitrogen oxides (NOx), nitrogen dioxide (NO2), nitrogen monoxide (NO), ozone (O3), carbon monoxide (CO), and particulate matter with aerodynamic diameter of less than 10 μm (PM10) in Tehran based on the data collected at a monitoring station in the densely populated central area of the city. The best combination of input variables was comprehensively investigated taking into account the predictability of meteorological input variables and the study of model performance, correlation coefficients, and spectral analysis. Among numerous meteorological variables, wind speed, air temperature, relative humidity and wind direction were chosen as input variables for the ANN models. The complex nature of pollutant source conditions was reflected through the use of hour of the day and month of the year as input variables and the development of different models for each day of the week. After that, ANN models were constructed and validated, and a methodology of computing prediction intervals (PI) and

  1. Neutron analysis of spent fuel storage installation using parallel computing and advance discrete ordinates and Monte Carlo techniques.

    PubMed

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinates PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous-energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous-energy cross sections, and the fact that the CASK library is based on the old ENDF

  2. Monte Carlo dose mapping on deforming anatomy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Siebers, Jeffrey V.

    2009-10-01

    This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Unlike dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom and compared on clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM mapped the dose exactly as its analytic solution, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3%, respectively, over the patient's entire dose region. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to that of 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA.
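
    The core idea, pushing deposited energy and voxel mass through the same mapping and only then re-forming dose as energy/mass on the reference grid, can be sketched in a few lines. The 1-D grid and displacement field below are toy assumptions, not the paper's 3-D implementation.

```python
# Schematic energy-and-mass congruent mapping on a toy 1-D voxel grid.
import numpy as np

n = 10
energy = np.random.default_rng(5).random(n)  # deposited energy per source voxel
mass = np.full(n, 1e-3)                      # source voxel masses [kg]
# Toy DVF: target (reference) voxel index for each source voxel.
dvf = np.clip(np.arange(n) + np.array([0, 0, 1, 1, 1, 0, 0, -1, -1, 0]), 0, n - 1)

# Push energy and mass through the SAME mapping, then divide on the target grid.
e_ref = np.zeros(n)
m_ref = np.zeros(n)
np.add.at(e_ref, dvf, energy)
np.add.at(m_ref, dvf, mass)
dose_ref = np.divide(e_ref, m_ref, out=np.zeros(n), where=m_ref > 0)  # [Gy]
print(dose_ref)
```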

  3. An analysis of the OI 1304 A dayglow using a Monte Carlo resonant scattering model with partial frequency redistribution

    NASA Technical Reports Server (NTRS)

    Meier, R. R.; Lee, J.-S.

    1982-01-01

    The transport of resonance radiation under optically thick conditions is shown to be accurately described by a Monte Carlo model of the atomic oxygen 1304 A airglow triplet in which partial frequency redistribution, temperature gradients, pure absorption and multilevel scattering are accounted for. All features of the data can be explained by photoelectron impact excitation and the resonant scattering of sunlight, where the latter source dominates below 100 km and above 500 km and is stronger at intermediate altitudes than previously thought. It is concluded that the OI 1304 A emission can be used in studies of excitation processes and atomic oxygen densities in planetary atmospheres.

  4. Improved method for implicit Monte Carlo

    SciTech Connect

    Brown, F. B.; Martin, W. R.

    2001-01-01

    The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed in Reference [3], two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.

  5. VESTA 2.1.5 - Monte Carlo Depletion Interface Code; AURORA 1.0.0 - Depletion Analysis Tool.

    Energy Science and Technology Software Center (ESTSC)

    2013-03-21

    Version 01 RSICC is authorized to distribute VESTA 2.1.5 for research and education purposes only. Requesters from NEA Data Bank member countries are advised to order VESTA 2.1.5 from the NEA Data Bank. Non-commercial and non-profit users from other OECD member countries (specifically Canada and the United States) may order VESTA 2.1.5 from RSICC. Users from non-OECD member countries and all commercial requesters are advised to contact the IRSN. VESTA is a Monte Carlo depletion interface code that is currently under development at IRSN (France). From its inception, VESTA is intended to be a “generic” interface code so that it will ultimately be capable of using any Monte-Carlo code or depletion module and that can be completely tailored to the user’s needs on practically all aspects of the code. For the current version, VESTA allows for the use of any version of MCNP(X) as the transport module and ORIGEN 2.2 or the built in PHOENIX module as the depletion module. A short overview of the main features of this version of the code is detailed in the Abstract.

  6. VESTA 2.1.5 - Monte Carlo Depletion Interface Code; AURORA 1.0.0 - Depletion Analysis Tool.

    SciTech Connect

    HAECK, WIM

    2013-03-21

    Version 01 RSICC is authorized to distribute VESTA 2.1.5 for research and education purposes only. Requesters from NEA Data Bank member countries are advised to order VESTA 2.1.5 from the NEA Data Bank. Non-commercial and non-profit users from other OECD member countries (specifically Canada and the United States) may order VESTA 2.1.5 from RSICC. Users from non-OECD member countries and all commercial requesters are advised to contact the IRSN. VESTA is a Monte Carlo depletion interface code that is currently under development at IRSN (France). From its inception, VESTA is intended to be a “generic” interface code so that it will ultimately be capable of using any Monte-Carlo code or depletion module and that can be completely tailored to the user’s needs on practically all aspects of the code. For the current version, VESTA allows for the use of any version of MCNP(X) as the transport module and ORIGEN 2.2 or the built in PHOENIX module as the depletion module. A short overview of the main features of this version of the code is detailed in the Abstract.

  7. Exhaustive Metropolis Monte Carlo sampling and analysis of polyalanine conformations adopted under the influence of hydrogen bonds.

    PubMed

    Podtelezhnikov, Alexei A; Wild, David L

    2005-10-01

    We propose a novel Metropolis Monte Carlo procedure for protein modeling and analyze the influence of hydrogen bonding on the distribution of polyalanine conformations. We use an atomistic model of the polyalanine chain with rigid and planar polypeptide bonds, and elastic alpha carbon valence geometry. We adopt a simplified energy function in which only hard-sphere repulsion and hydrogen bonding interactions between the atoms are considered. Our Metropolis Monte Carlo procedure utilizes local crankshaft moves and is combined with parallel tempering to exhaustively sample the conformations of 16-mer polyalanine. We confirm that Flory's isolated-pair hypothesis (the steric independence between the dihedral angles of individual amino acids) does not hold true in long polypeptide chains. In addition to 3(10)- and alpha-helices, we identify a kink stabilized by 2 hydrogen bonds with a shared acceptor as a common structural motif. Varying the strength of hydrogen bonds, we induce the helix-coil transition in the model polypeptide chain. We compare the propensities for various hydrogen bonding patterns and determine the degree of cooperativity of hydrogen bond formation in terms of the Hill coefficient. The observed helix-coil transition is also quantified according to Zimm-Bragg theory. PMID:16049911
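
    The two sampling ingredients named here, local Metropolis moves and parallel tempering, can be sketched generically. This minimal Python illustration of the acceptance rules uses a toy double-well energy in place of the atomistic polyalanine model and a Gaussian move in place of the crankshaft rotation:

        import numpy as np

        rng = np.random.default_rng(0)

        def metropolis_step(state, energy, beta, propose, energy_fn):
            """One local Metropolis move (a crankshaft rotation in the paper)."""
            trial = propose(state)
            e_trial = energy_fn(trial)
            # Accept with probability min(1, exp(-beta * dE)).
            if e_trial <= energy or rng.random() < np.exp(-beta * (e_trial - energy)):
                return trial, e_trial
            return state, energy

        def try_swap(states, energies, betas, i, j):
            """Parallel-tempering exchange between replicas i and j."""
            delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
            if delta >= 0 or rng.random() < np.exp(delta):
                states[i], states[j] = states[j], states[i]
                energies[i], energies[j] = energies[j], energies[i]

        # Toy demo: three replicas sampling a double-well potential.
        energy_fn = lambda x: (x * x - 1.0) ** 2
        propose = lambda x: x + rng.normal(0.0, 0.3)
        betas = [0.5, 2.0, 8.0]
        states = [0.0, 0.0, 0.0]
        energies = [energy_fn(s) for s in states]
        for step in range(10000):
            for r in range(3):
                states[r], energies[r] = metropolis_step(
                    states[r], energies[r], betas[r], propose, energy_fn)
            if step % 10 == 0:
                i, j = rng.choice(3, size=2, replace=False)
                try_swap(states, energies, betas, i, j)
        print("final replica states:", states)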

  8. Quasi-Monte Carlo integration

    SciTech Connect

    Morokoff, W.J.; Caflisch, R.E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than 1/2 in high dimensions. 21 refs., 6 figs., 5 tabs.

  9. Quasi-Monte Carlo Integration

    NASA Astrophysics Data System (ADS)

    Morokoff, William J.; Caflisch, Russel E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than 1/2 in high dimensions.
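
    The pseudo-random versus quasi-random comparison described in these two records can be reproduced in miniature. The sketch below builds Halton points from van der Corput sequences and compares integration errors on a smooth separable test integrand; the integrand, dimension and sample size are arbitrary illustrative choices, not those of the paper:

        import numpy as np

        def van_der_corput(n, base):
            """First n points of the van der Corput sequence in the given base."""
            seq = np.zeros(n)
            for i in range(n):
                f, k = 1.0, i + 1
                while k > 0:
                    f /= base
                    seq[i] += f * (k % base)
                    k //= base
            return seq

        def halton(n, dim, primes=(2, 3, 5, 7, 11, 13)):
            """n quasi-random points in [0,1]^dim from per-dimension bases."""
            return np.column_stack([van_der_corput(n, primes[d]) for d in range(dim)])

        # Estimate the integral of prod_d 2*x_d over [0,1]^4 (exact value 1).
        f = lambda x: np.prod(2.0 * x, axis=1)
        n, dim = 4096, 4
        rng = np.random.default_rng(1)
        print("pseudo-random error:", abs(f(rng.random((n, dim))).mean() - 1.0))
        print("Halton error:       ", abs(f(halton(n, dim)).mean() - 1.0))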

  10. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs is provided along with example data plots.

  11. Predictive uncertainty analysis of a highly heterogeneous field-scale groundwater model using null-space Monte Carlo

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2011-12-01

    Quantification of prediction uncertainty resulting from estimated parameters is critical to provide accurate predictive models for field-scale groundwater flow and transport problems. We examine and compare two approaches to defining predictive uncertainty, where both approaches utilize pilot points to parameterize spatially heterogeneous fields. The first approach is the independent calibration of multiple initial "seed" fields created through geostatistical simulation and conditioned to observation data, resulting in an ensemble of calibrated property fields that defines uncertainty in the calibrated parameters. The second approach is the null-space Monte Carlo (NSMC) method that employs a decomposition of the Jacobian matrix from a single calibration to define a minimum number of linear combinations of parameters that account for the majority of the sensitivity of the overall calibration to the observed data. Random vectors are applied to the remaining linear combinations of parameters, the null space, to create an ensemble of fields, each of which remains calibrated to the data. We compare these two approaches using a highly parameterized groundwater model of the Culebra dolomite in southeastern New Mexico. Observation data include two decades of steady-state head measurements and pumping test results. The predictive performance measure is advective travel time from a point to a prescribed boundary. Calibrated parameters at a set of pilot points include transmissivity, the horizontal hydraulic anisotropy, the storativity, and a section of recharge (>1200 parameters in total). First, we calibrate 200 random seed fields generated through geostatistical simulation and conditioned to observation data. The 11 fields that contain the best and worst scenarios in terms of calibration and travel time analysis among the best 100 calibrated results provide a basis for the NSMC method. The NSMC method is used to generate 200 calibration-constrained parameter fields
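
    The null-space step of the NSMC method amounts to perturbing a calibrated parameter vector only along directions to which the observations are, to first order, insensitive. A minimal Python sketch under simplifying assumptions (a dense Jacobian and a user-chosen solution-space dimension; all names are hypothetical):

        import numpy as np

        def nsmc_fields(jacobian, p_calib, n_fields, n_solution, scale=1.0, seed=0):
            """Generate calibration-preserving parameter sets via the null space.

            jacobian   : (n_obs, n_par) sensitivity matrix from one calibration.
            n_solution : number of leading singular vectors kept as the solution
                         space; the remainder spans the approximate null space.
            """
            rng = np.random.default_rng(seed)
            _, _, vt = np.linalg.svd(jacobian, full_matrices=True)
            null_basis = vt[n_solution:].T        # (n_par, n_par - n_solution)
            coeffs = rng.normal(0.0, scale, (n_fields, null_basis.shape[1]))
            # Perturbations confined to the null space leave the fit to the
            # observations unchanged to first order.
            return p_calib + coeffs @ null_basis.T

        J = np.random.default_rng(1).normal(size=(20, 60))   # 20 obs, 60 parameters
        fields = nsmc_fields(J, np.zeros(60), n_fields=5, n_solution=20)
        print("max first-order data misfit:", np.abs(J @ fields.T).max())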

  12. Monte Carlo simulation for the transport beamline

    SciTech Connect

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.

  13. A Monte Carlo based lookup table for spectrum analysis of turbid media in the reflectance probe regime

    SciTech Connect

    Xiang Wen; Xiewei Zhong; Tingting Yu; Dan Zhu

    2014-07-31

    Fibre-optic diffuse reflectance spectroscopy offers a method for characterising phantoms of biotissue with specified optical properties. For a commercial reflectance probe (six source fibres surrounding a central collection fibre with an inter-fibre spacing of 480 μm; R400-7, Ocean Optics, USA) we have constructed a Monte Carlo based lookup table to create a function called getR(μ_a, μ'_s), where μ_a is the absorption coefficient and μ'_s is the reduced scattering coefficient. Experimental measurements of reflectance from homogeneous calibrated phantoms with given optical properties are compared with the predicted reflectance from the lookup table. The deviation between experiment and prediction is on average 12.1%. (laser biophotonics)
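
    Such a lookup-table function is, in essence, a two-dimensional interpolator over a grid of precomputed Monte Carlo reflectances. A minimal Python sketch, with an arbitrary stand-in array in place of the real MC output and assumed coefficient ranges:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical grid: Monte Carlo reflectance R[i, j] precomputed at
        # absorption coefficients mua[i] and reduced scattering musp[j].
        mua = np.linspace(0.01, 10.0, 50)     # 1/cm, assumed range
        musp = np.linspace(1.0, 40.0, 50)     # 1/cm, assumed range
        R = np.random.default_rng(2).random((50, 50))   # stand-in for MC output

        getR = RegularGridInterpolator((mua, musp), R)

        # Predicted probe reflectance for given optical properties; fitting
        # measured spectra then reduces to inverting this function.
        print(getR([[0.5, 12.0]]))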

  14. Analysis of Quantum Monte Carlo Dynamics in Infinite-Range Ising Spin Systems:. Theory and its Possible Applications

    NASA Astrophysics Data System (ADS)

    Inoue, Jun-Ichi

    2013-09-01

    In terms of the stochastic process of a quantum-mechanical variant of the Markov chain Monte Carlo method based on the Suzuki-Trotter decomposition, we analytically derive deterministic flows of order parameters such as magnetization in infinite-range (mean-field-like) quantum spin systems. Under the static approximation, differential equations with respect to order parameters are explicitly obtained from the Master equation that describes the microscopic law in the corresponding classical system. We discuss several possible applications of our approach to research topics such as image processing and neural networks. This paper is written as a self-review of two papers [1,2] for the Symposium on Interface between Quantum Information and Statistical Physics at Kinki University in Osaka, Japan.

  15. Periodic structures in the Franck-Hertz experiment with neon: Boltzmann equation and Monte-Carlo analysis

    NASA Astrophysics Data System (ADS)

    White, R. D.; Robson, R. E.; Nicoletopoulos, P.; Dujko, S.

    2012-05-01

    The Franck-Hertz experiment with neon gas is modelled as an idealised steady-state Townsend experiment and analysed theoretically using (a) a multi-term solution of the Boltzmann equation and (b) Monte-Carlo simulation. Theoretically calculated periodic electron structures, together with the 'window' of reduced fields in which they occur, are compared with experiment, and it is explained why it is necessary to account for all competing scattering processes in order to explain the observed experimental 'wavelength'. The study highlights the fundamental flaws in trying to explain the observations in terms of a single, assumed dominant electronic excitation process, as is the case in textbooks and a myriad of misleading websites.

  16. Performance analysis of short-range NLOS UV communication system using Monte Carlo simulation based on measured channel parameters.

    PubMed

    Luo, Pengfei; Zhang, Min; Han, Dahai; Li, Qing

    2012-10-01

    The research presented in this paper is a performance study of a short-range NLOS ultraviolet (UV) communication system, using a Monte-Carlo-based system-level model in which the channel parameters, such as the path loss and the background noise, are experimentally measured using an outdoor UV communication test-bed. Various transceiver geometries and background noise conditions are considered. Furthermore, four modulation schemes are compared, which provides an insight into the performance prediction and the system trade-offs among the path loss, the optical power, the distance, the link geometry, the bit rate and the bit error rate. Finally, advice is given on UV system design and performance improvement. PMID:23188312

  17. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E. ); Jarrell, M. . Dept. of Physics)

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the 'Analytical Continuation Problem'. For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey 'Kondo Universality'. 24 refs., 7 figs.

  18. A fast Monte Carlo EM algorithm for estimation in latent class model analysis with an application to assess diagnostic accuracy for cervical neoplasia in women with AGC

    PubMed Central

    Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan

    2013-01-01

    In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust the information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach yields standard error estimates similar to those of the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493

  19. Facing Challenges for Monte Carlo Analysis of Full PWR Cores : Towards Optimal Detail Level for Coupled Neutronics and Proper Diffusion Data for Nodal Kinetics

    NASA Astrophysics Data System (ADS)

    Nuttin, A.; Capellan, N.; David, S.; Doligez, X.; El Mhari, C.; Méplan, O.

    2014-06-01

    Safety analysis of innovative reactor designs requires three-dimensional modeling to ensure a sufficiently realistic description, starting from steady state. Modern Monte Carlo (MC) neutron transport codes are suitable candidates to simulate large complex geometries, possibly with innovative fuels. But if local values such as power densities over small regions are needed, reliable results become more difficult to obtain within an acceptable computation time. In this context, the NEA has proposed a performance test of full PWR core calculations based on Monte Carlo neutron transport, which we have used to define an optimal detail level for convergence of steady-state coupled neutronics. Coupling between MCNP for neutronics and the subchannel code COBRA for thermal-hydraulics has been performed using the C++ tool MURE, developed over about ten years at LPSC and IPNO. In parallel with this study and within the same MURE framework, a simplified nodal kinetics code based on two-group and few-point diffusion equations has been developed and validated on a typical CANDU LOCA. Methods for the computation of the necessary diffusion data have been defined and applied to NU (natural uranium) and Th fuel CANDU after assembly evolutions by MURE. The simplicity of the CANDU LOCA model has made possible a comparison of these two fuel behaviours during such a transient.

  20. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (~200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  1. Monte Carlo modeling of exospheric bodies - Mercury

    NASA Technical Reports Server (NTRS)

    Smith, G. R.; Broadfoot, A. L.; Wallace, L.; Shemansky, D. E.

    1978-01-01

    In order to study the interaction with the surface, a Monte Carlo program is developed to determine the distribution with altitude as well as the global distribution of density at the surface in a single operation. The analysis presented shows that the appropriate source distribution should be Maxwell-Boltzmann flux if the particles in the distribution are to be treated as components of flux. Monte Carlo calculations with a Maxwell-Boltzmann flux source are compared with Mariner 10 UV spectrometer data. Results indicate that the presently operating models are not capable of fitting the observed Mercury exosphere. It is suggested that an atmosphere calculated with a barometric source distribution is suitable for more realistic future exospheric models.

  2. Monte Carlo calculations of nuclei

    SciTech Connect

    Pieper, S.C.

    1997-10-01

    Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

  3. Multilevel sequential Monte Carlo samplers

    DOE PAGESBeta

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
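
    The telescoping identity at the heart of MLMC is compact enough to sketch directly. Below, a toy level sampler whose corrections shrink geometrically stands in for the coupled fine/coarse PDE solves; the level count and per-level sample sizes are arbitrary:

        import numpy as np

        rng = np.random.default_rng(3)

        def level_correction(level, n):
            """Stand-in for samples of P_l - P_{l-1} (P_0 itself at level 0);
            variances shrink with level, mimicking a converging discretisation."""
            if level == 0:
                return rng.normal(1.0, 1.0, n)
            return rng.normal(2.0 ** -level, 2.0 ** -level, n)

        # Telescoping identity: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
        # with more samples on the cheap, high-variance coarse levels.
        n_per_level = [4000, 2000, 1000, 500, 250]
        estimate = sum(level_correction(l, n).mean()
                       for l, n in enumerate(n_per_level))
        print("MLMC estimate:", estimate)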

  4. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
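
    For orientation, the serial kMC loop that such a parallel algorithm generalizes is the residence-time (BKL) scheme: advance the clock by an exponential waiting time set by the total rate, then pick an event in proportion to its rate. A minimal Python sketch with a fixed, hypothetical event catalogue (a real model would update the rates after every event):

        import numpy as np

        rng = np.random.default_rng(4)

        def kmc_run(rates, t_end):
            """Serial residence-time kMC over a fixed event catalogue."""
            t, history = 0.0, []
            total = rates.sum()
            while t < t_end:
                # Exponentially distributed residence time set by the total rate.
                t += rng.exponential(1.0 / total)
                # Choose an event with probability proportional to its rate.
                event = rng.choice(rates.size, p=rates / total)
                history.append((t, event))
            return history

        # Hypothetical catalogue of three event types with constant rates.
        events = kmc_run(np.array([1.0, 0.5, 0.1]), t_end=50.0)
        print(len(events), "events executed")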

  5. SCALE Monte Carlo Eigenvalue Methods and New Advancements

    SciTech Connect

    Goluoglu, Sedat; Leppanen, Jaakko; Petrie Jr, Lester M; Dunn, Michael E

    2010-01-01

    SCALE code system is developed and maintained by Oak Ridge National Laboratory to perform criticality safety, reactor analysis, radiation shielding, and spent fuel characterization for nuclear facilities and transportation/storage package designs. SCALE is a modular code system that includes several codes which use either Monte Carlo or discrete ordinates solution methodologies for solving relevant neutral particle transport equations. This paper describes some of the key capabilities of the Monte Carlo criticality safety codes within the SCALE code system.

  6. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  7. Enhancements in Continuous-Energy Monte Carlo Capabilities in SCALE

    SciTech Connect

    Bekar, Kursat B; Celik, Cihangir; Wiarda, Dorothea; Peplow, Douglas E.; Rearden, Bradley T; Dunn, Michael E

    2013-01-01

    Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.

  8. Final Report for Dynamic Models for Causal Analysis of Panel Data. The Impact of Measurement Error in the Analysis of Log-Linear Rate Models: Monte Carlo Findings. Part III, Chapter 4.

    ERIC Educational Resources Information Center

    Carroll, Glenn R.; And Others

    This document is part of a series of chapters described in SO 011 759. The chapter advocates the analysis of event-histories (data giving the number, timing, and sequence of changes in a categorical dependent variable) with maximum likelihood estimators (MLE) applied to log-linear rate models. Results from a Monte Carlo investigation of the impact…

  9. SPECIAL ISSUE DEVOTED TO MULTIPLE RADIATION SCATTERING IN RANDOM MEDIA: Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    NASA Astrophysics Data System (ADS)

    Bashkatov, A. N.; Genina, Elina A.; Kochubei, V. I.; Tuchin, Valerii V.

    2006-12-01

    Based on the digital image analysis and inverse Monte-Carlo method, the proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates.

  10. Monte Carlo verification of polymer gel dosimetry applied to radionuclide therapy: a phantom study

    NASA Astrophysics Data System (ADS)

    Gear, J. I.; Charles-Edwards, E.; Partridge, M.; Flux, G. D.

    2011-11-01

    This study evaluates the dosimetric performance of the polymer gel dosimeter 'Methacrylic and Ascorbic acid in Gelatin, initiated by Copper' and its suitability for quality assurance and analysis of I-131-targeted radionuclide therapy dosimetry. Four batches of gel were manufactured in-house and sets of calibration vials and phantoms were created containing different concentrations of I-131-doped gel. Multiple dose measurements were made up to 700 h post preparation and compared to equivalent Monte Carlo simulations. In addition to uniformly filled phantoms the cross-dose distribution from a hot insert to a surrounding phantom was measured. In this example comparisons were made with both Monte Carlo and a clinical scintigraphic dosimetry method. Dose-response curves generated from the calibration data followed a sigmoid function. The gels appeared to be stable over many weeks of internal irradiation with a delay in gel response observed at 29 h post preparation. This was attributed to chemical inhibitors and slow reaction rates of long-chain radical species. For this reason, phantom measurements were only made after 190 h of irradiation. For uniformly filled phantoms of I-131 the accuracy of dose measurements agreed to within 10% when compared to Monte Carlo simulations. A radial cross-dose distribution measured using the gel dosimeter compared well to that calculated with Monte Carlo. Small inhomogeneities were observed in the dosimeter attributed to non-uniform mixing of monomer during preparation. However, they were not detrimental to this study where the quantitative accuracy and spatial resolution of polymer gel dosimetry were far superior to that calculated using scintigraphy. The difference between Monte Carlo and gel measurements was of the order of a few cGy, whilst with the scintigraphic method differences of up to 8 Gy were observed. A manipulation technique is also presented which allows 3D scintigraphic dosimetry measurements to be compared to polymer
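
    The sigmoid dose-response calibration mentioned in this abstract is a routine curve fit. A Python sketch with a generic four-parameter logistic and invented calibration numbers (the paper's data are not reproduced here):

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(dose, r_min, r_max, d50, slope):
            """Generic four-parameter logistic dose-response curve."""
            return r_min + (r_max - r_min) / (1.0 + np.exp(-(dose - d50) / slope))

        # Invented calibration points: gel response versus absorbed dose.
        dose = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0, 60.0])   # Gy
        resp = np.array([1.1, 1.3, 1.9, 3.0, 4.6, 5.4, 5.6])       # a.u.

        params, _ = curve_fit(sigmoid, dose, resp, p0=[1.0, 6.0, 10.0, 5.0])
        print("fitted r_min, r_max, d50, slope:", params)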

  11. Integrated Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f_NL) in the recent CMB data

    SciTech Connect

    Kim, Jaiseung

    2011-04-01

    We have made a Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f_NL) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained f_NL and cosmological parameters so that the uncertainties of cosmological parameters can properly propagate into the f_NL estimation. Investigating the parameter likelihoods deduced from MCMC samples, we find a slight deviation from Gaussian shape, which makes a Fisher matrix estimation less accurate. Therefore, we have estimated the confidence interval of f_NL by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis are in good agreement with other results, but the confidence interval is slightly different.

  12. Performance of an ARC-enabled computing grid for ATLAS/LHC physics analysis and Monte Carlo production under realistic conditions

    NASA Astrophysics Data System (ADS)

    Samset, B. H.; Cameron, D.; Ellert, M.; Filipcic, A.; Gronager, M.; Kleist, J.; Maffioletti, S.; Ould-Saada, F.; Pajchel, K.; Read, A. L.; Taga, A.; ATLAS Collaboration

    2010-04-01

    A significant amount of the computing resources available to the ATLAS experiment at the LHC are connected via the ARC grid middleware. ATLAS ARC-enabled resources, which consist of both major computing centers at the Tier-1 level and lesser, local clusters at the Tier-2 and Tier-3 levels, have shown excellent performance running heavy Monte Carlo (MC) production for the experiment. However, with the imminent arrival of LHC physics data, it is imperative that the deployed grid middlewares can also handle data access patterns caused by user-defined physics analysis. These user-defined jobs can have radically different demands than systematic, centrally controlled MC production. We report on the performance of the ARC middleware, as deployed for ATLAS, for realistic situations with concurrent MC production and physics analysis running on the same resources. Data access patterns for ATLAS MC and physics analysis grid jobs are shown, together with the performance of various possible storage and file staging models.

  13. Development of Monte Carlo Capability for Orion Parachute Simulations

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. The Monte Carlo tool described in
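
    The dispersion machinery described here (per-input statistical rules, repeated executions of a core simulation, stored outputs) has a simple generic shape. A Python sketch with hypothetical rule names and a mock one-line core simulation standing in for DSS/DSSA/DTVSim:

        import numpy as np

        rng = np.random.default_rng(5)

        # Hypothetical dispersion rules: one entry per simulation input,
        # naming a distribution and its parameters.
        rules = {
            "drag_coeff":   ("normal",  {"loc": 0.80, "scale": 0.05}),
            "release_alt":  ("uniform", {"low": 7500.0, "high": 8500.0}),
            "vehicle_mass": ("normal",  {"loc": 4200.0, "scale": 50.0}),
        }

        def disperse(rules):
            """Draw one dispersed input set from the per-parameter rules."""
            return {k: getattr(rng, dist)(**kw) for k, (dist, kw) in rules.items()}

        def core_simulation(inputs):
            """Mock core simulation; returns a made-up rate of descent."""
            return 9.81 * inputs["vehicle_mass"] ** 0.5 / (inputs["drag_coeff"] * 1e3)

        results = [core_simulation(disperse(rules)) for _ in range(1000)]
        print("rate-of-descent mean/std:", np.mean(results), np.std(results))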

  14. Monte Carlo simulation of intercalated carbon nanotubes.

    PubMed

    Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter

    2007-01-01

    Monte Carlo simulations of the single- and double-walled carbon nanotubes (CNT) intercalated with different metals have been carried out. The interrelation between the length of a CNT, the number and the type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of a specific type. Thus, in the case of a metallic CNT (9,0) with length 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight. Thus, this paper shows that a CNT of a certain size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the larger ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms. PMID:17033783

  15. Monte Carlo model for the analysis and development of III-V Tunnel-FETs and Impact Ionization-MOSFETs

    NASA Astrophysics Data System (ADS)

    Talbo, V.; Mateos, J.; González, T.; Lechaux, Y.; Wichmann, N.; Bollaert, S.; Vasallo, B. G.

    2015-10-01

    Impact-ionization metal-oxide-semiconductor FETs (I-MOSFETs) are in competition with tunnel FETs (TFETs) to achieve the best behaviour for low-power logic circuits. Specifically, III-V I-MOSFETs are being explored as promising devices due to their good reliability, since the impact ionization events happen away from the gate oxide, and their high cutoff frequency, due to high electron mobility. To facilitate the design process from the physical point of view, a Monte Carlo (MC) model which includes both impact ionization and band-to-band tunnelling is presented. Two ungated InGaAs and InAlAs/InGaAs 100 nm PIN diodes have been simulated. In both devices, tunnel processes are more frequent than impact ionizations, so that they are found to be appropriate for TFET structures and not for I-MOSFETs. According to our simulations, other narrow-bandgap candidates for the III-V heterostructure, such as InAs or GaSb, and/or PININ structures must be considered for a correct I-MOSFET design.

  16. DS86 neutron dose: Monte Carlo analysis for depth profile of 152Eu activity in a large stone sample.

    PubMed

    Endo, S; Iwatani, K; Oka, T; Hoshi, M; Shizuma, K; Imanaka, T; Takada, J; Fujita, S; Hasai, H

    1999-06-01

    The depth profile of 152Eu activity induced in a large granite stone pillar by Hiroshima atomic bomb neutrons was calculated by a Monte Carlo N-Particle Transport Code (MCNP). The pillar was on the Motoyasu Bridge, located at a distance of 132 m (WSW) from the hypocenter. It was a square column with a horizontal sectional size of 82.5 cm x 82.5 cm and height of 179 cm. Twenty-one cells from the north to south surface at the central height of the column were specified for the calculation and 152Eu activities for each cell were calculated. The incident neutron spectrum was assumed to be the angular fluence data of the Dosimetry System 1986 (DS86). The angular dependence of the spectrum was taken into account by dividing the whole solid angle into twenty-six directions. The calculated depth profile of specific activity did not agree with the measured profile. A discrepancy was found in the absolute values at each depth with a mean multiplication factor of 0.58 and also in the shape of the relative profile. The results indicated that a reassessment of the neutron energy spectrum in DS86 is required for correct dose estimation. PMID:10494148

  17. Neutron streaming through a slit and duct in concrete shields and comparison with a Monte Carlo analysis

    SciTech Connect

    Hiroyuki, H.; Hideshi, F.; Masatsugu, A.; Shigehiro, A.; Yoshiaki, O.

    1983-08-01

    A series of measurements of about 14-MeV deuterium-tritium neutrons streaming through a slit and a duct in concrete shields has been carried out using a Cockcroft-Walton-type neutron generator. Measured neutron energy spectra are compared with calculations in six configurations of the shields. The configurations are the simplified geometries of streaming paths of tokamak reactors, such as a divertor throat and a neutral beam injection port. The measured data were obtained with an NE-213 liquid scintillator using pulse shape discrimination methods to resolve neutron and gamma-ray pulse height data and using a spectral unfolding code to convert these data to energy spectra. The experiments were analyzed by a Monte Carlo code. The calculated neutron energy spectra slightly underestimate the measured data, especially in the range of 6 to 8 MeV. The agreement between the calculated and measured integral flux above 2.2 MeV ranges from 87.5 to 72% depending on the configurations.

  18. A Monte Carlo Analysis of Weight Data from UF6 Cylinder Feed and Withdrawal Stations

    SciTech Connect

    Garner, James R; Whitaker, J Michael

    2015-01-01

    As nuclear facilities handling uranium hexafluoride (UF6) cylinders (e.g., UF6 production, enrichment, and fuel fabrication) increase in number and throughput, more automated safeguards measures will likely be needed to enable the International Atomic Energy Agency (IAEA) to achieve its safeguards objectives in a fiscally constrained environment. Monitoring the process data from the load cells built into the cylinder feed and withdrawal (F/W) stations (i.e., cylinder weight data) can significantly increase the IAEA’s ability to efficiently achieve the fundamental safeguards task of confirming operations as declared (i.e., no undeclared activities). Researchers at the Oak Ridge National Laboratory, Los Alamos National Laboratory, the Joint Research Center (in Ispra, Italy), and the University of Glasgow are investigating how this weight data can be used for IAEA safeguards purposes while fully protecting the operator’s proprietary and sensitive information related to operations. A key question that must be resolved is: what is the necessary frequency of recording data from the process F/W stations to achieve safeguards objectives? This paper summarizes Monte Carlo simulations of typical feed, product, and tails withdrawal cycles and evaluates longer sampling frequencies to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.

  19. Understanding the lateral dose response functions of high-resolution photon detectors by reverse Monte Carlo and deconvolution analysis.

    PubMed

    Looe, Hui Khee; Harder, Dietrich; Poppe, Björn

    2015-08-21

    The purpose of the present study is to understand the mechanism underlying the perturbation of the field of the secondary electrons, which occurs in the presence of a detector in water as the surrounding medium. By means of 'reverse' Monte Carlo simulation, the points of origin of the secondary electrons contributing to the detector's signal are identified and associated with the detector's mass density, electron density and atomic composition. The spatial pattern of the origin of these secondary electrons, in addition to the formation of the detector signal by components from all parts of its sensitive volume, determines the shape of the lateral dose response function, i.e. of the convolution kernel K(x,y) linking the lateral profile of the absorbed dose in the undisturbed surrounding medium with the associated profile of the detector's signal. The shape of the convolution kernel is shown to vary essentially with the electron density of the detector's material, and to be attributable to the relative contribution by the signal-generating secondary electrons originating within the detector's volume to the total detector signal. Finally, the representation of the over- or underresponse of a photon detector by this density-dependent convolution kernel will be applied to provide a new analytical expression for the associated volume effect correction factor. PMID:26267311
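
    The central object of this analysis, a lateral dose response function acting as the convolution kernel between the true dose profile and the detector signal, is easy to demonstrate in one dimension. A Python sketch with an assumed Gaussian kernel width and field size:

        import numpy as np

        # Assumed 1D lateral dose response function K(x): a Gaussian kernel,
        # broader than a delta function, for a non-water-equivalent detector.
        x = np.linspace(-10.0, 10.0, 401)              # mm
        K = np.exp(-x ** 2 / (2.0 * 1.2 ** 2))
        K /= K.sum()                                    # normalise the kernel

        # True lateral dose profile in water: a 10 mm wide field.
        dose = np.where(np.abs(x) < 5.0, 1.0, 0.0)

        # The measured signal profile is the convolution of dose with K,
        # which blurs the penumbra relative to the true profile.
        signal = np.convolve(dose, K, mode="same")
        print("signal at the field edge:", signal[np.argmin(np.abs(x - 5.0))])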

  20. Modelling cerebral blood oxygenation using Monte Carlo XYZ-PA

    NASA Astrophysics Data System (ADS)

    Zam, Azhar; Jacques, Steven L.; Alexandrov, Sergey; Li, Youzhi; Leahy, Martin J.

    2013-02-01

    Continuous monitoring of cerebral blood oxygenation is critically important for the management of many life-threatening conditions. Non-invasive monitoring of cerebral blood oxygenation with a photoacoustic technique offers advantages over current invasive and non-invasive methods. We introduce Monte Carlo XYZ-PA to model the energy deposition in 3D and the time-resolved pressures and velocity potential based on the energy absorbed by the biological tissue. This paper outlines the benefits of using Monte Carlo XYZ-PA for optimization of photoacoustic measurement and imaging. To the best of our knowledge this is the first fully integrated tool for photoacoustic modelling.

  1. A Monte Carlo Sensitivity Analysis of CF2 and CF Radical Densities in a c-C4F8 Plasma

    NASA Technical Reports Server (NTRS)

    Bose, Deepak; Rauf, Shahid; Hash, D. B.; Govindan, T. R.; Meyyappan, M.

    2004-01-01

    A Monte Carlo sensitivity analysis is used to build a plasma chemistry model for octafluorocyclobutane (c-C4F8), which is commonly used in dielectric etch. Experimental data are used both qualitatively and quantitatively to analyze the gas-phase and gas-surface reactions for neutral radical chemistry. The sensitivity data of the resulting model identify a few critical gas-phase and surface-aided reactions that account for most of the uncertainty in the CF2 and CF radical densities. Electron impact dissociation of small radicals (CF2 and CF) and their surface recombination reactions are found to be the rate-limiting steps in the neutral radical chemistry. The relative rates for these electron impact dissociation and surface recombination reactions are also suggested. The resulting mechanism is able to explain the measurements of CF2 and CF densities available in the literature and also their hollow spatial density profiles.
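
    A Monte Carlo sensitivity analysis of this kind draws the uncertain rate coefficients from their ranges, runs the chemistry model for each draw, and ranks reactions by their influence on the predicted densities. A Python sketch with a toy one-line 'model' and invented coefficient ranges (the real analysis solves a full plasma chemistry set):

        import numpy as np

        rng = np.random.default_rng(6)

        def cf2_density(k):
            """Toy stand-in for the plasma model: a steady-state balance
            between a source (k[0]) and two loss channels (k[1], k[2])."""
            return k[0] / (k[1] + k[2])

        # Sample each rate coefficient log-uniformly over its assumed range.
        n = 5000
        lo, hi = np.log([1e-9, 1e-10, 1e2]), np.log([1e-8, 1e-9, 1e3])
        k_samples = np.exp(rng.uniform(lo, hi, (n, 3)))
        out = np.array([cf2_density(k) for k in k_samples])

        # Rank reactions by correlation of log-rate with log-output.
        for i in range(3):
            r = np.corrcoef(np.log(k_samples[:, i]), np.log(out))[0, 1]
            print(f"reaction {i}: correlation {r:+.2f}")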

  2. STS-1 operational flight profile. Volume 5: Descent cycle 3. Appendix D: GRTLS six degree of freedom Monte Carlo dispersion analysis

    NASA Technical Reports Server (NTRS)

    Montez, M. N.

    1980-01-01

    The results of a six degree of freedom (6-DOF) nonlinear Monte Carlo dispersion analysis for the latest glide return to landing site (GRTLS) abort trajectory for the Space Transportation System 1 Flight are presented. For this GRTLS, the number two main engine fails at 262.5 seconds ground elapsed time. Fifty randomly selected simulations, initialized at external tank separation, are analyzed. The initial covariance matrix is a 20 x 20 matrix and includes navigation errors and dispersions in position and velocity, time, accelerometer bias, and inertial platform misalignments. In all 50 samples, speedbrake, rudder, elevon, and body flap hinge moments are acceptable. Transitions to autoland begin before 9,000 feet and there are no tailscrapes. Navigation-derived dynamic pressure accuracies exceed the flight control system constraints above Mach 2.5. Three out of 50 landings exceeded the tire specification limit speed of 222 knots. Pilot manual landings are expected to reduce landing speed by landing farther downrange.

  3. Three dimensional Monte-Carlo modeling of laser-tissue interaction

    SciTech Connect

    Gentile, N A; Kim, B M; London, R A; Trauner, K B

    1999-03-12

    A full three-dimensional Monte-Carlo program was developed for analysis of laser-tissue interactions. This project was performed as a part of the LATIS3D (3-D laser-tissue interaction) project. The accuracy was verified against results from a public domain two-dimensional axisymmetric program. The code was used for simulation of light transport in a simplified human knee geometry. Using real human knee meshes, which will be extracted from MRI images in the near future, a full analysis of dosimetry and surgical strategies for photodynamic therapy of rheumatoid arthritis will follow.

  4. Monte Carlo simulations of kagome lattices with magnetic dipolar interactions

    NASA Astrophysics Data System (ADS)

    Plumer, Martin; Holden, Mark; Way, Andrew; Saika-Voivod, Ivan; Southern, Byron

    Monte Carlo simulations of classical spins on the two-dimensional kagome lattice with only dipolar interactions are presented. In addition to revealing the sixfold-degenerate ground state, the nature of the finite-temperature phase transition to long-range magnetic order is discussed. Low-temperature states consisting of mixtures of degenerate ground-state configurations separated by domain walls can be explained as a result of competing exchange-like and shape-anisotropy-like terms in the dipolar coupling. Fluctuations between pairs of degenerate spin configurations are found to persist well into the ordered state as the temperature is lowered, until locking in to a low-energy state. Results suggest that the system undergoes a continuous phase transition at T ≈ 0.43 in agreement with previous MC simulations, but the nature of the ordering process differs. Preliminary results which extend this analysis to the 3D fcc ABC-stacked kagome systems will be presented.

  5. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  6. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  7. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

    PubMed Central

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

    2008-01-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  8. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.

    PubMed

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J

    2008-06-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  9. Converging Stereotactic Radiotherapy Using Kilovoltage X-Rays: Experimental Irradiation of Normal Rabbit Lung and Dose-Volume Analysis With Monte Carlo Simulation

    SciTech Connect

    Kawase, Takatsugu; Kunieda, Etsuo; Deloar, Hossain M.; Tsunoo, Takanori; Seki, Satoshi; Oku, Yohei; Saitoh, Hidetoshi; Saito, Kimiaki; Ogawa, Eileen N.; Ishizaka, Akitoshi; Kameyama, Kaori; Kubo, Atsushi

    2009-10-01

    Purpose: To validate the feasibility of developing a radiotherapy unit with kilovoltage X-rays through actual irradiation of live rabbit lungs, and to explore the practical issues anticipated in future clinical application to humans through Monte Carlo dose simulation. Methods and Materials: A converging stereotactic irradiation unit was developed, consisting of a modified diagnostic computed tomography (CT) scanner. A tiny cylindrical volume in 13 normal rabbit lungs was individually irradiated with single fractional absorbed doses of 15, 30, 45, and 60 Gy. Observational CT scanning of the whole lung was performed every 2 weeks for 30 weeks after irradiation. After 30 weeks, histopathologic specimens of the lungs were examined. Dose distribution was simulated using the Monte Carlo method, and dose-volume histograms were calculated according to the data. A trial estimation of the effect of respiratory movement on dose distribution was made. Results: A localized hypodense change and subsequent reticular opacity around the planning target volume (PTV) were observed in CT images of rabbit lungs. Dose-volume histograms of the PTVs and organs at risk showed a focused dose distribution to the target and sufficient dose lowering in the organs at risk. Our estimate of the dose distribution, taking respiratory movement into account, revealed dose reduction in the PTV. Conclusions: A converging stereotactic irradiation unit using kilovoltage X-rays was able to generate a focused radiobiologic reaction in rabbit lungs. Dose-volume histogram analysis and estimated sagittal dose distribution, considering respiratory movement, clarified the characteristics of the irradiation received from this type of unit.

  10. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters, so that, by Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference data base of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE: for example, it provides the spectral index uncertainty without further computations, is computationally stable and detects multimodality.
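
    A minimal sketch of the MCMC idea on a toy trend-plus-white-noise series (the power-law noise component and the automatic step-size tuning of the actual algorithm are omitted; all values are invented): a random-walk Metropolis chain yields point estimates and uncertainties from the same posterior sample.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(520) / 52.0                          # ~10 years of weekly epochs
y = 3.0 + 2.5 * t + rng.normal(0.0, 4.0, t.size)   # offset + trend + white noise (mm)

def log_post(theta):
    a, b, s = theta
    if s <= 0.0:
        return -np.inf                             # flat priors with s > 0
    r = y - a - b * t
    return -y.size * np.log(s) - 0.5 * np.sum(r * r) / (s * s)

x = np.array([0.0, 0.0, 1.0])
lp = log_post(x)
chain = []
for _ in range(30_000):
    prop = x + rng.normal(0.0, [0.3, 0.1, 0.1])    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis acceptance
        x, lp = prop, lp_prop
    chain.append(x.copy())

post = np.array(chain[10_000:])
print(post.mean(axis=0))   # offset, trend, noise amplitude
print(post.std(axis=0))    # their uncertainties, from the same sample
```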

  11. Extending canonical Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Curilef, S.

    2010-02-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions, which successfully reduce the exponential divergence of the decorrelation time τ with increase of the system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
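
    For reference, a plain single-spin-flip Metropolis update for the q-state Potts model on a periodic 2D lattice, the baseline that the extended canonical methods generalize (lattice size and temperature below are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
L, q = 16, 10                 # lattice size; ten-state Potts (illustrative)
beta = 1.0                    # inverse temperature (illustrative)
spins = rng.integers(q, size=(L, L))

def n_equal(s, i, j, val):
    """Number of the four periodic neighbours of (i, j) equal to val."""
    return sum(int(s[(i + di) % L, (j + dj) % L] == val)
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def sweep(s):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        new = rng.integers(q)
        # E = -J * (number of equal-neighbour pairs) with J = 1, so for a
        # single-site update dE = n_equal(old) - n_equal(new)
        dE = n_equal(s, i, j, s[i, j]) - n_equal(s, i, j, new)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = new

for _ in range(100):
    sweep(spins)
pairs = sum(n_equal(spins, i, j, spins[i, j]) for i in range(L) for j in range(L))
print("mean energy per site:", -pairs / (2 * L * L))   # each pair counted twice
```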

  12. Compressible generalized hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Fang, Youhan; Sanz-Serna, J. M.; Skeel, Robert D.

    2014-05-01

    One of the most demanding calculations is to generate random samples from a specified probability distribution (usually with an unknown normalizing prefactor) in a high-dimensional configuration space. One often has to resort to using a Markov chain Monte Carlo method, which converges only in the limit to the prescribed distribution. Such methods typically inch through configuration space step by step, with acceptance of a step based on a Metropolis(-Hastings) criterion. An acceptance rate of 100% is possible in principle by embedding configuration space in a higher dimensional phase space and using ordinary differential equations. In practice, numerical integrators must be used, lowering the acceptance rate. This is the essence of hybrid Monte Carlo methods. Presented is a general framework for constructing such methods under relaxed conditions: the only geometric property needed is (weakened) reversibility; volume preservation is not needed. The possibilities are illustrated by deriving a couple of explicit hybrid Monte Carlo methods, one based on barrier-lowering variable-metric dynamics and another based on isokinetic dynamics.
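
    A minimal sketch of a standard hybrid (Hamiltonian) Monte Carlo step with a leapfrog integrator, targeting a Gaussian; the barrier-lowering variable-metric and isokinetic variants of the paper replace this basic dynamics (step size and trajectory length below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def U(x):        # potential energy: standard Gaussian target in 10 dimensions
    return 0.5 * float(x @ x)

def grad_U(x):
    return x

def hmc_step(x, eps=0.15, n_leap=20):
    p = rng.normal(size=x.shape)                 # fresh momentum each step
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(x_new)           # leapfrog: half kick
    for _ in range(n_leap - 1):
        x_new += eps * p_new                     # drift
        p_new -= eps * grad_U(x_new)             # kick
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(x_new)           # final half kick
    dH = U(x_new) - U(x) + 0.5 * float(p_new @ p_new - p @ p)
    return (x_new, True) if np.log(rng.random()) < -dH else (x, False)

x, acc, samples = np.zeros(10), 0, []
for _ in range(2000):
    x, ok = hmc_step(x)
    acc += ok
    samples.append(x.copy())
print("acceptance rate:", acc / 2000)
print("sample variance (target = 1):", np.var(np.array(samples[500:]), axis=0).mean())
```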

  13. Energy dispersive X-ray fluorescence spectroscopy/Monte Carlo simulation approach for the non-destructive analysis of corrosion patina-bearing alloys in archaeological bronzes: The case of the bowl from the Fareleira 3 site (Vidigueira, South Portugal)

    NASA Astrophysics Data System (ADS)

    Bottaini, C.; Mirão, J.; Figuereido, M.; Candeias, A.; Brunetti, A.; Schiavon, N.

    2015-01-01

    Energy dispersive X-ray fluorescence (EDXRF) is a well-known technique for non-destructive and in situ analysis of archaeological artifacts both in terms of the qualitative and quantitative elemental composition because of its rapidity and non-destructiveness. In this study EDXRF and realistic Monte Carlo simulation using the X-ray Monte Carlo (XRMC) code package have been combined to characterize a Cu-based bowl from the Iron Age burial from Fareleira 3 (Southern Portugal). The artifact displays a multilayered structure made up of three distinct layers: a) alloy substrate; b) green oxidized corrosion patina; and c) brownish carbonate soil-derived crust. To assess the reliability of Monte Carlo simulation in reproducing the composition of the bulk metal of the objects without recurring to potentially damaging patina's and crust's removal, portable EDXRF analysis was performed on cleaned and patina/crust coated areas of the artifact. Patina has been characterized by micro X-ray Diffractometry (μXRD) and Back-Scattered Scanning Electron Microscopy + Energy Dispersive Spectroscopy (BSEM + EDS). Results indicate that the EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + crust surface coating is too thick, X-rays from the alloy substrate are not able to exit the sample.

  14. Exploring Spatiotemporal Patterns of Holocene Carbon Dynamics in Northern Peatlands by Incorporating Bayesian Age-Depth Modeling into Monte-Carlo EOF Analysis

    NASA Astrophysics Data System (ADS)

    Massa, Charly; Yu, Zicheng; Blaauw, Maarten; Loisel, Julie

    2014-05-01

    EOF (Empirical Orthogonal Functions) analysis is a common tool for exploring the spatiotemporal modes of instrumental climate data. Although rarely applied to paleo proxy records, the EOF method is an efficient tool for the detection and analysis of broad-scale patterns of centennial to millennial-scale climate variability. But most paleoclimate records are not annually resolved and have inherent chronological uncertainties that may be problematic using ordinary EOF. Anchukaitis et al. (2012) provided a major step forward in paleo proxy data analysis by adapting EOF to time-uncertain paleoclimate proxy records (Monte-Carlo EOF). However, additional problems may arise for analyzing flux-based paleo parameters, such as peat C accumulation rates, which are strongly dependent to age-depth modeling, that is, small uncertainties in ages may lead to large differences in accumulation rates. Here we present a new approach that combines Bayesian age modeling and Monte-Carlo EOF to analyze flux-based paleo-datasets by thoroughly addressing both chronological and flux measurement uncertainties. This approach, implemented as a suit of linked R functions, overcomes a number of technical challenges, such as the effective handling of large datasets, the reduction of computational requirements for calculating hundreds of thousands of iterations, standardization issues, and EOF computation of gappy data. As a case study we explored the recently published Holocene circum-Arctic peatland database with >100 sites (Loisel et al. in press) to investigate the spatiotemporal patterns and climate controls of peat C accumulation. The approach can be used for other flux-based proxies, such as charcoal influx, erosion rates or atmospheric depositions. Our preliminary results reveal different temporal patterns of C accumulation in major peatland regions, as controlled by various regional climate histories and other bioclimatic factors. For instance, peatlands in continental vs. oceanic settings

  15. Benchmarking of Proton Transport in Super Monte Carlo Simulation Program

    NASA Astrophysics Data System (ADS)

    Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican

    2014-06-01

    The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by FDS Team in China for fusion, fission, and other nuclear applications. The simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biology damage could be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles in broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and full-formed input files of SuperMC is supported by MCAM, which is a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D dataset and geometry model is supported by RVIS, which is a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronic fixed source and critical design parameters calculates for reactors of complex geometry and material distribution based on the transport of neutron and photon have been achieved in our former version of SuperMC. Recently, the proton transport has also been intergrated in SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production processes. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with exitons, preequilibrium model, nucleus explosion model, fission model, and evaporation model are incorporated to treat the intermediate energy nuclear

  16. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2). PMID:27415383
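
    The flat-histogram idea can be sketched in a few lines. The toy below applies a SAMC/Wang-Landau-style decreasing-gain update to recover the one-dimensional density of states of N independent binary spins, for which the exact answer is binomial; the mean-subtraction of the full SAMC update and the multidimensional generalization of the paper are omitted, and all parameters are illustrative.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(5)
N = 20                        # 20 binary spins; "energy" E = number of ones, 0..N
log_g = np.zeros(N + 1)       # running estimate of log g(E)
t0 = 1000.0                   # gain schedule parameter (illustrative)

state = rng.integers(2, size=N)
E = int(state.sum())
for t in range(1, 300_001):
    k = rng.integers(N)                       # propose flipping one spin
    E_new = E + (1 - 2 * int(state[k]))
    # accept with min(1, g(E)/g(E_new)): drives a flat histogram over E
    if np.log(rng.random()) < log_g[E] - log_g[E_new]:
        state[k] ^= 1
        E = E_new
    log_g[E] += t0 / max(t0, float(t))        # decreasing-gain update

g = np.exp(log_g - log_g.max())
g *= 2.0 ** N / g.sum()                       # normalise to the known total count
print(np.round(g[:5]))                        # compare with exact binomials
print([comb(N, e) for e in range(5)])
```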

  17. Monte Carlo surface flux tallies

    SciTech Connect

    Favorite, Jeffrey A

    2010-11-19

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
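
    A toy tally illustrating the issue: for crossing cosines drawn from p(μ) = 2μ (an angular flux linear in μ), E[1/μ] = 2 exactly, and the "half the cutoff" substitution reproduces the grazing-band contribution for this linear flux, while other substitute values bias it. The cutoff and sample size below are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
mu = np.sqrt(rng.random(n))       # crossing cosines with density p(mu) = 2*mu

def tally(mu, cutoff, substitute):
    """Score 1/|mu| per crossing; use 1/(substitute*cutoff) in the grazing band."""
    s = 1.0 / np.maximum(np.abs(mu), 1e-12)
    s[np.abs(mu) < cutoff] = 1.0 / (substitute * cutoff)
    return s.mean()

exact = 2.0                       # E[1/mu] under p(mu) = 2*mu
for sub in (0.5, 2.0 / 3.0):      # "half the cutoff" vs "two-thirds of the cutoff"
    print(f"substitute = {sub:.3f}: tally = {tally(mu, 0.1, sub):.4f}, exact = {exact}")
```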

  18. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g (E ) , of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g (E1,E2) . We show when and why care has to be exercised when obtaining the microcanonical density of states g (E1+E2) from g (E1,E2) .

  19. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    SciTech Connect

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  20. Development of a Space Radiation Monte Carlo Computer Simulation

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence S.

    1997-01-01

    The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.

  1. Four decades of implicit Monte Carlo

    DOE PAGESBeta

    Wollaber, Allan B.

    2016-04-25

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations, explicitly highlighting assumptions as they are made, and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. We also consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  2. Using Monte-Carlo approach for analysis of quantitative and qualitative operation of reservoirs system with regard to the inflow uncertainty

    NASA Astrophysics Data System (ADS)

    Motevalli, Mostafa; Zadbar, Ali; Elyasi, Elham; Jalaal, Maziar

    2015-05-01

    The operation of dam reservoir systems, one of the main sources of surface water, is of particular importance. Because the hydrological and meteorological parameters that drive the water budget in reservoir operation are uncertain, water inflow is treated as the most important hydrological parameter when choosing a comprehensive and optimal operating policy for these systems. The Monte-Carlo approach was applied to study the impact of water inflow on the performance of both single- and multi-reservoir systems. To do so, synthetic monthly inflow time series were generated for each reservoir system, and the probability distributions of time reliability, quantitative reliability, vulnerability, and resiliency criteria were analyzed in five different simulation and optimization models as the system's efficiency criteria. The Karun 3, Karun 4, and Khersan 1 dams were chosen because three dams were needed to set up reservoir systems in both serial and parallel configurations. The analysis of the operation criteria indicated that, for the operation of the whole system, the best quantitative reliability, vulnerability, and resiliency values were obtained with the optimized single-reservoir model, and the best time reliability value with the optimized multi-reservoir model. Moreover, the inflow uncertainty had the minimum impact on the quantitative reliability criterion and the maximum impact on the resiliency criterion.
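
    A minimal sketch of how such efficiency criteria can be estimated by Monte Carlo over synthetic inflows for a single toy reservoir; all volumes, distributions, and the simple mass-balance rule below are invented for illustration and do not represent the cited dams.

```python
import numpy as np

rng = np.random.default_rng(7)
DEMAND, CAPACITY, S0 = 8.0, 30.0, 15.0   # illustrative volumes per month

def simulate(inflow):
    storage, supplied = S0, np.empty(inflow.size)
    for k, q in enumerate(inflow):
        storage = min(storage + q, CAPACITY)     # spill above capacity
        release = min(storage, DEMAND)
        storage -= release
        supplied[k] = release
    fail = supplied < DEMAND
    reliability = 1.0 - fail.mean()                                   # time-based
    vulnerability = (DEMAND - supplied)[fail].mean() if fail.any() else 0.0
    recoveries = np.sum(fail[:-1] & ~fail[1:])                        # failure -> success
    resiliency = recoveries / fail.sum() if fail.any() else 1.0
    return reliability, vulnerability, resiliency

# Monte Carlo over synthetic monthly inflows (lognormal; parameters are invented)
runs = np.array([simulate(rng.lognormal(1.8, 0.6, 240)) for _ in range(1000)])
print("mean reliability, vulnerability, resiliency:", runs.mean(axis=0))
```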

  3. Particle in cell/Monte Carlo collision analysis of the problem of identification of impurities in the gas by the plasma electron spectroscopy method

    NASA Astrophysics Data System (ADS)

    Kusoglu Sarikaya, C.; Rafatov, I.; Kudryavtsev, A. A.

    2016-06-01

    The work deals with Particle in Cell/Monte Carlo Collision (PIC/MCC) analysis of the problem of detection and identification of impurities in the nonlocal plasma of a gas discharge using the Plasma Electron Spectroscopy (PLES) method. For this purpose, a 1d3v PIC/MCC code for numerical simulation of a glow discharge with a nonlocal electron energy distribution function is developed. Elastic, excitation, and ionization collisions between electron-neutral pairs; isotropic scattering and charge exchange collisions between ion-neutral pairs; and Penning ionizations are taken into account. Applicability of the numerical code is verified under radio-frequency capacitively coupled discharge conditions. The efficiency of the code is increased by parallelization using the Open Message Passing Interface. As a demonstration of the PLES method, the parallel PIC/MCC code is applied to a direct current glow discharge in helium doped with a small amount of argon. Numerical results are consistent with the theoretical analysis of the formation of the nonlocal EEDF and with existing experimental data.

  4. Uncertainty of modelled urban peak O3 concentrations and its sensitivity to input data perturbations based on the Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Pineda Rojas, Andrea L.; Venegas, Laura E.; Mazzeo, Nicolás A.

    2016-09-01

    A simple urban air quality model [MODelo de Dispersión Atmosférica Urbana - Generic Reaction Set (DAUMOD-GRS)] was recently developed. One-hour peak O3 concentrations in the Metropolitan Area of Buenos Aires (MABA) during the summer estimated with the DAUMOD-GRS model have shown values lower than 20 ppb (the regional background concentration) in the urban area and levels greater than 40 ppb in its surroundings. Due to the lack of measurements outside the MABA, these relatively high modelled ozone concentrations constitute the only estimate for the area. In this work, a methodology based on the Monte Carlo analysis is implemented to evaluate the uncertainty in these modelled concentrations associated with possible errors in the model input data. Results show that the larger 1-h peak O3 levels in the MABA during the summer present larger uncertainties (up to 47 ppb). On the other hand, multiple linear regression analysis is applied at selected receptors in order to identify the variables explaining most of the obtained variance. Although their relative contributions vary spatially, the uncertainty of the regional background O3 concentration dominates at all the analysed receptors (34.4-97.6%), indicating that its estimation could be improved to enhance the ability of the model to simulate peak O3 concentrations in the MABA.
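
    The two steps, Monte Carlo propagation of input perturbations followed by multiple linear regression to apportion the variance, can be sketched as follows. The response surface and the input distributions below are invented stand-ins, not the DAUMOD-GRS model.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20_000

# perturbed inputs (all distributions are invented for illustration)
background = rng.normal(20.0, 6.0, n)     # regional background O3 (ppb)
emissions  = rng.lognormal(0.0, 0.25, n)  # emission scaling factor
wind       = rng.lognormal(0.8, 0.30, n)  # wind speed (m/s)

peak_o3 = background + 25.0 * emissions / wind   # stand-in 1-h peak O3 response (ppb)
print("peak O3: mean %.1f ppb, std %.1f ppb" % (peak_o3.mean(), peak_o3.std()))

# multiple linear regression on standardised inputs to rank their contributions
X = np.column_stack([background, emissions, wind])
Xs = (X - X.mean(0)) / X.std(0)
ys = (peak_o3 - peak_o3.mean()) / peak_o3.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
share = beta ** 2 / np.sum(beta ** 2)
for name, s in zip(("background", "emissions", "wind"), share):
    print(f"{name}: {100 * s:.1f}% of explained variance")
```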

  5. The effects of LIGO detector noise on a 15-dimensional Markov-chain Monte Carlo analysis of gravitational-wave signals

    NASA Astrophysics Data System (ADS)

    Raymond, V.; van der Sluys, M. V.; Mandel, I.; Kalogera, V.; Röver, C.; Christensen, N.

    2010-06-01

    Gravitational-wave signals from inspirals of binary compact objects (black holes and neutron stars) are primary targets of the ongoing searches by ground-based gravitational-wave (GW) interferometers (LIGO, Virgo and GEO-600). We present parameter estimation results from our Markov-chain Monte Carlo code SPINspiral on signals from binaries with precessing spins. Two data sets are created by injecting simulated GW signals either into synthetic Gaussian noise or into LIGO detector data. We compute the 15-dimensional probability-density functions (PDFs) for both data sets, as well as for a data set containing LIGO data with a known, loud artefact ('glitch'). We show that the analysis of the signal in detector noise yields accuracies similar to those obtained using simulated Gaussian noise. We also find that while the Markov chains from the glitch do not converge, the PDFs would look consistent with a GW signal present in the data. While our parameter estimation results are encouraging, further investigations into how to differentiate an actual GW signal from noise are necessary.

  6. A Monte Carlo Analysis of the Thrust Imbalance for the RSRMV Booster During Both the Ignition Transient and Steady State Operation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty three variables which could impact the performance of the motors during the ignition transient and thirty eight variables which could impact the performance of the motors during steady state operation of the motor were identified and treated as statistical variables for the analyses. The effects of motor to motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.

  7. Application of a XMM-Newton EPIC Monte Carlo to Analysis And Interpretation of Data for Abell 1689, RXJ0658-55 And the Centaurus Clusters of Galaxies

    SciTech Connect

    Andersson, Karl E.; Peterson, J.R.; Madejski, G.M.

    2007-04-17

    We propose a new Monte Carlo method to study extended X-ray sources with the European Photon Imaging Camera (EPIC) aboard XMM-Newton. The Smoothed Particle Inference (SPI) technique, described in a companion paper, is applied here to the EPIC data for the clusters of galaxies Abell 1689, Centaurus and RXJ 0658-55 (the "bullet cluster"). We aim to show the advantages of this method of simultaneous spectral-spatial modeling over traditional X-ray spectral analysis. In Abell 1689 we confirm our earlier findings about structure in temperature distribution and produce a high resolution temperature map. We also confirm our findings about velocity structure within the gas. In the bullet cluster, RXJ 0658-55, we produce the highest resolution temperature map ever to be published of this cluster allowing us to trace what looks like the motion of the bullet in the cluster. We even detect a south to north temperature gradient within the bullet itself. In the Centaurus cluster we detect, by dividing up the luminosity of the cluster in bands of gas temperatures, a striking feature to the north-east of the cluster core. We hypothesize that this feature is caused by a subcluster left over from a substantial merger that slightly displaced the core. We conclude that our method is very powerful in determining the spatial distributions of plasma temperatures and very useful for systematic studies in cluster structure.

  8. Diffusion of oxygen interstitials in UO2+x using kinetic Monte Carlo simulations: Role of O/M ratio and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Behera, Rakesh K.; Watanabe, Taku; Andersson, David A.; Uberuaga, Blas P.; Deo, Chaitanya S.

    2016-04-01

    Oxygen interstitials in UO2+x significantly affect the thermophysical properties and microstructural evolution of the oxide nuclear fuel. In hyperstoichiometric urania (UO2+x), these oxygen interstitials form different types of defect clusters, which have different migration behavior. In this study we have used kinetic Monte Carlo (kMC) to evaluate diffusivities of oxygen interstitials accounting for mono- and di-interstitial clusters. Our results indicate that the predicted diffusivities increase significantly at higher non-stoichiometry (x > 0.01) for di-interstitial clusters compared to a mono-interstitial only model. The diffusivities calculated at higher temperatures compare better with experimental values than at lower temperatures (< 973 K). We discuss the activation energies for diffusion obtained with the mono- and di-interstitial models. We performed a careful sensitivity analysis to estimate the effect of the input di-interstitial binding energies on the predicted diffusivities and activation energies. While this article only discusses mono- and di-interstitials in evaluating oxygen diffusion response in UO2+x, future improvements to the model will primarily focus on including energetic definitions of larger stable interstitial clusters reported in the literature. The addition of larger clusters to the kMC model is expected to improve the comparison of oxygen transport in UO2+x with experiment.
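
    A minimal residence-time (BKL-style) kinetic Monte Carlo sketch for a single hopping species on a 1D lattice, which recovers the textbook diffusivity for uncorrelated hops; the barrier, prefactor, and hop distance are invented, and the cluster kinetics of the actual model are omitted.

```python
import numpy as np

rng = np.random.default_rng(9)
kB, T = 8.617e-5, 1200.0              # Boltzmann constant (eV/K), temperature (K)
nu0, Ea, a = 1.0e13, 1.0, 2.7e-10     # attempt frequency, barrier (eV), hop length (m); invented
rate = nu0 * np.exp(-Ea / (kB * T))   # thermally activated single-jump rate

n_walkers, n_events = 2000, 500
x = np.zeros(n_walkers)
t = np.zeros(n_walkers)
for _ in range(n_events):
    # residence-time step: two competing events (hop left/right), total rate 2*rate
    t += -np.log(rng.random(n_walkers)) / (2.0 * rate)
    x += np.where(rng.random(n_walkers) < 0.5, a, -a)

D_kmc = np.mean(x ** 2) / (2.0 * np.mean(t))
D_theory = rate * a ** 2              # 1D hopping: D = (total rate) * a^2 / 2
print(D_kmc, D_theory)
```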

  9. Analysis of Intervention Strategies for Inhalation Exposure to Polycyclic Aromatic Hydrocarbons and Associated Lung Cancer Risk Based on a Monte Carlo Population Exposure Assessment Model

    PubMed Central

    Zhou, Bin; Zhao, Bin

    2014-01-01

    It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found air pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by population attributable fraction (PAF) and potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had major influence on the modeled PAH inhalation exposure and the rankings of different interventions. The ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and in different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making. PMID:24416436

  10. Analysis of the influence of germanium dead layer on detector calibration simulation for environmental radioactive samples using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ródenas, J.; Pascual, A.; Zarza, I.; Serradell, V.; Ortiz, J.; Ballesteros, L.

    2003-01-01

    Germanium crystals have a dead layer that causes a decrease in efficiency, since the layer is not useful for detection, but strongly attenuates photons. The thickness of this inactive layer is not well known due to the existence of a transition zone where photons are increasingly absorbed. Therefore, using data provided by manufacturers in the detector simulation model, some strong discrepancies appear between calculated and measured efficiencies. The Monte Carlo method is applied to simulate the calibration of a HP Ge detector in order to determine the total inactive germanium layer thickness and the active volume that are needed to obtain the minimum discrepancy between estimated and experimental efficiency. Calculations and measurements were performed for all of the radionuclides included in a standard calibration gamma cocktail solution. A Marinelli beaker was considered for this analysis, as it is one of the most commonly used sample containers for environmental radioactivity measurements. Results indicated that a good agreement between calculated and measured efficiencies is obtained using a value for the inactive germanium layer thickness equal to approximately twice the value provided by the detector manufacturer. For all energy peaks included in the calibration, the best agreement with experimental efficiency was found using a combination of a small thickness of the inactive germanium layer and a small detection active volume.

  11. A Monte Carlo Analysis of the Thrust Imbalance for the Space Launch System Booster During Both the Ignition Transient and Steady State Operation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty three variables which could impact the performance of the motors during the ignition transient and thirty eight variables which could impact the performance of the motors during steady state operation of the motor were identified and treated as statistical variables for the analyses. The effects of motor to motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.

  12. Research on GPU Acceleration for Monte Carlo Criticality Calculation

    NASA Astrophysics Data System (ADS)

    Xu, Qi; Yu, Ganglin; Wang, Kan

    2014-06-01

    The Monte Carlo neutron transport method can be naturally parallelized on multi-core architectures because particles are independent during the simulation. The GPU+CPU heterogeneous parallel mode has become an increasingly popular mode of parallelism in the field of scientific supercomputing. Thus, this work focuses on GPU acceleration of Monte Carlo criticality simulation, as well as the computational efficiency that GPUs can bring. The "neutron transport step" is introduced to increase GPU thread occupancy. To test the sensitivity to code complexity, a 1D one-group code and a 3D multi-group general-purpose code were each ported to GPUs, and the acceleration effects are compared. Numerical experiments show a considerable acceleration effect for the "neutron transport step" strategy. However, the performance comparison between the 1D code and the 3D code indicates the poor scalability of MC codes on GPUs.
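
    The data-parallel flavor of a "transport step" can be illustrated on a CPU with vectorised batches, where every live particle advances one event at a time, the same lockstep pattern a GPU thread block would execute. The toy below is a monoenergetic slab problem with invented cross sections, not the codes from the paper.

```python
import numpy as np

rng = np.random.default_rng(10)
sigma_t, sigma_a, width = 1.0, 0.3, 5.0   # total/absorption cross sections, slab width (invented)
n = 1_000_000

x = np.zeros(n)                 # particle positions
mu = np.ones(n)                 # direction cosines (beam incident on the slab)
alive = np.ones(n, dtype=bool)
absorbed = leaked = 0

while alive.any():
    # one "transport step" advances every live particle at once
    k = int(alive.sum())
    x[alive] += mu[alive] * rng.exponential(1.0 / sigma_t, k)   # flight to next collision
    out = alive & ((x < 0.0) | (x > width))                     # leakage check
    leaked += int(out.sum())
    alive &= ~out
    absorb = alive & (rng.random(n) < sigma_a / sigma_t)        # absorption at collision
    absorbed += int(absorb.sum())
    alive &= ~absorb
    mu[alive] = rng.uniform(-1.0, 1.0, int(alive.sum()))        # isotropic scatter

print("absorbed fraction:", absorbed / n, "leaked fraction:", leaked / n)
```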

  13. A Monte Carlo Study of Six Models of Change.

    ERIC Educational Resources Information Center

    Corder-Bolz, Charles R.

    A Monte Carlo Study was conducted to evaluate six models commonly used to evaluate change. The results revealed specific problems with each. Analysis of covariance and analysis of variance of residualized gain scores appeared to substantially and consistently overestimate the change effects. Multiple factor analysis of variance models utilizing…

  14. Monte Carlo Form-Finding Method for Tensegrity Structures

    NASA Astrophysics Data System (ADS)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.
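
    In the spirit of the MCFF idea, stochastic nodal perturbation with energy-based acceptance instead of matrix analysis, here is a toy zero-temperature sketch on an invented cable network (not a true tensegrity structure, which would also need compression members and self-stress states):

```python
import numpy as np

rng = np.random.default_rng(11)

# Invented toy network: four free nodes, each tied by unit-stiffness "cables"
# with rest length 0.5 to four fixed anchors; form-finding = locate the
# equilibrium configuration by stochastic search.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
free = rng.random((4, 2))
edges = [(i, j) for i in range(4) for j in range(4)]   # free node i -- anchor j

def energy(pts):
    e = 0.0
    for i, j in edges:
        d = np.linalg.norm(pts[i] - anchors[j])
        e += 0.5 * (d - 0.5) ** 2
    return e

E, step = energy(free), 0.05
for it in range(50_000):
    i = rng.integers(4)
    old = free[i].copy()
    free[i] = old + rng.normal(0.0, step, 2)   # random nodal perturbation
    E_new = energy(free)
    if E_new < E:                               # keep only energy-lowering moves
        E = E_new
    else:
        free[i] = old
print("residual energy:", E)
```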

  15. Fast Monte Carlo for radiation therapy: the PEREGRINE Project

    SciTech Connect

    Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

    1997-11-11

    The purpose of the PEREGRINE program is to bring high-speed, high- accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three- dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently-used algorithms reveal significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.

  16. X-ray imaging plate performance investigation based on a Monte Carlo simulation tool

    NASA Astrophysics Data System (ADS)

    Yao, M.; Duvauchelle, Ph.; Kaftandjian, V.; Peterzol-Parmentier, A.; Schumm, A.

    2015-01-01

    Computed radiography (CR) based on imaging plate (IP) technology represents a potential replacement technique for traditional film-based industrial radiography. To investigate IP performance, especially at high energies, a Monte Carlo simulation tool based on PENELOPE has been developed. This tool tracks direct and secondary radiations separately and monitors the behavior of different particles. The simulation output provides the 3D distribution of deposited energy in the IP and an evaluation of radiation spectrum propagation, allowing us to visualize the behavior of different particles and the influence of different elements. A detailed analysis of the spectral and spatial responses of the IP at energies up to the MeV range has been performed.

  17. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    the eventual hope is to apply this algorithm to the exploration of yet unidentified high-pressure, low-temperature phases of hydrogen, I employ this algorithm to determine whether or not quantum hard spheres can form a low-temperature bcc solid if exchange is not taken into account. In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity. This is because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems. Portions of this work have been published in Rubenstein, PRE (2010) and Rubenstein, PRA (2012) [167;169]. Other papers not discussed here published during my Ph.D. include Rubenstein, BPJ (2008) and Rubenstein, PRL (2012) [166;168]. The work in Chapters 6 and 7 is currently unpublished. [166] Brenda M. Rubenstein, Ivan Coluzza, and Mark A. Miller. Controlling the folding and substrate-binding of proteins using polymer brushes. Physical Review Letters, 108(20):208104, May 2012. [167] Brenda M. Rubenstein, J.E. Gubernatis, and J.D. Doll. Comparative monte carlo efficiency by monte carlo analysis. Physical Review E, 82(3):036701, September 2010. [168] Brenda M. Rubenstein and Laura J. Kaufman. The role of extracellular matrix in glioma invasion: A cellular potts model approach. Biophysical Journal, 95(12):5661-- 5680, December 2008. [169] Brenda M. Rubenstein, Shiwei Zhang, and David R. Reichman. Finite-temperature auxiliary-field quantum monte carlo for bose-fermi mixtures. Physical Review A, 86

  18. Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.

    PubMed

    Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L

    2003-02-01

    Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney. PMID:12667310

  19. A Monte Carlo Investigation of Conjoint Analysis Index-of-Fit: Goodness of Fit, Significance and Power.

    ERIC Educational Resources Information Center

    Umesh, U. N.; Mishra, Sanjay

    1990-01-01

    Major issues related to index-of-fit conjoint analysis were addressed in this simulation study. Goals were to develop goodness-of-fit criteria for conjoint analysis; develop tests to determine the significance of conjoint analysis results; and calculate the power of the test of the null hypothesis of random data distribution. (SLD)

  20. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    EPA Science Inventory

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  1. SABRINA: an interactive solid geometry modeling program for Monte Carlo

    SciTech Connect

    West, J.T.

    1985-01-01

    SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body-geometry or surface-geometry models and interactively debugs spatial descriptions for the resulting objects. This capability significantly reduces the effort of constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis.

  2. Monte Carlo Approach for Reliability Estimations in Generalizability Studies.

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…

  3. Ordinal Hypothesis in ANOVA Designs: A Monte Carlo Study.

    ERIC Educational Resources Information Center

    Braver, Sanford L.; Sheets, Virgil L.

    Numerous designs using analysis of variance (ANOVA) to test ordinal hypotheses were assessed using a Monte Carlo simulation. Each statistic was computed on each of over 10,000 random samples drawn from a variety of population conditions. The number of groups, population variance, and patterns of population means were varied. In the non-null…

  4. Coherent scatter imaging Monte Carlo simulation.

    PubMed

    Hassan, Laila; MacDonald, Carolyn A

    2016-07-01

    Conventional mammography can suffer from poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter slot scan imaging is an imaging technique which provides additional information and is compatible with conventional mammography. A Monte Carlo simulation of coherent scatter slot scan imaging was performed to assess its performance and provide system optimization. Coherent scatter could be exploited using a system similar to a conventional slot scan mammography system with antiscatter grids tilted at the characteristic angle of cancerous tissues. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The simulated carcinomas were detectable for tumors as small as 5 mm in diameter, so coherent scatter analysis using a wide-slot setup could be promising as an enhancement for screening mammography. Employing coherent scatter information simultaneously with conventional mammography could yield a conventional high spatial resolution image with additional coherent scatter information. PMID:27610397

  5. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used to calculate radiative entities in the thermal analysis of spacecraft is presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.
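
    As an illustration of the 1/sqrt(N) behaviour for a single-source entity, the sketch below estimates the configuration factor from a differential surface element to a coaxial parallel disk, for which the exact value R^2/(R^2+h^2) is known, together with its binomial relative standard error. The geometry and ray count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)
R, h, N = 1.0, 1.0, 200_000   # disk radius, height above the element, ray count

# cosine-weighted hemisphere sampling from a differential surface element
u1, u2 = rng.random(N), rng.random(N)
r, phi = np.sqrt(u1), 2.0 * np.pi * u2
dx, dy, dz = r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)

# a ray hits the coaxial disk at z = h if its intersection lies within radius R
s = h / dz
hit = (dx * s) ** 2 + (dy * s) ** 2 <= R ** 2

F = hit.mean()                                  # Monte Carlo configuration factor
rel_err = np.sqrt((1.0 - F) / (F * N))          # binomial relative standard error
print(F, "exact:", R ** 2 / (R ** 2 + h ** 2), "relative error:", rel_err)
```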

  6. MORSE Monte Carlo radiation transport code system

    SciTech Connect

    Emmett, M.B.

    1983-02-01

    This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report which was previously unpublished, and a new Table of Contents. The modifications include a Klein-Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data, which required changing subroutine SCORIN and adding the new subroutine RFRE. References are updated, and errors in the original report have been corrected.

  7. Quantitative Phylogenomics of Within-Species Mitogenome Variation: Monte Carlo and Non-Parametric Analysis of Phylogeographic Structure among Discrete Transatlantic Breeding Areas of Harp Seals (Pagophilus groenlandicus).

    PubMed

    Carr, Steven M; Duggan, Ana T; Stenson, Garry B; Marshall, H Dawn

    2015-01-01

    Phylogenomic analysis of highly-resolved intraspecific phylogenies obtained from complete mitochondrial DNA genomes has had great success in clarifying relationships within and among human populations, but has found limited application in other wild species. Analytical challenges include assessment of random versus non-random phylogeographic distributions, and quantification of differences in tree topologies among populations. Harp Seals (Pagophilus groenlandicus Erxleben, 1777) have a biogeographic distribution based on four discrete trans-Atlantic breeding and whelping populations located on "fast ice" attached to land in the White Sea, Greenland Sea, the Labrador ice Front, and Southern Gulf of St Lawrence. This East to West distribution provides a set of a priori phylogeographic hypotheses. Outstanding biogeographic questions include the degree of genetic distinctiveness among these populations, in particular between the Greenland Sea and White Sea grounds. We obtained complete coding-region DNA sequences (15,825 bp) for 53 seals. Each seal has a unique mtDNA genome sequence; sequences differ by 6-107 substitutions. Six major clades / groups are detectable by parsimony, neighbor-joining, and Bayesian methods, all of which are found in breeding populations on either side of the Atlantic. The species coalescent is at 180 KYA; the most recent clade, which accounts for 66% of the diversity, reflects an expansion during the mid-Wisconsinan glaciation 40-60 KYA. FST is significant only between the White Sea and Greenland Sea or Ice Front populations. Hierarchical AMOVA of 2-, 3-, or 4-island models identifies small but significant ΦSC among populations within groups, but not among groups. A novel Monte-Carlo simulation indicates that the observed distribution of individuals within breeding populations over the phylogenetic tree requires significantly fewer dispersal events than random expectation, consistent with island or a priori East to West 2- or 3-stepping

  8. Quantitative Phylogenomics of Within-Species Mitogenome Variation: Monte Carlo and Non-Parametric Analysis of Phylogeographic Structure among Discrete Transatlantic Breeding Areas of Harp Seals (Pagophilus groenlandicus)

    PubMed Central

    Carr, Steven M.; Duggan, Ana T.; Stenson, Garry B.; Marshall, H. Dawn

    2015-01-01

    Phylogenomic analysis of highly-resolved intraspecific phylogenies obtained from complete mitochondrial DNA genomes has had great success in clarifying relationships within and among human populations, but has found limited application in other wild species. Analytical challenges include assessment of random versus non-random phylogeographic distributions, and quantification of differences in tree topologies among populations. Harp Seals (Pagophilus groenlandicus Erxleben, 1777) have a biogeographic distribution based on four discrete trans-Atlantic breeding and whelping populations located on “fast ice” attached to land in the White Sea, Greenland Sea, the Labrador ice Front, and Southern Gulf of St Lawrence. This East to West distribution provides a set of a priori phylogeographic hypotheses. Outstanding biogeographic questions include the degree of genetic distinctiveness among these populations, in particular between the Greenland Sea and White Sea grounds. We obtained complete coding-region DNA sequences (15,825 bp) for 53 seals. Each seal has a unique mtDNA genome sequence; sequences differ by 6-107 substitutions. Six major clades / groups are detectable by parsimony, neighbor-joining, and Bayesian methods, all of which are found in breeding populations on either side of the Atlantic. The species coalescent is at 180 KYA; the most recent clade, which accounts for 66% of the diversity, reflects an expansion during the mid-Wisconsinan glaciation 40-60 KYA. FST is significant only between the White Sea and Greenland Sea or Ice Front populations. Hierarchical AMOVA of 2-, 3-, or 4-island models identifies small but significant ΦSC among populations within groups, but not among groups. A novel Monte-Carlo simulation indicates that the observed distribution of individuals within breeding populations over the phylogenetic tree requires significantly fewer dispersal events than random expectation, consistent with island or a priori East to West 2- or 3-stepping

  9. Monte Carlo Capabilities of the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  10. Monte Carlo capabilities of the SCALE code system

    DOE PAGESBeta

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; et al

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  11. Monte Carlo capabilities of the SCALE code system

    SciTech Connect

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; Marshall, William J.

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  12. SU-E-J-09: A Monte Carlo Analysis of the Relationship Between Cherenkov Light Emission and Dose for Electrons, Protons, and X-Ray Photons

    SciTech Connect

    Glaser, A; Zhang, R; Gladstone, D; Pogue, B

    2014-06-01

    Purpose: A number of recent studies have proposed that light emitted by the Cherenkov effect may be used for a number of radiation therapy dosimetry applications. Here we investigate the fundamental nature and accuracy of the technique for the first time by using a theoretical and Monte Carlo-based analysis. Methods: Using the GEANT4 architecture for medically-oriented simulations (GAMOS) and BEAMnrc for phase space file generation, the light yield, material variability, field size and energy dependence, and overall agreement between the Cherenkov light emission and dose deposition for electron, proton, and flattened, unflattened, and parallel opposed x-ray photon beams were explored. Results: Due to the exponential attenuation of x-ray photons, Cherenkov light emission and dose deposition were identical for monoenergetic pencil beams. However, polyenergetic beams exhibited errors with depth due to beam hardening, with the error being inversely related to beam energy. For finite field sizes, the error with depth was inversely proportional to field size, and lateral errors in the umbra were greater for larger field sizes. For opposed beams, the technique was most accurate due to an averaging out of beam hardening in a single beam. The technique was found to be unsuitable for measuring electron beams, except for relative dosimetry of a plane at a single depth. Due to a lack of light emission, the technique was found to be unsuitable for proton beams. Conclusions: The results from this exploratory study suggest that optical dosimetry by the Cherenkov effect may be most applicable to near monoenergetic x-ray photon beams (e.g. Co-60), dynamic IMRT and VMAT plans, as well as narrow beams used for SRT and SRS. For electron beams, the technique would be best suited for superficial dosimetry, and for protons the technique is not applicable due to a lack of light emission. NIH R01CA109558 and R21EB017559.

  13. Analysis of the orientational order effect on n-alkanes: Evidences on experimental response functions and description using Monte Carlo molecular simulation.

    PubMed

    Bessières, D; Piñeiro, M M; De Ferron, G; Plantier, F

    2010-08-21

    Short-range correlations of the molecular orientations in liquid n-alkanes have been extensively studied from depolarized Rayleigh scattering and thermodynamic measurements. These correlations between segments induce structural anisotropy in the fluid bulk. This phenomenon, which is characteristic of linear chain molecules whose constituting segments are not freely jointed but interact through a given angular potential, is present in the linear n-Cn series, increasing in magnitude with chain length, and is therefore less relevant or even completely absent in branched alkanes. This intermolecular effect is clearly revealed in second-order excess magnitudes such as heat capacities when the linear molecule is mixed with one whose structure approaches sphericity. The mixing of chain molecules with different aspect ratios is thought to modify the original pure fluid structure, by producing a diminution of the orientational order previously existing between pure n-alkane chains. However, second-order thermodynamic quantities of pure liquids, C_P, (∂v/∂T)_P, and (∂v/∂P)_T, are known to be very sensitive to the specific interactions occurring at the microscopic level. In other words, the behavior of these derived properties versus temperature and pressure can be regarded as response functions of the complexity of the microscopic interactions. Thus, the purpose of the present work is to rationalize the orientational order evolution with both temperature and molecular chain length from the analysis of pure fluid properties. To this aim, we focused on two linear alkanes, n-octane (n-C(8)) and n-hexadecane (n-C(16)), and two of their branched isomers, i.e., 2,2,4-trimethylpentane (br-C(8)) and 2,2,4,4,6,8,8-heptamethylnonane (br-C(16)). For each compound, we propose a combined study from direct experimental determination of second-order derivative properties and Monte Carlo

  14. Monte Carlo model for analysis of thermal runaway electrons in streamer tips in transient luminous events and streamer zones of lightning leaders

    NASA Astrophysics Data System (ADS)

    Moss, Gregory D.; Pasko, Victor P.; Liu, Ningyu; Veronis, Georgios

    2006-02-01

    Streamers are thin filamentary plasmas that can initiate spark discharges in relatively short (several centimeters) gaps at near ground pressures and are also known to act as the building blocks of streamer zones of lightning leaders. These streamers at ground pressure, after 1/N scaling with atmospheric air density N, appear to be fully analogous to those documented using telescopic imagers in transient luminous events (TLEs) termed sprites, which occur in the altitude range 40-90 km in the Earth's atmosphere above thunderstorms. It is also believed that the filamentary plasma structures observed in some other types of TLEs, which emanate from the tops of thunderclouds and are termed blue jets and gigantic jets, are directly linked to the processes in streamer zones of lightning leaders. Acceleration, expansion, and branching of streamers are commonly observed for a wide range of applied electric fields. Recent analysis of photoionization effects on the propagation of streamers indicates that very high electric field magnitudes ˜10 Ek, where Ek is the conventional breakdown threshold field defined by the equality of the ionization and dissociative attachment coefficients in air, are generated around the tips of streamers at the stage immediately preceding their branching. This paper describes the formulation of a Monte Carlo model, which is capable of describing electron dynamics in air, including the thermal runaway phenomena, under the influence of an external electric field of an arbitrary strength. Monte Carlo modeling results indicate that the ˜10 Ek fields are able to accelerate a fraction of low-energy (several eV) streamer tip electrons to energies of ˜2-8 keV. With total potential differences on the order of tens of MV available in streamer zones of lightning leaders, it is proposed that during a highly transient negative corona flash stage of the development of negative stepped leader, electrons with energies 2-8 keV ejected from streamer tips near

  15. Deciding on the Number of Classes in Latent Class Analysis and Growth Mixture Modeling: A Monte Carlo Simulation Study

    ERIC Educational Resources Information Center

    Nylund, Karen L.; Asparouhov, Tihomir; Muthen, Bengt O.

    2007-01-01

    Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models' usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study…

  16. Performance of Empirical Bayes Estimators of Level-2 Random Parameters in Multilevel Analysis: A Monte Carlo Study for Longitudinal Designs

    ERIC Educational Resources Information Center

    Candel, Math J. J. M.; Winkens, Bjorn

    2003-01-01

    Multilevel analysis is a useful technique for analyzing longitudinal data. To describe a person's development across time, the quality of the estimates of the random coefficients, which relate time to individual changes in a relevant dependent variable, is of importance. The present study compares three estimators of the random coefficients: the…

  17. A standard timing benchmark for EGS4 Monte Carlo calculations.

    PubMed

    Bielajew, A F; Rogers, D W

    1992-01-01

    A Fortran 77 Monte Carlo source code built from the EGS4 Monte Carlo code system has been used for timing benchmark purposes on 29 different computers. This code simulates the deposition of energy from an incident electron beam in a 3-D rectilinear geometry such as one would employ to model electron and photon transport through a series of CT slices. The benchmark forms a standalone system and does not require that the EGS4 system be installed. The Fortran source code may be ported to different architectures by modifying a few lines and only a moderate amount of CPU time is required ranging from about 5 h on PC/386/387 to a few seconds on a massively parallel supercomputer (a BBN TC2000 with 512 processors). PMID:1584121

  18. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds

    NASA Astrophysics Data System (ADS)

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.

    2012-11-01

    A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  19. Uncertainty propagation in a stratospheric model. I - Development of a concise stratospheric model. II - Monte Carlo analysis of imprecisions due to reaction rates. [for ozone depletion prediction

    NASA Technical Reports Server (NTRS)

    Rundel, R. D.; Butler, D. M.; Stolarski, R. S.

    1978-01-01

    The paper discusses the development of a concise stratospheric model which uses iteration to obtain coupling between interacting species. The one-dimensional, steady-state, diurnally-averaged model generates diffusion equations with appropriate sources and sinks for species odd oxygen, H2O, H2, CO, N2O, odd nitrogen, CH4, CH3Cl, CCl4, CF2Cl2, CFCl3, and odd chlorine. The model evaluates steady-state perturbations caused by injections of chlorine and NO(x) and may be used to predict ozone depletion. The model is used in a Monte Carlo study of the propagation of reaction-rate imprecisions by calculating an ozone perturbation caused by the addition of chlorine. Since the model is sensitive to only 10 of the more than 50 reaction rates considered, only about 1000 Monte Carlo cases are required to span the space of possible results.
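
    The Monte Carlo propagation step lends itself to a compact sketch. The Python fragment below illustrates the general pattern only, with an invented toy response function standing in for the paper's one-dimensional stratospheric model: each uncertain rate constant is drawn from a log-normal spread, and the resulting ozone perturbation is recorded over roughly 1000 cases.

```python
# Minimal sketch of Monte Carlo rate-uncertainty propagation: draw each
# uncertain rate from a log-normal spread and record the model output.
# The "model" here is a deliberately toy stand-in, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
k_nominal = np.array([1e-11, 3e-12, 5e-13])    # hypothetical rate constants
k_factor  = np.array([1.3, 1.5, 2.0])          # log-normal uncertainty factors

def ozone_perturbation(k):
    # Toy response: grows with destruction rates k[0], k[1], buffered by k[2].
    return -100.0 * (k[0] + 0.5 * k[1]) / (k[0] + k[1] + 1e4 * k[2])

samples = []
for _ in range(1000):                          # ~1000 cases, as in the abstract
    k = k_nominal * k_factor ** rng.standard_normal(3)
    samples.append(ozone_perturbation(k))
print(f"ozone change: median {np.median(samples):.1f}%, "
      f"90% interval [{np.percentile(samples, 5):.1f}, "
      f"{np.percentile(samples, 95):.1f}]%")
```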

  20. Analysis of the radiation shielding of the bunker of a 230MeV proton cyclotron therapy facility; comparison of analytical and Monte Carlo techniques.

    PubMed

    Sunil, C

    2016-04-01

    The neutron ambient dose equivalent outside the radiation shield of a proton therapy cyclotron vault is estimated using the unshielded dose equivalent rates and the attenuation lengths obtained from the literature and by simulations carried out with the FLUKA Monte Carlo radiation transport code. The source terms derived from the literature and that obtained from the FLUKA calculations differ by a factor of 2-3, while the attenuation lengths obtained from the literature differ by 20-40%. The instantaneous dose equivalent rates outside the shield differ by a few orders of magnitude, not only in comparison with the Monte Carlo simulation results, but also with the results obtained by line of sight attenuation calculations with the different parameters obtained from the literature. The attenuation of neutrons caused by the presence of bulk iron, such as magnet yokes is expected to reduce the dose equivalent by as much as a couple of orders of magnitude outside the shield walls. PMID:26844542
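
    The line-of-sight estimate that the comparison hinges on reduces to inverse-square distance falloff times exponential attenuation in the shield. The sketch below, with invented numbers, shows how even a 20-40% spread in the assumed attenuation length moves the predicted dose rate behind a thick shield by a sizeable factor.

```python
# Hedged sketch of a line-of-sight shielding estimate. All values are
# illustrative assumptions, not the paper's source terms or geometry.
import math

def dose_rate_outside(H0, r, d, attn_length):
    """H0: unshielded dose equivalent rate at 1 m; r: source-to-point
    distance (m); d: shield thickness along the line of sight (cm);
    attn_length: effective attenuation length in the shield (cm)."""
    return H0 / r**2 * math.exp(-d / attn_length)

H0, r, d = 1.0e9, 8.0, 200.0                   # hypothetical numbers
for lam in (40.0, 48.0, 56.0):                 # ~20-40% spread in lambda
    print(f"lambda = {lam:4.0f} cm -> {dose_rate_outside(H0, r, d, lam):.2e}")
```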

  1. Analysis of dpa Rates in the HFIR Reactor Vessel using a Hybrid Monte Carlo/Deterministic Method

    NASA Astrophysics Data System (ADS)

    Risner, J. M.; Blakeman, E. D.

    2016-02-01

    The Oak Ridge High Flux Isotope Reactor (HFIR), which began full-power operation in 1966, provides one of the highest steady-state neutron flux levels of any research reactor in the world. An ongoing vessel integrity analysis program to assess radiation-induced embrittlement of the HFIR reactor vessel requires the calculation of neutron and gamma displacements per atom (dpa), particularly at locations near the beam tube nozzles, where radiation streaming effects are most pronounced. In this study we apply the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) technique in the ADVANTG code to develop variance reduction parameters for use in the MCNP radiation transport code. We initially evaluated dpa rates for dosimetry capsule locations, regions in the vicinity of the HB-2 beamline, and the vessel beltline region. We then extended the study to provide dpa rate maps using three-dimensional cylindrical mesh tallies that extend from approximately 12 in. below to approximately 12 in. above the height of the core. The mesh tally structures contain over 15,000 mesh cells, providing a detailed spatial map of neutron and photon dpa rates at all locations of interest. Relative errors in the mesh tally cells are typically less than 1%. Notice: This manuscript has been authored by UT-Battelle, LLC, under Contract No. DE-AC0500OR22725 with the US Department of Energy. The US Government retains and the publisher, by accepting the article for publication, acknowledges that the US Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for the US Government purposes.

  2. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

    In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalent relationship between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions, NSE, Generalized Error Distribution with BC (BC-GED) and Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of calibrated models are compared using the observed river discharges and groundwater levels. The result shows that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated results of high and low flow components. SWAT-WB-VSA with the NSE approach simulates floods well, but baseflow poorly, owing to the assumption of a Gaussian error distribution, in which the probability of large errors is low but small errors around zero are approximately equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as demonstrated by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
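
    Two of the ingredients above are compact enough to sketch. The Python fragment below shows an NSE computation and a Gaussian log-likelihood of Box-Cox-transformed residuals on synthetic data; it illustrates the concepts only, not the SWAT-WB-VSA calibration code, and the BC Jacobian term is omitted for brevity.

```python
# Illustrative only: NSE and a Gaussian log-likelihood of Box-Cox residuals.
import numpy as np

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def boxcox(y, lam):
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def loglik_bc_gaussian(sim, obs, lam):
    # Residuals after the BC transform; Jacobian term omitted for brevity.
    res = boxcox(sim, lam) - boxcox(obs, lam)
    s2 = res.var()
    return -0.5 * res.size * np.log(2 * np.pi * s2) - 0.5 * np.sum(res**2) / s2

rng = np.random.default_rng(1)
obs = np.exp(rng.normal(2.0, 0.8, 365))        # synthetic "discharge" series
sim = obs * np.exp(rng.normal(0.0, 0.3, 365))  # heteroscedastic model error
print(f"NSE = {nse(sim, obs):.3f}")
for lam in (1.0, 0.5, 0.2):                    # arbitrary BC parameters
    print(f"lambda = {lam}: logL = {loglik_bc_gaussian(sim, obs, lam):.1f}")
```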

  3. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  4. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  5. Monte Carlo Transport for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  6. Extra Chance Generalized Hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; Sanz-Serna, J. M.

    2015-01-01

    We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra chance approach pays off in terms of the quality of the samples generated.
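
    For context, a plain Hybrid Monte Carlo step (the baseline that Extra Chance GHMC modifies) can be written in a few lines. The sketch below targets a unit Gaussian; the extra-chance variant would replace the simple rejection branch with further integration and a correspondingly modified acceptance rule, which is not reproduced here.

```python
# Standard HMC on a 1-D unit Gaussian: leapfrog trajectory plus a Metropolis
# accept/reject test on the energy error. (The extra-chance variant would
# replace the plain rejection branch; that rule is not reproduced here.)
import numpy as np

rng = np.random.default_rng(5)
grad_U = lambda q: q                           # U(q) = q^2 / 2

def hmc_step(q, eps=0.2, L=10):
    p = rng.standard_normal()                  # fresh momentum
    H0 = 0.5 * p * p + 0.5 * q * q
    qn, pn = q, p
    for _ in range(L):                         # leapfrog integration
        pn -= 0.5 * eps * grad_U(qn)
        qn += eps * pn
        pn -= 0.5 * eps * grad_U(qn)
    H1 = 0.5 * pn * pn + 0.5 * qn * qn
    return qn if np.log(rng.uniform()) < H0 - H1 else q

q, draws = 0.0, []
for _ in range(20_000):
    q = hmc_step(q)
    draws.append(q)
print(f"mean {np.mean(draws):+.3f} (target 0), var {np.var(draws):.3f} (target 1)")
```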

  7. Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    PubMed Central

    Siswantoro, Joko; Idrus, Bahari

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some of the volume measurement methods based on it have low accuracy. Another approach measures the volume of objects using the Monte Carlo method, which performs volume measurements using random points: it only requires information on whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of the food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method. PMID:24892069
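
    The core estimator is easy to sketch. In the fragment below, an analytic ellipsoid stands in for the inside/outside test that the paper derives from the five binary camera images; the bounding-box-times-hit-fraction logic is the same.

```python
# Minimal Monte Carlo volume estimate: sample random points in a bounding
# box and keep the inside/outside hit fraction. The inside() test here is a
# hypothetical ellipsoid, not the paper's computer-vision test.
import random

def inside(x, y, z, a=3.0, b=2.0, c=1.5):      # hypothetical "food product"
    return (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 <= 1.0

def mc_volume(n, a=3.0, b=2.0, c=1.5):
    box = 8.0 * a * b * c                      # bounding-box volume
    hits = 0
    for _ in range(n):
        x, y, z = (random.uniform(-a, a), random.uniform(-b, b),
                   random.uniform(-c, c))
        hits += inside(x, y, z, a, b, c)
    return box * hits / n

exact = 4.0 / 3.0 * 3.141592653589793 * 3.0 * 2.0 * 1.5
for n in (10_000, 1_000_000):
    print(f"n = {n:>9,}: MC volume = {mc_volume(n):.3f} (exact {exact:.3f})")
```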

  8. Reconstruction of Human Monte Carlo Geometry from Segmented Images

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican

    2014-06-01

    Human computational phantoms have been used extensively for scientific experimental analysis and experimental simulation. This article presents a method for human geometry reconstruction from a series of segmented images of a Chinese visible human dataset. The phantom geometry can describe the detailed structure of an organ and can be converted into the input file of Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN, has been established by FDS Team; it comprises about 28.8 billion voxels. For convenient processing, different organs in the images were segmented with different RGB colors and the voxels were assigned positions in the dataset. For refinement, the positions were first sampled. Secondly, although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the number of cells needed to describe the organ. In this study, the voxels on the organ surface were taken into account in the merging, which produces fewer cells for the organs. At the same time, an index-based sorting algorithm was put forward to speed up the merging. Finally, the Rad-HUMAN phantom, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity and self-crossing, and its geometry information represents the accurate appearance and precise interior structure of the organs. The constructed geometry, largely retaining the original shape of the organs, can easily be written into the input files of different Monte Carlo codes such as MCNP. Its universality and high performance were experimentally verified.

  9. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single-force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  10. Dairy goat kids fed liquid diets in substitution of goat milk and slaughtered at different ages: an economic viability analysis using Monte Carlo techniques.

    PubMed

    Knupp, L S; Veloso, C M; Marcondes, M I; Silveira, T S; Silva, A L; Souza, N O; Knupp, S N R; Cannas, A

    2016-03-01

    The aim of this study was to analyze the economic viability of producing dairy goat kids fed liquid diets as alternatives to goat milk and slaughtered at two different ages. Forty-eight male newborn Saanen and Alpine kids were selected and allocated to four groups using a completely randomized factorial design: goat milk (GM), cow milk (CM), commercial milk replacer (CMR) and fermented cow colostrum (FC). Each group was then divided into two groups: slaughter at 60 and 90 days of age. The animals received Tifton hay and concentrate ad libitum. The values of total costs of liquid and solid feed plus labor, income and average gross margin were calculated. The data were then analyzed using Monte Carlo techniques with the @Risk 5.5 software, with 1000 iterations of the variables being studied through the model. The kids fed GM and CMR generated negative profitability values when slaughtered at 60 days (US$ -16.4 and US$ -2.17, respectively) and also at 90 days (US$ -30.8 and US$ -0.18, respectively). The risk analysis showed that there is a 98% probability that profitability would be negative when GM is used. In this regard, CM and FC presented low risk when the kids were slaughtered at 60 days (8.5% and 21.2%, respectively) and an even lower risk when animals were slaughtered at 90 days (5.2% and 3.8%, respectively). The kids fed CM and slaughtered at 90 days presented the highest average gross income (US$ 67.88) and also average gross margin (US$ 18.43/animal). For the 60-day rearing regime to be economically viable, the CMR cost should not exceed 11.47% of the animal-selling price. This implies that the replacer cannot cost more than US$ 0.39 and 0.43/kg for the 60- and 90-day feeding regimes, respectively. The sensitivity analysis showed that the variables with the greatest impact on the final model's results were animal selling price, liquid diet cost, final weight at slaughter and labor. In conclusion, the production of male dairy goat kids can be economically
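
    The risk-analysis pattern (1000 iterations over distributions of the uncertain inputs) can be illustrated as follows. All distribution shapes and parameter values in this sketch are invented placeholders, not the study's data; the output mirrors the kind of "probability of negative margin" statistic reported above.

```python
# Hedged sketch of the Monte Carlo risk analysis: draw uncertain inputs,
# compute gross margin per animal, report the probability it is negative.
import numpy as np

rng = np.random.default_rng(42)
n = 1000                                       # iterations, as in the study
price_per_kg = rng.normal(3.0, 0.3, n)         # selling price (US$/kg)
slaughter_wt = rng.normal(12.0, 1.5, n)        # final weight (kg)
liquid_diet  = rng.normal(20.0, 4.0, n)        # liquid-diet cost (US$)
solid_feed   = rng.normal(8.0, 1.0, n)         # hay + concentrate (US$)
labor        = rng.normal(6.0, 0.5, n)         # labor cost (US$)

margin = price_per_kg * slaughter_wt - (liquid_diet + solid_feed + labor)
print(f"mean gross margin: US$ {margin.mean():.2f}/animal")
print(f"P(margin < 0) = {np.mean(margin < 0):.1%}")
```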

  11. A new method for commissioning Monte Carlo treatment planning systems

    NASA Astrophysics Data System (ADS)

    Aljarrah, Khaled Mohammed

    2005-11-01

    The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation for radiation treatment of cancer. However, the modeling of an individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x-rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial-and-error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions for a grid of assumed beam energies and radii in a water phantom with measured data. Different cost functions were studied to choose the appropriate function for the data comparison. The beam parameters were determined in light of this method. Under the assumption that linacs of the same type have identical geometries and differ only in their initial phase-space parameters, the results of this method can serve as source data for commissioning other machines of the same type.
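
    The trial-and-error replacement described above amounts to a grid search against a cost function. The sketch below uses an invented analytic depth-dose model as a stand-in for the Monte Carlo phantom calculations, but shows the same compare-and-minimize structure over a grid of assumed beam energies and radii.

```python
# Grid search over assumed (energy, radius) pairs against "measured" data.
# calc_pdd is a hypothetical placeholder, not a Monte Carlo dose engine.
import numpy as np

depths = np.linspace(0.0, 20.0, 41)            # cm, water phantom

def calc_pdd(energy, radius):
    """Hypothetical percentage-depth-dose for an (energy, radius) pair."""
    build_up = 1.0 - np.exp(-depths / (0.15 * energy))
    falloff = np.exp(-0.04 * depths / (1.0 + 0.05 * radius))
    return 100.0 * build_up * falloff

measured = calc_pdd(6.2, 1.4) + np.random.default_rng(3).normal(0, 0.3,
                                                                depths.size)

def cost(calc, meas):                          # RMS-difference cost function
    return np.sqrt(np.mean((calc - meas) ** 2))

grid = [(E, R) for E in np.arange(5.0, 7.6, 0.2)
               for R in np.arange(0.8, 2.1, 0.2)]
best = min(grid, key=lambda p: cost(calc_pdd(*p), measured))
print(f"best-fit beam: energy = {best[0]:.1f} MeV, radius = {best[1]:.1f} mm")
```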

  12. Development of a Monte Carlo code for the data analysis of the {sup 18}F(p,α){sup 15}O reaction at astrophysical energies

    SciTech Connect

    Caruso, A.; Cherubini, S.; Spitaleri, C.; La Cognata, M.; Lamia, L.; Rapisarda, G.; Romano, S.; Sergi, ML.; Crucillà, V.; Gulino, M.; Kubono, S.; Yamaguchi, H.; Hayakawa, S.; Wakabayashi, Y.; Iwasa, N.; Kato, S.; Komatsubara, T.; Teranishi, T.; Coc, A.; Hammache, F.; and others

    2015-02-24

    Novae are astrophysical events (violent explosions) occurring in close binary systems consisting of a white dwarf and a main-sequence star or a star in a more advanced stage of evolution. They are called 'narrow systems' because the two components interact with each other: there is a process of mass exchange resulting in the transfer of matter from the companion star to the white dwarf, leading to the formation around the latter of a so-called accretion disk, composed mainly of hydrogen. Over time, more and more material accumulates until the pressure and the temperature reached are sufficient to trigger nuclear fusion reactions, rapidly converting a large part of the hydrogen into heavier elements. The products of 'hot hydrogen burning' are then released into the interstellar medium as a result of the violent explosions. Studies of the element abundances observed in these events can provide important information about the stages of stellar evolution. During the outbursts of novae some radioactive isotopes are synthesized: in particular, short-lived nuclei such as {sup 13}N and {sup 18}F decay with subsequent emission of gamma radiation at energies below 511 keV. The gamma rays produced by electron-positron annihilation of positrons emitted in the decay of {sup 18}F are the most abundant and the first observable as soon as the atmosphere of the nova starts to become transparent to gamma radiation. Hence the importance of the study of nuclear reactions that lead both to the formation and to the destruction of {sup 18}F. Among these, the {sup 18}F(p,α){sup 15}O reaction is one of the main channels of destruction. This reaction was therefore studied at energies of astrophysical interest. The experiment done at RIKEN, Japan, has as its objective the study of the {sup 18}F(p,α){sup 15}O reaction, using a beam of {sup 18}F produced at CRIB, to derive important information about the phenomenon of novae. In this paper we present the experimental technique and the Monte Carlo code developed to be used in the data analysis process.

  13. Development of a Monte Carlo code for the data analysis of the 18F(p,α)15O reaction at astrophysical energies

    NASA Astrophysics Data System (ADS)

    Caruso, A.; Cherubini, S.; Spitaleri, C.; Crucillà, V.; Gulino, M.; La Cognata, M.; Lamia, L.; Rapisarda, G.; Romano, S.; Sergi, ML.; Kubono, S.; Yamaguchi, H.; Hayakawa, S.; Wakabayashi, Y.; Iwasa, N.; Kato, S.; Komatsubara, T.; Teranishi, T.; Coc, A.; Hammache, F.; de Séréville, N.

    2015-02-01

    Novae are astrophysical events (violent explosions) occurring in close binary systems consisting of a white dwarf and a main-sequence star or a star in a more advanced stage of evolution. They are called "narrow systems" because the two components interact with each other: there is a process of mass exchange resulting in the transfer of matter from the companion star to the white dwarf, leading to the formation around the latter of a so-called accretion disk, composed mainly of hydrogen. Over time, more and more material accumulates until the pressure and the temperature reached are sufficient to trigger nuclear fusion reactions, rapidly converting a large part of the hydrogen into heavier elements. The products of "hot hydrogen burning" are then released into the interstellar medium as a result of the violent explosions. Studies of the element abundances observed in these events can provide important information about the stages of stellar evolution. During the outbursts of novae some radioactive isotopes are synthesized: in particular, short-lived nuclei such as 13N and 18F decay with subsequent emission of gamma radiation at energies below 511 keV. The gamma rays produced by electron-positron annihilation of positrons emitted in the decay of 18F are the most abundant and the first observable as soon as the atmosphere of the nova starts to become transparent to gamma radiation. Hence the importance of the study of nuclear reactions that lead both to the formation and to the destruction of 18F. Among these, the 18F(p,α)15O reaction is one of the main channels of destruction. This reaction was therefore studied at energies of astrophysical interest. The experiment done at RIKEN, Japan, has as its objective the study of the 18F(p,α)15O reaction, using a beam of 18F produced at CRIB, to derive important information about the phenomenon of novae. In this paper we present the experimental technique and the Monte Carlo code developed to be used in the data analysis process.

  14. Approaching chemical accuracy with quantum Monte Carlo.

    PubMed

    Petruzielo, F R; Toulouse, Julien; Umrigar, C J

    2012-03-28

    A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space. PMID:22462844

  15. Studying the information content of TMDs using Monte Carlo generators

    SciTech Connect

    Avakian, H.; Matevosyan, H.; Pasquini, B.; Schweitzer, P.

    2015-02-05

    Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.

  16. Burnup calculation methodology in the serpent 2 Monte Carlo code

    SciTech Connect

    Leppaenen, J.; Isotalo, A.

    2012-07-01

    This paper presents two topics related to the burnup calculation capabilities in the Serpent 2 Monte Carlo code: advanced time-integration methods and improved memory management, accomplished by the use of different optimization modes. The development of the introduced methods is an important part of re-writing the Serpent source code, carried out for the purpose of extending the burnup calculation capabilities from 2D assembly-level calculations to large 3D reactor-scale problems. The progress is demonstrated by repeating a PWR test case, originally carried out in 2009 for the validation of the newly-implemented burnup calculation routines in Serpent 1. (authors)

  17. MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT

    SciTech Connect

    J. S. HENDRICK; G. W. MCKINNEY; L. J. COX

    2000-01-01

    The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNPW, which is a general-purpose, 3-dimensional, time-dependent, continuous-energy Monte Carlo fully-coupled N-Particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.

  18. Error modes in implicit Monte Carlo

    SciTech Connect

    Martin, William Russell,; Brown, F. B.

    2001-01-01

    The Implicit Monte Carlo (IMC) method of Fleck and Cummings [1] has been used for years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Larsen and Mercier [2] have shown that the IMC method violates a maximum principle that is satisfied by the exact solution to the radiative transfer equation. Except for [2] and related papers regarding the maximum principle, there have been no other published results regarding the analysis of errors or convergence properties for the IMC method. This work presents an exact error analysis for the IMC method by using the analytical solutions for infinite medium geometry (0-D) to determine closed form expressions for the errors. The goal is to gain insight regarding the errors inherent in the IMC method by relating the exact 0-D errors to multi-dimensional geometry. Additional work (not described herein) has shown that adding a leakage term (i.e., a 'buckling' term) to the 0-D equations has relatively little effect on the IMC errors analyzed in this paper, so that the 0-D errors should provide useful guidance for the errors observed in multi-dimensional simulations.

  19. Monte Carlo Production Management at CMS

    NASA Astrophysics Data System (ADS)

    Boudoul, G.; Franzoni, G.; Norkus, A.; Pol, A.; Srimanobhas, P.; Vlimant, J.-R.

    2015-12-01

    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) was developed and put into production in 2013. McM is based on recent server infrastructure technology (CherryPy + AngularJS) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capability to monitor the status and advancement of the event production.

  20. Quantitative analysis of optical properties of flowing blood using a photon-cell interactive Monte Carlo code: effects of red blood cells' orientation on light scattering

    NASA Astrophysics Data System (ADS)

    Sakota, Daisuke; Takatani, Setsuo

    2012-05-01

    Optical properties of flowing blood were analyzed using a photon-cell interactive Monte Carlo (pciMC) model with the physical properties of the flowing red blood cells (RBCs), such as cell size, shape, refractive index, distribution, and orientation, as the parameters. The scattering of light by flowing blood at the He-Ne laser wavelength of 632.8 nm was significantly affected by the shear rate. The light was scattered more in the direction of flow as the flow rate increased. Therefore, the light intensity transmitted forward in the direction perpendicular to the flow axis decreased. The pciMC model can duplicate the changes in photon propagation due to moving RBCs with various orientations. The RBC orientation that best simulated the experimental results was with the long axis perpendicular to the direction of blood flow. Moreover, the scattering probability was dependent on the orientation of the RBCs. Finally, the pciMC code was used to predict the hematocrit of flowing blood with an accuracy of approximately 1.0 HCT%. The photon-cell interactive Monte Carlo (pciMC) model can provide optical properties of flowing blood and will facilitate the development of non-invasive monitoring of blood in extracorporeal circulatory systems.

  1. Analysis of electron auroras based on the Monte Carlo method: Application to active electron arc auroras observed by the sounding rocket at Syowa Station

    NASA Astrophysics Data System (ADS)

    Onda, Kunizo; Ejiri, Masaki; Itikawa, Yukikazu

    A downward electron differential number flux, the absolute photoemission rate for the (0, 1) band of the first negative band system of the molecular nitrogen ion, and the number density of thermal electrons were simultaneously measured by the sounding rocket S-310JA-8 launched toward active auroral arcs at a substorm expansion phase on April 4, 1984, from Syowa Station in Antarctica. We apply the Monte Carlo method to analyze these observed results. The MSIS-86 model is employed to represent the atmospheric number density and temperature in the aurora observed by this experiment. Only N2, O, and O2 are taken into account as constituent elements of the atmosphere. Electrons are injected downward into the upper atmosphere at the altitude of 200 km, at which the downward electron differential number flux was measured. An initial electron energy is considered in the range of 0.1-18 keV. It is assumed that an initial pitch angle is uniformly distributed in the range of [0, π/2]. Excitation and ionization rates of N2, O, and O2 are calculated as a function of altitude, the initial pitch angle, and the initial electron energy. Production and emission rates of the N2+ 1N (0, 1) band are deduced by using these calculated rates. Time variation of the observed absolute intensity of this band is reasonably well reproduced by the Monte Carlo method combined with the measured electron number flux.

  2. Quantum Monte Carlo calculations of light nuclei

    SciTech Connect

    Pieper, S.C.

    1998-12-01

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on {sup 3}H, {sup 4}He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However our results for excitation spectra are very good and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed. © 1998 American Institute of Physics.

  3. Quantum Monte Carlo calculations of light nuclei

    SciTech Connect

    Pieper, Steven C.

    1998-12-21

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on {sup 3}H, {sup 4}He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However our results for excitation spectra are very good and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  4. Quantum Monte Carlo calculations of light nuclei.

    SciTech Connect

    Pieper, S. C.

    1998-08-25

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on {sup 3}H, {sup 4}He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However our results for excitation spectra are very good and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  5. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
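
    The classical baseline being sped up is ordinary Monte Carlo mean estimation, whose RMS error scales as sigma/sqrt(N): halving the error costs four times the samples, which a near-quadratic quantum speedup would reduce to roughly twice. The sketch below demonstrates the classical scaling with an arbitrary bounded-variance subroutine.

```python
# Classical 1/sqrt(N) error scaling of Monte Carlo mean estimation.
import numpy as np

rng = np.random.default_rng(7)
subroutine = lambda n: np.sin(rng.uniform(0, np.pi, n)) ** 2   # true mean 0.5

for N in (100, 10_000, 1_000_000):
    estimates = [subroutine(N).mean() for _ in range(100)]     # 100 replicates
    print(f"N = {N:>9,}: RMS error = {np.std(estimates):.2e}")
```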

  6. Spatial Correlations in Monte Carlo Criticality Simulations

    NASA Astrophysics Data System (ADS)

    Dumonteil, E.; Malvagi, F.; Zoia, A.; Mazzolo, A.; Artusio, D.; Dieudonné, C.; De Mulatier, C.

    2014-06-01

    Temporal correlations arising in Monte Carlo criticality codes have focused the attention of both developers and practitioners for a long time. Those correlations affect the evaluation of tallies of loosely coupled systems, where the system's typical size is very large compared to the diffusion/absorption length scale of the neutrons. These time correlations are closely related to spatial correlations, both variables being linked by the transport equation. This paper therefore addresses the question of diagnosing spatial correlations in Monte Carlo criticality simulations. To that aim, we will propose a spatial correlation function well suited to Monte Carlo simulations, and show its use while simulating a fuel pin-cell. The results will be discussed, modeled and interpreted using the tools of branching processes of statistical mechanics. A mechanism called "neutron clustering", which affects simulations, will be discussed in this framework.
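
    The clustering mechanism can be reproduced in a toy model. The fragment below, with all parameters invented, runs a critical branching random walk with crude population control in a periodic box and compares the bin-count variance against the Poisson-like value expected of uncorrelated, uniformly distributed particles.

```python
# Toy critical branching random walk: particles die or split with equal
# probability, diffuse a little, and the population is resampled back to a
# fixed size each generation. Clustering shows up as a bin-count variance
# far above the Poisson-like uniform value.
import numpy as np

rng = np.random.default_rng(11)
pos = rng.uniform(0.0, 1.0, 1000)              # initial uniform positions
for _ in range(200):                           # generations
    offspring = 2 * rng.integers(0, 2, pos.size)   # 0 or 2 children (critical)
    pos = np.repeat(pos, offspring)
    pos = (pos + rng.normal(0.0, 0.01, pos.size)) % 1.0  # diffuse, periodic box
    pos = rng.choice(pos, 1000, replace=True)  # renormalize the population

counts, _ = np.histogram(pos, bins=20, range=(0.0, 1.0))
print("bin counts:", counts)
print(f"variance/mean = {counts.var() / counts.mean():.1f} "
      f"(about 1 if uncorrelated)")
```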

  7. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft relies heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
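
    As a rough illustration of such a pipeline, the sketch below wires scikit-learn's sequential feature selection around a k-nearest-neighbor classifier on synthetic "dispersion" data; it mirrors the ingredients named above but is not the authors' tool, and the data and failure rule are invented.

```python
# Hedged sketch: sequential feature selection + kNN to rank which Monte
# Carlo dispersion inputs predict a failure flag. Data are synthetic.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                # 10 dispersed design parameters
# Hypothetical failure, driven by a combination of parameters 2 and 7 only.
y = ((X[:, 2] + 0.8 * X[:, 7]) > 1.5).astype(int)

knn = KNeighborsClassifier(n_neighbors=5)
sfs = SequentialFeatureSelector(knn, n_features_to_select=3,
                                direction="forward")
sfs.fit(X, y)
print("selected parameters:", np.flatnonzero(sfs.get_support()))
print("accuracy on selected:",
      knn.fit(sfs.transform(X), y).score(sfs.transform(X), y))
```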

  8. Monte Carlo and detector simulation in OOP (Object-Oriented Programming)

    SciTech Connect

    Atwood, W.B.; Blankenbecler, R.; Kunz, P. ); Burnett, T.; Storr, K.M. . ECP Div.)

    1990-10-01

    Object-Oriented Programming techniques are explored with an eye toward applications in High Energy Physics codes. Two prototype examples are given: McOOP (a particle Monte Carlo generator) and GISMO (a detector simulation/analysis package).

  9. Finding Planet Nine: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-06-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30°, and an argument of perihelion of 150°. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters and focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one out of the four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node and two of them are compatible with an apsidal anti-alignment scenario. In addition and after studying the current statistics of ETNOs, a cautionary note on the robustness of the perihelia clustering is presented.
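
    The geometry behind the "four areas" statement can be sketched directly: for a given longitude of the ascending node, the aphelion direction follows from the standard orbital-element rotations. The Python fragment below draws elements around the predicted values (the spreads and the four node values are illustrative assumptions) and prints the median aphelion direction for each node.

```python
# Sketch: map a longitude of ascending node to the ecliptic direction of
# aphelion via the classical orbital-element rotations.
import numpy as np

rng = np.random.default_rng(9)
n = 10_000
for node_deg in (0.0, 90.0, 180.0, 270.0):     # illustrative node values
    i = np.radians(rng.normal(30.0, 5.0, n))   # inclination
    w = np.radians(rng.normal(150.0, 15.0, n)) # argument of perihelion
    Om = np.radians(rng.normal(node_deg, 5.0, n))
    # Unit vector toward perihelion; aphelion is its antipode.
    px = np.cos(Om) * np.cos(w) - np.sin(Om) * np.sin(w) * np.cos(i)
    py = np.sin(Om) * np.cos(w) + np.cos(Om) * np.sin(w) * np.cos(i)
    pz = np.sin(w) * np.sin(i)
    lon = np.degrees(np.arctan2(-py, -px)) % 360.0
    lat = np.degrees(np.arcsin(-pz))
    print(f"node {node_deg:5.1f} deg -> aphelion near "
          f"lon {np.median(lon):5.1f} deg, lat {np.median(lat):+5.1f} deg")
```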

  10. Markov Chain Monte Carlo and Irreversibility

    NASA Astrophysics Data System (ADS)

    Ottobre, Michela

    2016-06-01

    Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as its invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes, but for computational purposes one needs to discretize such dynamics, and it is well known that the resulting discretized chain will not, in general, retain all the good properties of the process from which it is obtained; in particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible; therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of nonreversible dynamics.
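
    For contrast with the irreversible samplers discussed above, here is a minimal sketch of the reversible baseline: a random-walk Metropolis chain whose symmetric proposal plus accept/reject rule enforces detailed balance with respect to the target (a standard Gaussian in this toy):

```python
import numpy as np

def metropolis(logpi, x0, steps, scale, rng):
    """Random-walk Metropolis: the proposal is symmetric, so accepting with
    probability min(1, pi(y)/pi(x)) enforces detailed balance (reversibility)."""
    x = x0
    out = np.empty(steps)
    for i in range(steps):
        y = x + scale * rng.normal()
        if np.log(rng.uniform()) < logpi(y) - logpi(x):
            x = y
        out[i] = x
    return out

rng = np.random.default_rng(3)
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 50_000, 2.4, rng)  # target: N(0,1)
print(samples.mean(), samples.std())   # ~0 and ~1
```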

  11. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  12. Monte Carlo simulation framework for TMT

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Angeli, George Z.

    2008-07-01

    This presentation describes a strategy for assessing the performance of the Thirty Meter Telescope (TMT). A Monte Carlo Simulation Framework has been developed to combine optical modeling with Computational Fluid Dynamics (CFD) simulations, Finite Element Analysis (FEA) and controls to model the overall performance of TMT. The framework consists of a two-year record of observed environmental parameters such as atmospheric seeing, site wind speed and direction, ambient temperature and local sunset and sunrise times, along with telescope azimuth and elevation at a given sampling rate. The modeled optical, dynamic and thermal seeing aberrations are available in matrix form for distinct values within the range of influencing parameters. These parameters are either part of the framework parameter set or can be derived from them at each time-step. As time advances, the aberrations are interpolated and combined based on the current values of their parameters. Different scenarios can be generated based on operating parameters such as venting strategy, optical calibration frequency and heat source control. Performance probability distributions are obtained and provide design guidance. The sensitivity of the system to design, operating and environmental parameters can be assessed in order to maximize the percentage of time the system meets the performance specifications.

  13. Fast quantum Monte Carlo on a GPU

    NASA Astrophysics Data System (ADS)

    Lutsyshyn, Y.

    2015-02-01

    We present a scheme for the parallelization of quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.

  14. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D. Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  15. Geodesic Monte Carlo on Embedded Manifolds.

    PubMed

    Byrne, Simon; Girolami, Mark

    2013-12-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton-Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
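
    The paper's samplers are built on geodesic flows; the sketch below illustrates only that core ingredient in the simplest setting, a Metropolis chain on the unit sphere whose proposals move along great circles, targeting a von Mises-Fisher distribution (this is a simplified stand-in, not the authors' Hamiltonian scheme):

```python
import numpy as np

def geodesic_step(x, rng, step=0.5):
    """Propose a move along a great circle of the unit sphere: draw a tangent
    vector v at x, then follow the geodesic exp_x(t v)."""
    v = rng.normal(size=x.shape)
    v -= v.dot(x) * x                    # project into the tangent space at x
    v /= np.linalg.norm(v)
    ang = step * rng.normal()
    return np.cos(ang) * x + np.sin(ang) * v

def sample_vmf(mu, kappa, n, rng):
    """Metropolis on the sphere with geodesic proposals; target: von Mises-Fisher."""
    x = mu.copy()
    out = []
    for _ in range(n):
        y = geodesic_step(x, rng)
        if np.log(rng.uniform()) < kappa * (y - x).dot(mu):
            x = y
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(4)
mu = np.array([0.0, 0.0, 1.0])
draws = sample_vmf(mu, kappa=10.0, n=20_000, rng=rng)
print(draws.mean(axis=0))   # concentrated along mu
```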

  16. Monte Carlo dose computation for IMRT optimization*

    NASA Astrophysics Data System (ADS)

    Laub, W.; Alber, M.; Birkner, M.; Nüsslin, F.

    2000-07-01

    A method which combines the accuracy of Monte Carlo dose calculation with a finite size pencil-beam based intensity modulation optimization is presented. The pencil-beam algorithm is employed to compute the fluence element updates for a converging sequence of Monte Carlo dose distributions. The combination is shown to improve results over the pencil-beam based optimization in a lung tumour case and a head and neck case. Inhomogeneity effects like a broader penumbra and dose build-up regions can be compensated for by intensity modulation.

  17. Monte Carlo simulation of an expanding gas

    NASA Technical Reports Server (NTRS)

    Boyd, Iain D.

    1989-01-01

    By application of simple computer graphics techniques, the statistical performance of two Monte Carlo methods used in the simulation of rarefied gas flows is assessed. Specifically, two direct simulation Monte Carlo (DSMC) methods developed by Bird and Nanbu are considered. The graphics techniques are found to be of great benefit in the reduction and interpretation of the large volume of data generated, thus enabling important conclusions to be drawn about the simulation results. It is discovered that the method of Nanbu suffers from increased statistical fluctuations, thereby prohibiting its use in the solution of practical problems.

  18. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  19. Advanced interacting sequential Monte Carlo sampling for inverse scattering

    NASA Astrophysics Data System (ADS)

    Giraud, F.; Minvielle, P.; Del Moral, P.

    2013-09-01

    The following electromagnetism (EM) inverse problem is addressed. It consists of estimating the local radioelectric properties of materials covering an object from global EM scattering measurements, at various incidences and wave frequencies. This large-scale ill-posed inverse problem is explored by an intensive exploitation of an efficient 2D Maxwell solver, distributed on high performance computing machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, from which Bayesian inference can be performed. Considering the radioelectric properties as a hidden dynamic stochastic process that evolves according to the frequency, it is shown how advanced Markov chain Monte Carlo methods—called sequential Monte Carlo or interacting particles—can take advantage of the structure and provide local EM property estimates.

  20. Computer Monte Carlo simulation in quantitative resource estimation

    USGS Publications Warehouse

    Root, D.H.; Menzie, W.D.; Scott, W.A.

    1992-01-01

    The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
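
    A minimal sketch of step (3), with an invented probability mass function for the number of deposits and invented lognormal grade-tonnage parameters; the grade-tonnage dependencies that the paper models are ignored here for brevity:

```python
import numpy as np

rng = np.random.default_rng(5)
trials = 20_000
# hypothetical elicited pmf for the number of undiscovered deposits
n_deposits = rng.choice([0, 1, 2, 3, 5], p=[0.3, 0.3, 0.2, 0.15, 0.05], size=trials)
total_metal = np.zeros(trials)
for i, n in enumerate(n_deposits):
    if n:
        tonnage = rng.lognormal(mean=13.0, sigma=1.5, size=n)   # tonnes of ore
        grade = rng.lognormal(mean=-5.0, sigma=0.8, size=n)     # metal fraction
        total_metal[i] = (tonnage * grade).sum()

print("P(no contained metal) =", (total_metal == 0).mean())
print("median / 90th percentile of contained metal [t]:",
      np.percentile(total_metal, [50, 90]))
```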

  1. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the geant4 Monte Carlo code

    PubMed Central

    Guan, Fada; Peeler, Christopher; Bronk, Lawrence; Geng, Changran; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Grosshans, David; Mohan, Radhe; Titt, Uwe

    2015-01-01

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the geant4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from geant4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LET_t, and dose-averaged LET, LET_d) using geant4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in geant4 can result in incorrect LET_d calculation results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm to determine fluctuations in energy deposition along the
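
    The two averages contrasted above have standard estimators. The sketch below evaluates both on synthetic per-step records (track length, energy deposited); it illustrates only the definitions, not the geant4 tallies used in the study:

```python
import numpy as np

rng = np.random.default_rng(6)
# synthetic per-step records: step length [um] and local dE/dx [keV/um]
step_len = rng.uniform(0.5, 2.0, size=100_000)
let_local = rng.gamma(shape=2.0, scale=0.4, size=100_000)   # illustrative spectrum
edep = let_local * step_len                                 # energy deposited per step

# track-averaged LET: dE/dx weighted by track length (fluence weighting)
let_t = edep.sum() / step_len.sum()
# dose-averaged LET: each step's dE/dx weighted by the energy it deposits
let_d = (edep * (edep / step_len)).sum() / edep.sum()
print(f"LET_t = {let_t:.2f} keV/um, LET_d = {let_d:.2f} keV/um")
```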

  2. MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS

    SciTech Connect

    R. ESTEP; ET AL

    2000-06-01

    Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
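
    A minimal sketch of the replicate idea, assuming Poisson resampling of the observed counts and a trivial stand-in for the analysis algorithm (the actual TGS/CTEN reconstructions are far more complex):

```python
import numpy as np

rng = np.random.default_rng(7)
observed = rng.poisson(lam=200.0, size=64)        # one set of nuclear counting data

def assay(counts):
    """Stand-in for a complex analysis algorithm (e.g., image reconstruction)."""
    return np.sqrt(counts).sum()

# Monte Carlo randomization: resample each channel as Poisson about the observed
# counts, push every replicate through the full analysis, and take the spread.
replicates = [assay(rng.poisson(observed)) for _ in range(200)]
print(f"assay = {assay(observed):.2f} +/- {np.std(replicates, ddof=1):.2f}")
```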

  3. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  4. APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula

    SciTech Connect

    Hwang, M.; Bae, S.; Chung, B. D.

    2012-07-01

    An analysis of the uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined from the PIRT results of the BEMUSE project. The Monte Carlo method shows that the 95th-percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. The extra margin given by Wilks' formula over the true 95th-percentile PCT from the Monte Carlo method was rather large: even using the 3rd-order formula, the value calculated with Wilks' formula is nearly 100 K over the true value. It is shown that, with ever increasing computational capability, the Monte Carlo method is feasible for nuclear power plant safety analysis within a realistic time frame. (authors)
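
    The comparison can be reproduced in miniature: the sketch below draws a synthetic Gaussian "PCT" population, applies the third-order Wilks rule (with 124 runs, the third-largest result bounds the true 95th percentile with 95% confidence), and tallies both the coverage and the extra margin:

```python
import numpy as np

rng = np.random.default_rng(8)
true_95 = np.percentile(rng.normal(1000.0, 50.0, 10_000_000), 95)  # reference PCT [K]

# Wilks' formula, 3rd order: with n = 124 runs, the 3rd-largest result is a
# one-sided 95%/95% bound on the true 95th percentile.
hits, margins = 0, []
for _ in range(2000):
    sample = rng.normal(1000.0, 50.0, 124)
    bound = np.sort(sample)[-3]
    hits += bound >= true_95
    margins.append(bound - true_95)

print(f"coverage = {hits / 2000:.3f} (should be >= 0.95)")
print(f"median extra margin over the true 95th percentile: {np.median(margins):.1f} K")
```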

  5. Noninvasive optical measurement of bone marrow lesions: a Monte Carlo study on visible human dataset

    NASA Astrophysics Data System (ADS)

    Su, Yu; Li, Ting

    2016-03-01

    Bone marrow is both the main hematopoietic organ and an important immune organ. Bone marrow lesions (BMLs) may cause a series of severe complications and even myeloma. The traditional diagnosis of BMLs relies mostly on bone marrow biopsy/puncture, and sometimes MRI, X-ray, etc., which are either invasive and dangerous, or ionizing and costly. A diagnostic technology that is noninvasive, safe, low cost, and capable of real-time continuous detection is needed. Here we report our preliminary exploration of the feasibility of using near-infrared spectroscopy (NIRS) in the clinical diagnosis of BMLs through a Monte Carlo simulation study. We simulated and visualized the light propagation in the bone marrow quantitatively with a Monte Carlo simulation software package for 3D voxelized media and the Visible Chinese Human data set, which faithfully represents human anatomy. The results indicate that bone marrow actually has significant effects on light propagation. According to a sequence of simulations and data analysis, the optimal source-detector separation was suggested to be narrowed down to 2.8-3.2 cm, at which separation the spatial sensitivity distribution of NIRS covers most of the bone marrow region with a high signal-to-noise ratio. The layout of the sources and detectors was optimized as well. This study investigated light transport in the spine, addressing the BML detection issue, and establishes the theoretical feasibility of noninvasive NIRS detection of BMLs. The optimized probe design of the coming NIRS-based BML detector is also provided.

  6. Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations

    SciTech Connect

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.

  7. Influence of a fat layer on the near infrared spectra of human muscle: quantitative analysis based on two-layered Monte Carlo simulations and phantom experiments

    NASA Technical Reports Server (NTRS)

    Yang, Ye; Soyemi, Olusola O.; Landry, Michelle R.; Soller, Babs R.

    2005-01-01

    The influence of fat thickness on the diffuse reflectance spectra of muscle in the near infrared (NIR) region is studied by Monte Carlo simulations of a two-layer structure and with phantom experiments. A polynomial relationship was established between the fat thickness and the detected diffuse reflectance. The influence of a range of optical coefficients (absorption and reduced scattering) for fat and muscle over the known range of human physiological values was also investigated. Subject-to-subject variation in the fat optical coefficients and thickness can be ignored if the fat thickness is less than 5 mm. A method was proposed to correct for the fat thickness influence. © 2005 Optical Society of America.

  8. Autocorrelation and Dominance Ratio in Monte Carlo Criticality Calculations

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Kornreich, Drew E.

    2003-11-15

    The cycle-to-cycle correlation (autocorrelation) in Monte Carlo criticality calculations is analyzed concerning the dominance ratio of fission kernels. The mathematical analysis focuses on how the eigenfunctions of a fission kernel decay if operated on by the cycle-to-cycle error propagation operator of the Monte Carlo stationary source distribution. The analytical results obtained can be summarized as follows: When the dominance ratio of a fission kernel is close to unity, autocorrelation of the k-effective tallies is weak and may be negligible, while the autocorrelation of the source distribution is strong and decays slowly. The practical implication is that when one analyzes a critical reactor with a large dominance ratio by Monte Carlo methods, the confidence interval estimation of the fission rate and other quantities at individual locations must account for the strong autocorrelation. Numerical results are presented for sample problems with a dominance ratio of 0.85-0.99, where Shannon and relative entropies are utilized to exclude the influence of initial nonstationarity.
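
    The practical implication above can be illustrated with a toy AR(1) series standing in for cycle-wise tallies; the sketch estimates the autocorrelation function and deflates the effective sample size before quoting a standard error (all parameters invented):

```python
import numpy as np

def autocorr(x, max_lag):
    x = x - x.mean()
    var = x.var()
    return np.array([(x[:-k] * x[k:]).mean() / var for k in range(1, max_lag + 1)])

rng = np.random.default_rng(9)
# synthetic cycle-wise tallies with AR(1) correlation mimicking a high dominance ratio
rho, n = 0.9, 20_000
noise = rng.normal(size=n)
tally = np.empty(n)
tally[0] = noise[0]
for i in range(1, n):
    tally[i] = rho * tally[i - 1] + np.sqrt(1 - rho ** 2) * noise[i]

acf = autocorr(tally, max_lag=100)
n_eff = n / (1 + 2 * acf[acf > 0.05].sum())   # crude truncated-sum estimate
print(f"naive std error {tally.std(ddof=1) / np.sqrt(n):.4f} vs "
      f"autocorrelation-corrected {tally.std(ddof=1) / np.sqrt(n_eff):.4f}")
```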

  9. On Monte Carlo Methods and Applications in Geoscience

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Blais, J.

    2009-05-01

    Monte Carlo methods are designed to study various deterministic problems using probabilistic approaches, with computer simulations used to explore much wider possibilities for the different algorithms. Pseudo-Random Number Generators (PRNGs) are based on linear congruences of some large prime numbers, while Quasi-Random Number Generators (QRNGs) provide low-discrepancy sequences, both of which give uniformly distributed numbers in (0,1). Chaotic Random Number Generators (CRNGs) give sequences of 'random numbers' satisfying some prescribed probability density, often denser around the two corners of the interval (0,1), but transforming this type of density to a uniform one is usually possible. Markov Chain Monte Carlo (MCMC), as indicated by its name, is associated with Markov chain simulations. Basic descriptions of these random number generators will be given, and a comparative analysis of the four methods will be included, based on their efficiencies and other characteristics. Some applications in geoscience using Monte Carlo simulations will be described, and a comparison of these algorithms will also be included, with some concluding remarks.
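
    The PRNG/QRNG distinction is easy to demonstrate on a toy integral. The sketch below hand-rolls a two-dimensional Halton low-discrepancy sequence (radical inverses in bases 2 and 3) and compares its integration error against pseudo-random points; the integrand is arbitrary:

```python
import numpy as np

def halton(n, base):
    """Radical-inverse (van der Corput) sequence in the given base."""
    out = np.zeros(n)
    for i in range(1, n + 1):
        f, k, x = 1.0, i, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        out[i - 1] = x
    return out

n = 4096
rng = np.random.default_rng(10)
# integrate f(x, y) = x * y over the unit square; exact value = 0.25
xy_pr = rng.uniform(size=(n, 2))                          # pseudo-random points
xy_qr = np.column_stack([halton(n, 2), halton(n, 3)])     # low-discrepancy points
for name, pts in [("PRNG", xy_pr), ("QRNG", xy_qr)]:
    est = (pts[:, 0] * pts[:, 1]).mean()
    print(f"{name}: estimate {est:.5f}, error {abs(est - 0.25):.2e}")
```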

  10. A Monte Carlo multimodal inversion of surface waves

    NASA Astrophysics Data System (ADS)

    Maraschini, Margherita; Foti, Sebastiano

    2010-09-01

    The analysis of surface wave propagation is often used to estimate the S-wave velocity profile at a site. In this paper, we propose a stochastic approach for the inversion of surface waves, which allows apparent dispersion curves to be inverted. The inversion method is based on the integrated use of two misfit functions: one based on the determinant of the Haskell-Thomson matrix, and a classical Euclidean distance between the dispersion curves. The former allows all the modes of the dispersion curve to be taken into account with a very limited computational cost, because it avoids the explicit calculation of the dispersion curve for each tentative model. It is used in a Monte Carlo inversion with a large population of profiles. In a subsequent step, the selection of representative models is obtained by applying a Fisher test, based on the Euclidean distance between the experimental and the synthetic dispersion curves, to the best models of the Monte Carlo inversion. This procedure allows the set of selected models to be identified on the basis of the data quality. It also mitigates the influence of local minima that can affect the Monte Carlo results. The effectiveness of the procedure is shown for synthetic and real experimental data sets, where the advantages of the two-stage procedure are highlighted. In particular, the determinant misfit allows the computation of large populations in stochastic algorithms with a limited computational cost.

  11. Improved diffusion coefficients generated from Monte Carlo codes

    SciTech Connect

    Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.

    2013-07-01

    Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters, including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization: whether to weight fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices, with an L2-norm error of 3.6%. This error is reduced significantly, to 0.27%, when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)
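
    The homogenization choice above matters because 1/<Σ_tr> differs from <1/Σ_tr>. A toy collapse over four fine groups (all numbers invented) makes the difference concrete:

```python
import numpy as np

# fine-group flux, transport cross sections [1/cm], and diffusion coefficients
phi = np.array([0.8, 1.0, 1.3, 0.9])
sig_tr = np.array([0.20, 0.25, 0.35, 0.50])
d_fine = 1.0 / (3.0 * sig_tr)

# option 1: flux-weight the transport cross section, then take D = 1/(3*sigma_tr)
sig_tr_coll = (phi * sig_tr).sum() / phi.sum()
d_from_sigma = 1.0 / (3.0 * sig_tr_coll)
# option 2: flux-weight the fine-group diffusion coefficients directly
d_from_d = (phi * d_fine).sum() / phi.sum()

print(f"D from collapsed sigma_tr:          {d_from_sigma:.4f} cm")
print(f"D from flux-weighted fine-group D:  {d_from_d:.4f} cm")
```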

  12. A quasi-Monte Carlo Metropolis algorithm

    PubMed Central

    Owen, Art B.; Tribble, Seth D.

    2005-01-01

    This work presents a version of the Metropolis–Hastings algorithm using quasi-Monte Carlo inputs. We prove that the method yields consistent estimates in some problems with finite state spaces and completely uniformly distributed inputs. In some numerical examples, the proposed method is much more accurate than ordinary Metropolis–Hastings sampling. PMID:15956207

  13. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)

  14. A comparison of Monte Carlo generators

    NASA Astrophysics Data System (ADS)

    Golan, Tomasz

    2015-05-01

    A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π+ two-dimensional energy vs cosine distribution.

  15. Monte Carlo simulations of lattice gauge theories

    SciTech Connect

    Rebbi, C

    1980-02-01

    Monte Carlo simulations done for four-dimensional lattice gauge systems are described, where the gauge group is one of the following: U(1); SU(2); Z_N, i.e., the subgroup of U(1) consisting of the elements e^(2πin/N) with integer n and N; the eight-element group of quaternions, Q; the 24- and 48-element subgroups of SU(2), denoted by T and O, which reduce to the rotation groups of the tetrahedron and the octahedron when their centers Z_2 are factored out. All of these groups can be considered subgroups of SU(2) and a common normalization was used for the action. The following types of Monte Carlo experiments are considered: simulations of a thermal cycle, where the temperature of the system is varied slightly every few Monte Carlo iterations and the internal energy is measured; mixed-phase runs, where several Monte Carlo iterations are done at a few temperatures near a phase transition, starting with a lattice which is half ordered and half disordered; measurements of averages of Wilson factors for loops of different shape. 5 figures, 1 table. (RWR)

  16. A comparison of Monte Carlo generators

    SciTech Connect

    Golan, Tomasz

    2015-05-15

    A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π+ two-dimensional energy vs cosine distribution.

  17. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  18. SU-E-T-235: Monte Carlo Analysis of the Dose Enhancement in the Scalp of Patients Due to Titanium Plate Backscatter During Post-Operative Radiotherapy

    SciTech Connect

    Hardin, M; Elson, H; Lamba, M; Wolf, E; Warnick, R

    2014-06-01

    Purpose: To quantify the clinically observed dose enhancement adjacent to cranial titanium fixation plates during post-operative radiotherapy. Methods: Irradiation of a titanium burr hole cover was simulated using the Monte Carlo code MCNPX for a 6 MV photon spectrum to investigate backscatter dose enhancement due to increased production of secondary electrons within the titanium plate. The simulated plate was placed 3 mm deep in a water phantom, and dose deposition was tallied in 0.2 mm thick cells adjacent to the entrance and exit sides of the plate. These results were compared to a simulation excluding the presence of the titanium to calculate the relative dose enhancement on the entrance and exit sides of the plate. To verify the simulated results, two titanium burr hole covers (Synthes, Inc. and Biomet, Inc.) were irradiated with 6 MV photons in a solid water phantom containing GafChromic MD-55 film. The phantom was irradiated on a Varian 21EX linear accelerator at multiple gantry angles (0-180 degrees) to analyze the angular dependence of the backscattered radiation. Relative dose enhancement was quantified using computer software. Results: Monte Carlo simulations indicate a relative difference of 26.4% and 7.1% on the entrance and exit sides of the plate, respectively. Film dosimetry results using a similar geometry indicate a relative difference of 13% and -10% on the entrance and exit sides of the plate, respectively. Relative dose enhancement on the entrance side of the plate decreased with increasing gantry angle from 0 to 180 degrees. Conclusion: Film and simulation results demonstrate an increase in dose to structures immediately adjacent to cranial titanium fixation plates. Increased beam obliquity has been shown to alleviate dose enhancement to some extent. These results are consistent with clinically observed effects.

  19. Application of the Theory of Functional Monte Carlo Algorithms to Optimization of the DSMC Method

    NASA Astrophysics Data System (ADS)

    Plotnikov, M. Yu.; Shkarupa, E. V.

    2008-12-01

    Some approaches to error analysis and optimization of the Direct Simulation Monte Carlo method are presented. The main idea of this work is the construction of relations between the sample size and the number of cells that guarantee attainment of a given error level, on the basis of the theory of functional Monte Carlo algorithms. The optimal (in the sense of the obtained upper error bound) values of the sample size and the number of cells are constructed.

  20. Time-step limits for a Monte Carlo Compton-scattering method

    SciTech Connect

    Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B

    2009-01-01

    We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.

  1. Composite sequential Monte Carlo test for post-market vaccine safety surveillance.

    PubMed

    Silva, Ivair R

    2016-04-30

    Group sequential hypothesis testing is now widely used to analyze prospective data. If Monte Carlo simulation is used to construct the signaling threshold, the challenge is how to manage the type I error probability for each one of the multiple tests without losing control of the overall significance level. This paper introduces a valid method for true management of the alpha spending at each one of a sequence of Monte Carlo tests. The method also enables the use of a sequential simulation strategy for each Monte Carlo test, which is useful for saving computational execution time. Thus, the proposed procedure allows for sequential Monte Carlo testing in sequential analysis, and this is the reason it is called a 'composite sequential' test. An upper bound for the potential power losses from the proposed method is deduced. The composite sequential design is illustrated through an application to post-market vaccine safety surveillance data. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26561330
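
    The abstract does not spell out the sequential simulation strategy; a classic example of the ingredient is the Besag-Clifford sequential Monte Carlo p-value, sketched below with an invented data set and test statistic (this illustrates early stopping only, not Silva's composite procedure):

```python
import numpy as np

def sequential_mc_pvalue(t_obs, simulate, h=10, n_max=999, rng=None):
    """Sequential Monte Carlo p-value in the style of Besag & Clifford (1991):
    stop as soon as h simulated statistics reach the observed one, which caps
    the simulation effort for clearly non-significant tests."""
    rng = rng or np.random.default_rng()
    exceed = 0
    for n in range(1, n_max + 1):
        if simulate(rng) >= t_obs:
            exceed += 1
            if exceed == h:
                return h / n              # early stop
    return (exceed + 1) / (n_max + 1)

rng = np.random.default_rng(11)
data = rng.normal(0.4, 1.0, 30)           # observed sample
t_obs = data.mean()
p = sequential_mc_pvalue(t_obs, lambda r: r.normal(0.0, 1.0, 30).mean(), rng=rng)
print(f"sequential Monte Carlo p-value: {p:.3f}")
```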

  2. Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy

    SciTech Connect

    Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W

    2002-02-19

    Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to critical normal tissues, such as bone marrow, kidneys, and lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different than for conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar) and 3D (SPECT, or single photon emission computed tomography) images to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution, overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy, and describes early validation and implementation results of Monte Carlo simulations.

  3. Monte Carlo simulation of light propagation in the adult brain

    NASA Astrophysics Data System (ADS)

    Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter

    2004-06-01

    When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) with a voxel resolution of 0.8 x 0.8 x 0.8 mm^3, each for three different pairs of T1/T2 values. The MRI data were used to define the material light absorption and dispersion coefficient for each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was furthermore increased by using a constant optical path length for the photons, which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extracerebral contamination are included.
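
    As a much-simplified illustration of how such simulations estimate a differential pathlength factor, the sketch below runs an isotropic-scattering photon random walk in a homogeneous semi-infinite medium with invented optical coefficients and scores the path lengths of photons re-emerging near a sensor ring; the voxelized anatomical model of the study is far beyond this toy, and the pure-Python loop is slow:

```python
import numpy as np

rng = np.random.default_rng(12)
mu_a, mu_s = 0.01, 1.0       # absorption / (reduced) scattering coefficients [1/mm]
mu_t = mu_a + mu_s
d_sensor = 45.0              # source-detector separation [mm]
detected = []

for _ in range(50_000):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])    # photon launched into the tissue (z > 0)
    path, weight = 0.0, 1.0
    while True:
        step = rng.exponential(1.0 / mu_t)
        pos = pos + step * direction
        path += step
        if pos[2] < 0.0:                      # photon re-emerges through the surface
            if abs(np.hypot(pos[0], pos[1]) - d_sensor) < 5.0:
                detected.append((path, weight))
            break
        weight *= mu_s / mu_t                 # survival weighting for absorption
        if weight < 1e-3:                     # crude termination (no roulette)
            break
        v = rng.normal(size=3)                # isotropic scattering direction
        direction = v / np.linalg.norm(v)

if detected:
    paths, weights = np.array(detected).T
    print(f"DPF ~ {np.average(paths, weights=weights) / d_sensor:.1f}")
```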

  4. Extending Diffusion Monte Carlo to Internal Coordinates

    NASA Astrophysics Data System (ADS)

    Petit, Andrew S.; McCoy, Anne B.

    2013-06-01

    Diffusion Monte Carlo (DMC) is a powerful technique for studying the properties of molecules and clusters that undergo large-amplitude, zero-point vibrational motions. However, the overall applicability of the method is limited by the need to work in Cartesian coordinates and therefore have available a full-dimensional potential energy surface (PES). As a result, the development of a reduced-dimensional DMC methodology has the potential to significantly extend the range of problems that DMC can address by allowing the calculations to be performed in the subset of coordinates that is physically relevant to the questions being asked, thereby eliminating the need for a full-dimensional PES. As a first step towards this goal, we describe here an internal coordinate extension of DMC that places no constraints on the choice of internal coordinates other than requiring them all to be independent. Using H_3^+ and its isotopologues as model systems, we demonstrate that the methodology is capable of successfully describing the ground state properties of highly fluxional molecules as well as, in conjunction with the fixed-node approximation, the ν=1 vibrationally excited states. The calculations of the fundamentals of H_3^+ and its isotopologues provided general insights into the properties of the nodal surfaces of vibrationally excited states. Specifically, we will demonstrate that analysis of ground state probability distributions can point to the set of coordinates that are less strongly coupled and therefore more suitable for use as nodal coordinates in the fixed-node approximation. In particular, we show that nodal surfaces defined in terms of the curvilinear normal mode coordinates are reasonable for the fundamentals of H_2D^+ and D_2H^+ despite both molecules being highly fluxional.

  5. Quantum Monte Carlo study of quasi-one-dimensional Bose gases

    NASA Astrophysics Data System (ADS)

    Astrakharchik, G. E.; Blume, D.; Giorgini, S.; Granger, B. E.

    2004-04-01

    We study the behaviour of quasi-one-dimensional (quasi-1D) Bose gases by Monte Carlo techniques, i.e. by the variational Monte Carlo, the diffusion Monte Carlo and the fixed-node diffusion Monte Carlo techniques. Our calculations confirm and extend our results of an earlier study (Astrakharchik et al 2003 Preprint cond-mat/0308585). We find that a quasi-1D Bose gas (i) is well described by a 1D model Hamiltonian with contact interactions and renormalized coupling constant; (ii) reaches the Tonks-Girardeau regime for a critical value of the 3D scattering length a_3D; (iii) enters a unitary regime for |a_3D| → ∞, where the properties of the gas are independent of a_3D and are similar to those of a 1D gas of hard rods; and (iv) becomes unstable against cluster formation for a critical value of the 1D gas parameter. The accuracy and implications of our results are discussed in detail.

  6. The simulation of radar and coherent backscattering with the Monte Carlo model MYSTIC

    NASA Astrophysics Data System (ADS)

    Pause, Christian; Buras, Robert; Emde, Claudia; Mayer, Bernhard

    2013-05-01

    A new method to simulate radar and coherent backscattering within the framework of the 3D Monte Carlo radiative transfer model MYSTIC has been developed. The radar simulation builds on the already existing lidar simulator; the larger part of this paper therefore presents a method for simulating coherent backscattering and shows a comparison with a real case.

  7. Source description and sampling techniques in PEREGRINE Monte Carlo calculations of dose distributions for radiation oncology

    SciTech Connect

    Schach von Wittenau, A.E.; Cox, L.J.; Bergstrom, P.H., Jr.; Chandler, W.P.; Hartmann-Siantar, C.L.; Hornstein, S.M.

    1997-10-31

    We outline the techniques used within PEREGRINE, a 3D Monte Carlo code calculation system, to model the photon output from medical accelerators. We discuss the methods used to reduce the phase-space data to a form that is accurately and efficiently sampled.

  8. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the GEANT4 Monte Carlo code

    SciTech Connect

    Guan, Fada; Peeler, Christopher; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Mohan, Radhe; Titt, Uwe; Bronk, Lawrence; Geng, Changran; Grosshans, David

    2015-11-15

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from GEANT4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LET_t, and dose-averaged LET, LET_d) using GEANT4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT4 can result in incorrect LET_d calculation results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm to

  9. Measurement comparison and Monte Carlo analysis for volumetric-modulated arc therapy (VMAT) delivery verification using the ArcCHECK dosimetry system.

    PubMed

    Lin, Mu-Han; Koren, Sion; Veltchev, Iavor; Li, Jinsheng; Wang, Lu; Price, Robert A; Ma, C-M

    2013-01-01

    The objective of this study is to validate the capabilities of a cylindrical diode array system for volumetric-modulated arc therapy (VMAT) treatment quality assurance (QA). The VMAT plans were generated by the Eclipse treatment planning system (TPS) with the analytical anisotropic algorithm (AAA) for dose calculation. An in-house Monte Carlo (MC) code was utilized as a validation tool for the TPS calculations and the ArcCHECK measurements. The megavoltage computed tomography (MVCT) of the ArcCHECK system was adopted for the geometry reconstruction in the TPS and for MC simulations. A 10 × 10 cm^2 open field validation was performed for both the 6 and 10 MV photon beams to validate the absolute dose calibration of the ArcCHECK system and also the TPS dose calculations for this system. The impact of the angular dependency on noncoplanar deliveries was investigated with a series of 10 × 10 cm^2 fields delivered with couch rotations of 0° to 40°. The sensitivity for detecting translational (1 to 10 mm) and rotational (1° to 3°) misalignments was tested with a breast VMAT case. Ten VMAT plans (six prostate, H&N, pelvis, liver, and breast) were investigated to evaluate the agreement of the target dose and the peripheral dose among ArcCHECK measurements and TPS and MC dose calculations. A customized acrylic plug holding an ion chamber was used to measure the dose at the center of the ArcCHECK phantom. Both the entrance and the exit doses measured by the ArcCHECK system with and without the plug agreed with the MC simulation to within 1.0%. The TPS dose calculation with a 2.5 mm grid overestimated the exit dose by up to 7.2% when the plug was removed. The agreement between the MC and TPS calculations for the ArcCHECK without the plug improved significantly when a 1 mm dose calculation grid was used in the TPS. The noncoplanar delivery test demonstrated that the angular dependency has limited impact on the gamma passing rate (< 1.2% drop) for the 2%-3% dose and 2 mm-3 mm
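
    The gamma passing rates quoted above rest on a standard metric that combines dose difference with distance-to-agreement; a minimal 1D version (not the ArcCHECK vendor implementation) might look like:

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, coords, dose_tol=0.03, dist_tol=3.0):
    """1D global gamma: for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over the evaluated curve."""
    d_max = dose_ref.max()
    gammas = np.empty(len(coords))
    for i, (x, d) in enumerate(zip(coords, dose_ref)):
        dd = (dose_eval - d) / (dose_tol * d_max)
        dx = (coords - x) / dist_tol
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gammas

x = np.linspace(0.0, 100.0, 201)                      # positions [mm]
ref = np.exp(-((x - 50.0) / 20.0) ** 2)               # reference profile
meas = np.exp(-((x - 51.0) / 20.0) ** 2) * 1.01       # shifted, rescaled "measurement"
g = gamma_index(ref, meas, x)
print(f"gamma passing rate (3%/3 mm): {100.0 * (g <= 1.0).mean():.1f}%")
```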

  10. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  11. A dual resolution measurement based Monte Carlo simulation technique for detailed dose analysis of small volume organs in the skull base region

    NASA Astrophysics Data System (ADS)

    Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi

    2014-11-01

    The purpose of this study was to examine the dose distribution of a skull base tumor and surrounding critical structures in response to high dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV) = 8.4 cm^3] near the right 8th cranial nerve. The phantom, which consisted of a 1.2-cm thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm^3 and was sandwiched in between 0.05×0.05×0.3 cm^3 slices of a head phantom. A coarser 0.2×0.2×0.3 cm^3 single resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10^8 for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (V_PTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular canal. Dose

  12. A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2014-01-01

    The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high-charge-and-energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and provides a basis for personal computer software capable of space shield analysis and optimization.

  13. Quantum Monte Carlo calculations of light nuclei.

    SciTech Connect

    Pieper, S. C.; Physics

    2008-01-01

    Variational Monte Carlo and Green's function Monte Carlo are powerful tools for calculations of properties of light nuclei using realistic two-nucleon (NN) and three-nucleon (NNN) potentials. Recently the GFMC method has been extended to multiple states with the same quantum numbers. The combination of the Argonne v18 two-nucleon and Illinois-2 three-nucleon potentials gives a good prediction of many energies of nuclei up to 12C. A number of other recent results are presented: comparison of binding energies with those obtained by the no-core shell model; the incompatibility of modern nuclear Hamiltonians with a bound tetra-neutron; difficulties in computing RMS radii of very weakly bound nuclei, such as 6He; center-of-mass effects on spectroscopic factors; and the possible use of an artificial external well in calculations of neutron-rich isotopes.

  14. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

  15. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the value of validation by Kovats retention indices and the root mean square error of prediction decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc. PMID:26226927

  16. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible. PMID:26233116

  17. Quantum Monte Carlo calculations for carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Luu, Thomas; Lähde, Timo A.

    2016-04-01

    We show how lattice quantum Monte Carlo can be applied to the electronic properties of carbon nanotubes in the presence of strong electron-electron correlations. We employ the path-integral formalism and use methods developed within the lattice QCD community for our numerical work. Our lattice Hamiltonian is closely related to the hexagonal Hubbard model augmented by a long-range electron-electron interaction. We apply our method to the single-quasiparticle spectrum of the (3,3) armchair nanotube configuration, and consider the effects of strong electron-electron correlations. Our approach is equally applicable to other nanotubes, as well as to other carbon nanostructures. We benchmark our Monte Carlo calculations against the two- and four-site Hubbard models, where a direct numerical solution is feasible.
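
    The two-site benchmark mentioned at the end is small enough to solve directly, which is what makes it useful for validating a lattice Monte Carlo code. A minimal exact-diagonalization check in Python, restricted to the S_z = 0 half-filling sector (basis ordering and sign conventions are our own choices):

      import numpy as np

      def two_site_hubbard(t=1.0, U=4.0):
          # Basis: |up,dn>, |dn,up>, |updn,0>, |0,updn>
          H = np.array([[0.0, 0.0,  -t,  -t],
                        [0.0, 0.0,   t,   t],
                        [ -t,   t,   U, 0.0],
                        [ -t,   t, 0.0,   U]])
          return np.linalg.eigvalsh(H)

      t, U = 1.0, 4.0
      E = two_site_hubbard(t, U)
      print(E)  # ascending: ground state, triplet at 0, U, upper singlet
      print(E[0], 0.5 * (U - np.sqrt(U**2 + 16 * t**2)))  # matches closed form

    The lowest eigenvalue reproduces the textbook result E_0 = (U - sqrt(U^2 + 16 t^2))/2, the kind of exact reference value against which the path-integral results can be benchmarked.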

  18. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a résumé of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.
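
    As a flavor of what a variance-reduction tutorial covers, here is a minimal importance-sampling example, written for illustration rather than taken from the papers: estimating the integral of exp(-10x) over [0, 1], where uniform ("analog") sampling wastes most points where the integrand is negligible.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      exact = (1.0 - np.exp(-10.0)) / 10.0

      # Analog estimate: sample x uniformly on [0, 1].
      x = rng.random(n)
      naive = np.exp(-10.0 * x)

      # Importance sampling: draw x from p(x) = 10 exp(-10x) / (1 - exp(-10))
      # by inverting its CDF, then weight by integrand/p. Here the weight is
      # constant, so the variance collapses to (floating-point) zero.
      u = rng.random(n)
      x_imp = -np.log(1.0 - u * (1.0 - np.exp(-10.0))) / 10.0
      w = np.exp(-10.0 * x_imp) * (1.0 - np.exp(-10.0)) / (10.0 * np.exp(-10.0 * x_imp))

      print(exact, naive.mean(), naive.std() / np.sqrt(n))
      print(w.mean(), w.std() / np.sqrt(n))

    The sampling density here is exactly proportional to the integrand, the zero-variance limit; in realistic transport problems the importance function is only approximate, and techniques such as splitting and Russian roulette take over.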

  19. Monte Carlo Methods in the Physical Sciences

    SciTech Connect

    Kalos, M H

    2007-06-06

    I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.
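
    The first point, that a model stochastic process maps almost line for line into code, is easy to make concrete. A deliberately simple analog-transport sketch in Python (slab thickness, cross section, and absorption probability are arbitrary illustrative values):

      import numpy as np

      rng = np.random.default_rng(0)
      L, sigma_t, p_absorb, n = 2.0, 1.0, 0.5, 100_000
      transmitted = 0
      for _ in range(n):
          x, mu = 0.0, 1.0  # particle enters the slab moving forward
          while True:
              # exponential free flight along the current direction
              x += -np.log(1.0 - rng.random()) / sigma_t * mu
              if x >= L:
                  transmitted += 1  # escaped through the far face
                  break
              if x < 0.0 or rng.random() < p_absorb:
                  break             # leaked back out or was absorbed
              mu = 2.0 * rng.random() - 1.0  # isotropic scattering
      print(transmitted / n)  # Monte Carlo estimate of the transmission

    Each line mirrors a physical event, an exponential free flight, an absorption, a scatter, which is exactly the faithful transformation of a stochastic process into code that the abstract describes.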

  20. Fast Lattice Monte Carlo Simulations of Polymers

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Zhang, Pengfei

    2014-03-01

    The recently proposed fast lattice Monte Carlo (FLMC) simulations (with multiple occupancy of lattice sites (MOLS) and Kronecker δ-function interactions) sample configuration space much faster and more thoroughly than both off-lattice molecular simulations (with pair-potential calculations) and conventional lattice Monte Carlo simulations (with self- and mutual-avoiding walk and nearest-neighbor interactions) of polymers.[1] Quantitative coarse-graining of polymeric systems can also be performed using lattice models with MOLS.[2] Here we use several model systems, including polymer melts, solutions, and blends, as well as confined and/or grafted polymers, to demonstrate the great advantages of FLMC simulations in the study of equilibrium properties of polymers.
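
    No code accompanies these records, so the following is only a schematic Python illustration of the two ingredients named in the abstract, multiple occupancy of lattice sites and Kronecker delta-function (same-site) interactions, for a single chain on a periodic cubic lattice; the bond rule, parameters, and observable are our own simplifications:

      import numpy as np

      rng = np.random.default_rng(0)
      Lbox, N, eps, beta = 8, 20, 1.0, 1.0
      moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                        [0, -1, 0], [0, 0, 1], [0, 0, -1]])
      chain = np.array([[i % Lbox, 0, 0] for i in range(N)])  # wrapped initial chain

      def site_energy(pos, skip):
          # Kronecker-delta pair energy: eps per *other* bead on the same site
          same = np.all(chain == pos, axis=1)
          same[skip] = False
          return eps * np.count_nonzero(same)

      def bonded_ok(i, pos):
          # allow bonds of length 0 or 1 lattice units (minimum image)
          for j in (i - 1, i + 1):
              if 0 <= j < N:
                  d = (pos - chain[j]) % Lbox
                  if np.minimum(d, Lbox - d).sum() > 1:
                      return False
          return True

      for _ in range(20_000):
          i = rng.integers(N)
          new = (chain[i] + moves[rng.integers(6)]) % Lbox
          if bonded_ok(i, new):
              dE = site_energy(new, i) - site_energy(chain[i], i)
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  chain[i] = new

      d = (chain[-1] - chain[0]) % Lbox                   # end-to-end vector,
      print(np.sqrt((np.minimum(d, Lbox - d)**2).sum()))  # minimum image

    Because beads may stack on a lattice site, moves are never wasted on hard-core rejections; the delta-function energy penalizes rather than forbids overlap, which is consistent with the sampling advantage over self- and mutual-avoiding walks claimed above.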