Science.gov

Sample records for 3-d monte-carlo analysis

  1. Monte Carlo Reliability Analysis.

    DTIC Science & Technology

    1987-10-01

    to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975. (5) R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life...Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986). (4) E. Cinlar, Introduction

  2. Recovering 3D images of polymeric nanofibers in solution through theoretical analysis and Monte-Carlo simulations of their 2D TEM images.

    PubMed

    Miao, Han; Li, Jianfeng; Chen, Daoyong

    2016-05-18

    Nanofibers are well-known nanomaterials that are promising for many important applications. Since sample preparation for these applications usually starts from a nanofiber solution, characterization of the original conformation of nanofibers in the solution is significant because the conformation remarkably affects the behavior of the nanofibers in the samples. However, this characterization is very difficult with existing methods: light scattering can only roughly evaluate the conformation in solution, while cryo-TEM is laborious, time-consuming, and technically challenging, making it difficult to study a system statistically. Herein we report a novel and reliable method to recover the 3D original image of nanofibers in solution through theoretical analysis and Monte-Carlo simulations of TEM images of the nanofibers. First, six kinds of monodisperse nanofibers with the same composition and inner structure but different contour lengths were prepared by the method developed in our laboratory. Then, each kind of nanofiber deposited on the substrate of the TEM sample was measured by TEM and simultaneously simulated by the Monte Carlo method. By matching the simulation results with the TEM results, we determined information about the nanofibers including their rigidity and the interaction between the nanofibers and the substrate. Furthermore, for each kind of nanofiber, based on this information, 3D images of the nanofibers in solution can be reconstructed, and the average gyration radius and hydrodynamic radius can then be calculated; these were compared with the corresponding experimentally measured values to demonstrate the reliability of the method.

  3. Monte Carlo methods in genetic analysis

    SciTech Connect

    Lin, Shili

    1996-12-31

    Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.
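
    The record above surveys Markov chain Monte Carlo rather than any single algorithm; as a minimal illustration of the core machinery, here is a random-walk Metropolis sampler in Python. The Gaussian proposal, step size, and toy normal target are illustrative assumptions, not taken from the paper.

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0):
    """Random-walk Metropolis sampler for a 1-D target density."""
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)       # symmetric proposal
        log_alpha = min(0.0, log_target(proposal) - log_target(x))
        if random.random() < math.exp(log_alpha):    # accept-reject step
            x = proposal
        samples.append(x)
    return samples

# Toy target: a standard normal log-density (illustrative only).
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=10000)
print("posterior mean estimate:", sum(chain) / len(chain))
```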

  4. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

    2014-06-01

    Physical analyses of LWR potential performance with regard to fuel utilization require substantial work dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology provide the opportunity to perform this validation on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. To this end, a 3D pin-by-pin model with a large number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at equilibrium cycle, using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. In these conditions, two main subjects are discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.

  5. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented in which daylight-activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further supports the use of daylight as a potential light source for PDT, with which effective treatment depths of about 2 mm can be achieved.

  6. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    SciTech Connect

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  7. Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.

  8. Monte Carlo generators for studies of the 3D structure of the nucleon

    DOE PAGES

    Avakian, Harut; D'Alesio, U.; Murgia, F.

    2015-01-23

    In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte-Carlo generators, accounting for TMD evolution effects and spin-orbit and quark-gluon correlations, will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.

  9. Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units.

    PubMed

    Fang, Qianqian; Boas, David A

    2009-10-26

    We report a parallel Monte Carlo algorithm accelerated by graphics processing units (GPU) for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of the massively parallel threads and low memory latency, this algorithm allows many photons to be simulated simultaneously in a GPU. To further improve the computational efficiency, we explored two parallel random number generators (RNG), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from diffusion theory. The code was implemented in the CUDA programming language and benchmarked under various parameters, such as thread number, selection of RNG, and memory access pattern. With a low-cost graphics card, this algorithm has demonstrated an acceleration ratio above 300, using 1792 parallel threads, over conventional CPU computation. The acceleration ratio drops to 75 when using atomic operations. These results render the GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging.
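
    To make the photon-migration procedure concrete, below is a heavily simplified serial sketch of the same random-walk idea: exponential step lengths, an absorb-or-scatter decision at each interaction, and escape through the surface of a semi-infinite medium. The optical coefficients and the use of isotropic (rather than Henyey-Greenstein) scattering are assumptions for brevity; the paper's GPU parallelism, boundary reflection, and time-resolved tallies are omitted.

```python
import math
import random

MUA, MUS = 0.1, 10.0        # absorption / scattering coefficients, 1/mm (assumed values)
MUT = MUA + MUS

def run_photon():
    """Trace one photon in a semi-infinite medium (z >= 0);
    return the depth of absorption, or None if it escapes the surface."""
    x = y = z = 0.0
    ux, uy, uz = 0.0, 0.0, 1.0                   # launched downward at the surface
    while True:
        s = -math.log(1.0 - random.random()) / MUT   # exponential free path
        x, y, z = x + s * ux, y + s * uy, z + s * uz
        if z < 0.0:
            return None                          # escaped through the surface
        if random.random() < MUA / MUT:
            return z                             # absorbed at depth z
        # isotropic scattering (the paper uses anisotropic phase functions)
        uz = 2.0 * random.random() - 1.0
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(1.0 - uz * uz)
        ux, uy = r * math.cos(phi), r * math.sin(phi)

depths = [d for d in (run_photon() for _ in range(5000)) if d is not None]
print("mean absorption depth (mm):", sum(depths) / len(depths))
```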

  10. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…
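
    The abstract does not give the spreadsheet's internals, but the Monte Carlo side of such an economic risk analysis reduces to sampling uncertain cash-flow inputs and tabulating the resulting net-present-value distribution. A minimal sketch, with entirely hypothetical input distributions and a made-up 10-year project:

```python
import random

def npv(cash_flows, rate):
    """Net present value of a series of yearly cash flows."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate(n_trials=10000):
    results = []
    for _ in range(n_trials):
        capital = -random.triangular(90, 130, 100)   # uncertain initial investment (toy)
        revenue = random.gauss(30, 5)                # uncertain yearly net revenue (toy)
        flows = [capital] + [revenue] * 10           # assumed 10-year project life
        results.append(npv(flows, rate=0.08))
    return results

npvs = simulate()
prob_loss = sum(1 for v in npvs if v < 0) / len(npvs)
print(f"mean NPV = {sum(npvs)/len(npvs):.1f}, P(NPV < 0) = {prob_loss:.2%}")
```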

  11. Monte Carlo Modeling of Thin Film Deposition: Factors that Influence 3D Islands

    SciTech Connect

    Gilmer, G H; Dalla Torre, J; Baumann, F H; Diaz de la Rubia, T

    2002-01-04

    In this paper we discuss the use of atomistic Monte Carlo simulations to predict film microstructure evolution. We discuss physical vapor deposition, and are primarily concerned with films that are formed by the nucleation and coalescence of 3D islands. Multi-scale modeling is used in the sense that information obtained from molecular dynamics and first principles calculations provide atomic interaction energies, surface and grain boundary properties and diffusion rates for use in the Monte Carlo model. In this paper, we discuss some fundamental issues associated with thin film formation, together with an assessment of the sensitivity of the film morphology to the deposition conditions and materials properties.

  12. PEGASUS. 3D Direct Simulation Monte Carlo Code Which Solves for Geometries

    SciTech Connect

    Bartel, T.J.

    1998-12-01

    Pegasus is a 3D Direct Simulation Monte Carlo code that solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus, as well as a real vacuum pump model. The code includes multiple-species transport.

  13. 3D Direct Simulation Monte Carlo Code Which Solves for Geometries

    SciTech Connect

    Bartel, Timothy J.

    1998-01-13

    Pegasus is a 3D Direct Simulation Monte Carlo code that solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus, as well as a real vacuum pump model. The code includes multiple-species transport.

  14. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  15. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    SciTech Connect

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)

  16. Bayesian phylogeny analysis via stochastic approximation Monte Carlo.

    PubMed

    Cheon, Sooyoung; Liang, Faming

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time.

  17. 3D dose distribution calculation in a voxelized human phantom by means of Monte Carlo method.

    PubMed

    Abella, V; Miró, R; Juste, B; Verdú, G

    2010-01-01

    The aim of this work is to provide the reconstruction of a real human voxelized phantom by means of a MatLab program and the simulation of the irradiation of such a phantom with the photon beam generated in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. The project results in 3D dose mapping calculations inside the voxelized anthropomorphic head phantom. The program performs the voxelization by first processing the CT slices; the process follows a two-dimensional pixel and material identification algorithm on each slice and three-dimensional interpolation in order to describe the phantom geometry via small cubic cells, resulting in an MCNP input deck format output. Dose rates are calculated using the MCNP5 tool FMESH, a superimposed mesh tally, which gives the track length estimation of the particle flux in units of particles/cm(2). Furthermore, the particle flux is converted into dose by using the conversion coefficients extracted from the NIST Physical Reference Data. The voxelization using a three-dimensional interpolation technique in combination with the use of the FMESH tool of the MCNP Monte Carlo code offers an optimal simulation which results in 3D dose mapping calculations inside anthropomorphic phantoms. This tool is very useful in radiation treatment assessments, in which voxelized phantoms are widely utilized. Copyright 2009 Elsevier Ltd. All rights reserved.
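
    The pixel-and-material identification step described above can be sketched compactly: map Hounsfield-unit ranges to material IDs voxel by voxel, as a precursor to writing an MCNP input deck. The thresholds below are hypothetical placeholders, and the deck-writing itself is omitted:

```python
import numpy as np

# Hypothetical HU-to-material thresholds; real boundaries depend on calibration.
MATERIALS = [(-1000.0, -200.0, 1),   # 1 = air
             ( -200.0,  200.0, 2),   # 2 = soft tissue
             (  200.0, 3000.0, 3)]   # 3 = bone

def voxelize(ct_volume):
    """Map a (nz, ny, nx) array of Hounsfield units to integer material IDs."""
    ids = np.zeros(ct_volume.shape, dtype=np.int8)
    for lo, hi, mat in MATERIALS:
        ids[(ct_volume >= lo) & (ct_volume < hi)] = mat
    return ids

# Toy CT volume: a bone sphere inside soft tissue.
z, y, x = np.mgrid[-20:20, -20:20, -20:20]
ct = np.where(x**2 + y**2 + z**2 < 100, 1200.0, 50.0)
ids = voxelize(ct)
print(np.bincount(ids.ravel()))   # voxel counts per material, e.g. for a deck writer
```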

  18. Full Configuration Interaction Quantum Monte Carlo and Diffusion Monte Carlo: A Comparative Study of the 3D Homogeneous Electron Gas

    NASA Astrophysics Data System (ADS)

    Shepherd, James J.; López Ríos, Pablo; Needs, Richard J.; Drummond, Neil D.; Mohr, Jennifer A.-F.; Booth, George H.; Grüneis, Andreas; Kresse, Georg; Alavi, Ali

    2013-03-01

    Full configuration interaction quantum Monte Carlo [1] (FCIQMC) and its initiator adaptation [2] allow for exact solutions to the Schrödinger equation to be obtained within a finite-basis wavefunction ansatz. In this talk, we explore an application of FCIQMC to the homogeneous electron gas (HEG). In particular we use these exact finite-basis energies to compare with approximate quantum chemical calculations from the VASP code [3]. After removing the basis set incompleteness error by extrapolation [4,5], we compare our energies with state-of-the-art diffusion Monte Carlo calculations from the CASINO package [6]. Using a combined approach of the two quantum Monte Carlo methods, we present the highest-accuracy thermodynamic (infinite-particle) limit energies for the HEG achieved to date. [1] G. H. Booth, A. Thom, and A. Alavi, J. Chem. Phys. 131, 054106 (2009). [2] D. Cleland, G. H. Booth, and A. Alavi, J. Chem. Phys. 132, 041103 (2010). [3] www.vasp.at (2012). [4] J. J. Shepherd, A. Grüneis, G. H. Booth, G. Kresse, and A. Alavi, Phys. Rev. B 86, 035111 (2012). [5] J. J. Shepherd, G. H. Booth, and A. Alavi, J. Chem. Phys. 136, 244101 (2012). [6] R. Needs, M. Towler, N. Drummond, and P. L. Ríos, J. Phys.: Condens. Matter 22, 023201 (2010).

  19. 3-D Direct Simulation Monte Carlo modeling of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Su, C.; Finklenburg, S.; Rubin, M.; Ip, W.; Keller, H.; Knollenberg, J.; Kührt, E.; Lai, I.; Skorov, Y.; Thomas, N.; Wu, J.; Chen, Y.

    2014-07-01

    After deep-space hibernation, ESA's Rosetta spacecraft was successfully woken up and obtained the first images of comet 67P/Churyumov-Gerasimenko (C-G) in March 2014. It is expected that Rosetta will rendezvous with comet 67P and start to observe the nucleus and coma of the comet in the middle of 2014. As the comet approaches the Sun, a significant increase in activity is expected. Our aim is to understand the physical processes in the coma with the help of modeling in order to interpret the resulting measurements and establish observational and data analysis strategies. DSMC (Direct Simulation Monte Carlo) [1] is a very powerful numerical method to study rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow [2,3]. Comparisons between DSMC and fluid techniques have also been performed to establish the limits of these techniques [2,4]. The drawback with 3D DSMC is that it is computationally highly intensive and thus time consuming. However, the performance can be dramatically increased with parallel computing on Graphics Processing Units (GPUs) [5]. We have already studied a case with comet 9P/Tempel 1, where the Deep Impact observations were used to define the shape of the nucleus and the outflow was simulated with the DSMC approach [6,7]. For comet 67P, we intend to determine the gas flow field in the innermost coma and the surface outgassing properties from analyses of the flow field, to investigate dust acceleration by gas drag, and to compare with observations (including time variability). The boundary conditions are implemented with a nucleus shape model [8] and thermal models based on the surface heat-balance equation. Several different parameter sets have been investigated. The calculations have been performed using the PDSC++ (Parallel Direct Simulation Monte Carlo) code [9] developed by Wu and his coworkers [10-12]. Simulation tasks can be accomplished within 24

  20. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
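
    TRAM's internals are not published in this abstract; a crude stand-in for "pointing the analyst at failure-driving variables" is to rank each Monte Carlo input by how far its distribution in failing runs shifts from that in passing runs. The effect-size score and toy data below are my own assumptions, not TRAM's algorithm:

```python
import random
import statistics

def driving_variables(runs, failed):
    """Rank input variables by how strongly their failing-run distribution
    shifts away from the passing-run distribution (a crude effect size)."""
    scores = {}
    for name in runs[0]:
        fail = [r[name] for r, f in zip(runs, failed) if f]
        ok = [r[name] for r, f in zip(runs, failed) if not f]
        spread = statistics.pstdev(fail + ok) or 1.0
        scores[name] = abs(statistics.mean(fail) - statistics.mean(ok)) / spread
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy Monte Carlo set: failures driven by 'wind', not 'mass'.
runs = [{"wind": random.gauss(0, 1), "mass": random.gauss(0, 1)} for _ in range(5000)]
failed = [r["wind"] > 1.5 for r in runs]
print(driving_variables(runs, failed))   # 'wind' should rank first
```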

  1. A graphical user interface for calculation of 3D dose distribution using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Chow, J. C. L.; Leung, M. K. K.

    2008-02-01

    A software graphical user interface (GUI) for calculation of 3D dose distribution using Monte Carlo (MC) simulation has been developed using MATLAB. This GUI (DOSCTP) provides a user-friendly platform for DICOM CT-based dose calculation using the EGSnrcMP-based DOSXYZnrc code. It offers numerous features not found in DOSXYZnrc, such as the ability to use multiple beams from different phase-space files, and has built-in dose analysis and visualization tools. DOSCTP is written completely in MATLAB, with integrated access to DOSXYZnrc and CTCREATE. The program function may be divided into four subgroups, namely beam placement, MC simulation with DOSXYZnrc, dose visualization, and export, each controlled by separate routines. The verification of DOSCTP was carried out by comparing plans with different beam arrangements (multi-beam/photon arc) on an inhomogeneous phantom as well as patient CT between the GUI and Pinnacle3. DOSCTP was developed and verified with the following features: (1) a built-in voxel editor to modify CT-based DOSXYZnrc phantoms for research purposes; (2) multi-beam placement, which cannot be achieved using the current DOSXYZnrc code; (3) export of the treatment plan, including the dose distributions, contours and image set, to a commercial treatment planning system such as Pinnacle3 or to CERR using the RTOG format for plan evaluation and comparison; (4) a built-in RTOG-compatible dose reviewer for dose visualization and analysis, such as finding the volume of hot/cold spots in the 3D dose distributions based on a user threshold. DOSCTP greatly simplifies the use of DOSXYZnrc and CTCREATE, and offers numerous features not found in the original user code. Moreover, since phase-space beams can be defined and generated by the user, it is a particularly useful tool for carrying out plans using specifically designed irradiators/accelerators that cannot be found in the Linac library of commercial treatment planning systems.

  2. Markov chain Monte Carlo linkage analysis of complex quantitative phenotypes.

    PubMed

    Hinrichs, A; Reich, T

    2001-01-01

    We report a Markov chain Monte Carlo analysis of the five simulated quantitative traits in Genetic Analysis Workshop 12 using the Loki software. Our objectives were to determine the efficacy of the Markov chain Monte Carlo method and to test a new scoring technique. Our initial blind analysis, on replicate 42 (the "best replicate"), successfully detected four of the five disease loci and found no false positives. A power analysis shows that the software could usually detect 4 of the 10 trait/gene combinations at an empirical point-wise p-value of 1.5 x 10(-4).

  3. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

    SciTech Connect

    Cullen, D.E.

    1997-11-22

    TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. The code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast; if you have used similar codes, you will be amazed at how fast this code is compared to them. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  4. 3D Monte Carlo model with direct photon flux recording for optimal optogenetic light delivery

    NASA Astrophysics Data System (ADS)

    Shin, Younghoon; Kim, Dongmok; Lee, Jihoon; Kwon, Hyuk-Sang

    2017-02-01

    Configuring the light power emitted from the optical fiber is an essential first step in planning in-vivo optogenetic experiments. However, the diffusion theory adopted in optogenetic research precludes accurate estimates of light intensity in the semi-diffusive region where the primary locus of the stimulation is located. We present a 3D Monte Carlo model that provides an accurate and direct solution for light distribution in this region. Our method directly records the photon trajectories in separate volumetric grid planes to gain near-source recording efficiency, and it incorporates a 3D brain mesh to support both homogeneous and heterogeneous brain tissue. We investigated the light emitted from optical fibers in brain tissue in 3D, and we applied the results to design optimal light delivery parameters for precise optogenetic manipulation, considering the fiber output power, wavelength, fiber-to-target distance, and the area of neural tissue activation.

  5. A Monte Carlo method for combined segregation and linkage analysis

    SciTech Connect

    Guo, S.W. ); Thompson, E.A. )

    1992-11-01

    The authors introduce a Monte Carlo approach to combined segregation and linkage analysis of a quantitative trait observed in an extended pedigree. In conjunction with the Monte Carlo method of likelihood-ratio evaluation proposed by Thompson and Guo, the method provides for estimation and hypothesis testing. The greatest attraction of this approach is its ability to handle complex genetic models and large pedigrees. Two examples illustrate the practicality of the method. One is of simulated data on a large pedigree; the other is a reanalysis of published data previously analyzed by other methods. 40 refs, 5 figs., 5 tabs.

  6. Development, validation, and implementation of a patient-specific Monte Carlo 3D internal dosimetry platform

    NASA Astrophysics Data System (ADS)

    Besemer, Abigail E.

    Targeted radionuclide therapy is emerging as an attractive treatment option for a broad spectrum of tumor types because it has the potential to simultaneously eradicate both the primary tumor site and the metastatic disease throughout the body. Patient-specific absorbed dose calculations for radionuclide therapies are important for reducing the risk of normal tissue complications and optimizing tumor response. However, the only FDA-approved software for internal dosimetry calculates doses based on the MIRD methodology, which estimates mean organ doses using activity-to-dose scaling factors tabulated from standard phantom geometries. Despite the improved dosimetric accuracy afforded by direct Monte Carlo dosimetry methods, these methods are not widely used in routine clinical practice because of the complexity of implementation, lack of relevant standard protocols, and longer dose calculation times. The main goal of this work was to develop a Monte Carlo internal dosimetry platform in order to (1) calculate patient-specific voxelized dose distributions in a clinically feasible time frame, (2) examine and quantify the dosimetric impact of various parameters and methodologies used in 3D internal dosimetry methods, and (3) develop a multi-criteria treatment planning optimization framework for multi-radiopharmaceutical combination therapies. This platform utilizes serial PET/CT or SPECT/CT images to calculate voxelized 3D internal dose distributions with the Monte Carlo code Geant4. Dosimetry can be computed for any diagnostic or therapeutic radiopharmaceutical and for both pre-clinical and clinical applications. In this work, the platform's dosimetry calculations were successfully validated against previously published reference dose values calculated in standard phantoms for a variety of radionuclides, over a wide range of photon and electron energies, and for many different organ and tumor sizes. Retrospective dosimetry was also calculated for various pre

  7. Photon propagation correction in 3D photoacoustic image reconstruction using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Cheong, Yaw Jye; Stantz, Keith M.

    2010-02-01

    Purpose: The purpose of this study is to develop a new 3-D iterative Monte Carlo algorithm to recover the heterogeneous distribution of molecular absorbers within a solid tumor. Introduction: Spectroscopic imaging (PCT-S) has the potential to identify a molecular species and quantify its concentration with high spatial fidelity. To accomplish this task, accounting for tissue attenuation losses during photon propagation in heterogeneous 3D objects is necessary. An iterative recovery algorithm has been developed to extract 3D heterogeneous parametric maps of absorption coefficients, implementing a MC algorithm based on a single-source photoacoustic scanner, and to determine the influence of the reduced scattering coefficient on the uncertainty of the recovered absorption coefficient. Material and Methods: The algorithm is tested for spheres and ellipsoids embedded in a simulated mouse torso with optical absorption values ranging from 0.01-0.5/cm, for the same objects where the optical scattering is unknown (μs' = 7-13/cm), and for a heterogeneous distribution of absorbers. Results: Systematic and statistical errors in μa with a priori knowledge of μs' and g are <2% (sphere) and <4% (ellipsoid) for all μa, and without a priori knowledge of μs' are <3% and <6%. For heterogeneous distributions of μa, errors are <4% and <5.5% for each object with a priori knowledge of μs' and g, rising to 7% and 14% when μs' varied from 7-13/cm. Conclusions: A Monte Carlo code has been successfully developed and used to correct for photon propagation effects in simulated objects consistent with tumors.

  8. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  9. Venus resurfacing rates: Constraints provided by 3-D Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Bullock, M. A.; Grinspoon, D. H.; Head, J. W.

    1993-01-01

    A 3-D Monte Carlo model that simulates the evolving surface of Venus under the influence of a flux of impacting objects and a variety of styles of volcanic resurfacing was implemented. For given rates of impact events and resurfacing, the model predicts the size-frequency and areal distributions of surviving impact craters as a function of time. The number of craters partially modified by volcanic events is also calculated as the surface evolves. It was found that a constant, global resurfacing rate of approximately 0.4 km³/yr is required to explain the observed distributions of both the entire crater population and the population of craters partially modified by volcanic processes.
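
    A much-reduced 2D analogue of such a cratering-versus-resurfacing simulation is sketched below: craters accumulate at a fixed rate while volcanic flows of fixed size bury any crater they cover, and the surviving population equilibrates. All rates and sizes are arbitrary toy values, and partial crater modification, which the paper also tracks, is ignored:

```python
import math
import random

CRATERS_PER_STEP = 5      # impact flux per time step (toy value)
FLOWS_PER_STEP = 10       # volcanic events per time step (toy value)
PATCH_RADIUS = 0.05       # radius of each volcanic flow on a unit-square surface

craters = []              # (x, y) positions of surviving craters
for step in range(2000):
    # impacts add craters at random positions
    craters += [(random.random(), random.random()) for _ in range(CRATERS_PER_STEP)]
    # each volcanic flow buries every crater within its radius
    for _ in range(FLOWS_PER_STEP):
        cx, cy = random.random(), random.random()
        craters = [(x, y) for x, y in craters
                   if math.hypot(x - cx, y - cy) > PATCH_RADIUS]

print("equilibrium crater count:", len(craters))
```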

  10. Kinetic Monte Carlo simulation of 3-D growth of NiTi alloy thin films

    NASA Astrophysics Data System (ADS)

    Zhu, Yiguo; Pan, Xi

    2014-12-01

    In this paper, a 3-D Monte Carlo model for NiTi alloy thin film growth on a square-lattice substrate is presented. The model is based on a description of the phenomenon in terms of adsorption, diffusion, and re-evaporation of different atoms on the substrate surface. A multi-body NiTi potential is used to calculate the diffusion activation energy, which depends on the types of the atoms involved and equals the total energy change of the system before and after the diffusion event. The simulations serve to investigate the role of diffusion in determining the microstructure of the alloy clusters. The effects of the substrate temperature and the deposition rate on the morphology of the islands are also presented. The island size distribution and roughness evolution have been computed and compared with our experimental results.
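
    The event mechanics of such a kinetic Monte Carlo model can be illustrated with a standard rate-catalogue step: Arrhenius rates for thermally activated hops, rejection-free event selection in proportion to rate, and an exponentially distributed time increment. The barriers, attempt frequency, and temperature below are assumed placeholder values, not the paper's NiTi parameters:

```python
import math
import random

KB = 8.617e-5        # Boltzmann constant, eV/K
T = 500.0            # substrate temperature, K (assumed)
NU = 1e13            # attempt frequency, 1/s (assumed)

def arrhenius(e_act):
    """Thermally activated rate for a process with barrier e_act (eV)."""
    return NU * math.exp(-e_act / (KB * T))

def kmc_step(events):
    """Pick one event with probability proportional to its rate and
    return it together with the exponential time increment."""
    total = sum(rate for _, rate in events)
    r = random.random() * total
    acc = 0.0
    for name, rate in events:
        acc += rate
        if r <= acc:
            dt = -math.log(1.0 - random.random()) / total
            return name, dt

# Toy event table with hypothetical barriers (eV) and a constant deposition rate.
events = [("Ni hop", arrhenius(0.45)), ("Ti hop", arrhenius(0.60)),
          ("re-evaporation", arrhenius(1.50)), ("deposition", 1e2)]
print(kmc_step(events))
```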

  11. Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems

    SciTech Connect

    Martinez, E.; Monasterio, P.R.; Marian, J.

    2011-02-20

    An extension of the synchronous parallel kinetic Monte Carlo (spkMC) algorithm developed by Martinez et al. [J. Comp. Phys. 227 (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are resolved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of serial calculations, which confirms the statistical validity of our algorithm. We have analyzed the parallel efficiency of spkMC and find that it scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, with very good agreement with state-of-the-art multispin simulations.
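
    The chessboard (sublattice) decomposition exploits the fact that same-colour sites of a nearest-neighbour Ising lattice do not interact, so a whole sublattice can be updated at once. A serial 2D Metropolis sketch of that idea, standing in for the paper's parallel 3D kinetic Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

def checkerboard_sweep(spins, beta):
    """One Metropolis sweep updating the two chessboard sublattices in turn;
    sites within a sublattice do not interact, so each half-sweep could be
    done in parallel (the idea behind the sublattice decomposition)."""
    ii, jj = np.indices(spins.shape)
    for parity in (0, 1):
        mask = (ii + jj) % 2 == parity
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr                     # energy change if flipped (J = 1)
        flip = (rng.random(spins.shape) < np.exp(-beta * dE)) & mask
        spins[flip] *= -1
    return spins

L = 64
spins = rng.choice([-1, 1], size=(L, L)).astype(np.int8)
for _ in range(200):
    checkerboard_sweep(spins, beta=0.5)            # beta > beta_c ~ 0.4407 in 2D
print("magnetization per spin:", spins.mean())
```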

  12. Monte-Carlo-based inversion scheme for 3D quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Kaplan, Bernhard A.; Buchmann, Jens; Prohaska, Steffen; Laufer, Jan

    2017-03-01

    The goal of quantitative photoacoustic tomography (qPAT) is to recover maps of the chromophore distributions from multiwavelength images of the initial pressure. Model-based inversions that incorporate the physical processes underlying photoacoustic (PA) signal generation represent a promising approach. Monte-Carlo models of the light transport are computationally expensive, but provide accurate predictions of the fluence distribution, especially in the ballistic and quasi-ballistic regimes. Here, we focus on the inverse problem of 3D qPAT of blood oxygenation and investigate the application of the Monte-Carlo method in a model-based inversion scheme. A forward model of the light transport based on the MCX simulator and acoustic propagation modeled by the k-Wave toolbox was used to generate a PA image data set acquired in a tissue phantom over a planar detection geometry. The combination of the optical and acoustic models is shown to account for limited-view artifacts. In addition, the errors in the fluence due to, for example, partial volume artifacts and absorbers immediately adjacent to the region of interest are investigated. To accomplish large-scale inversions in 3D, the number of degrees of freedom is reduced by applying image segmentation to the initial pressure distribution to extract a limited number of regions with homogeneous optical parameters. The absorber concentration in the tissue phantom was estimated using a coordinate-descent parameter search based on the comparison between measured and modeled PA spectra. The estimated relative concentrations using this approach lie within 5% of the known concentrations. Finally, we discuss the feasibility of this approach to recover blood oxygenation from experimental data.

  13. OptogenSIM: a 3D Monte Carlo simulation platform for light delivery design in optogenetics

    PubMed Central

    Liu, Yuming; Jacques, Steven L.; Azimipour, Mehdi; Rogers, Jeremy D.; Pashaie, Ramin; Eliceiri, Kevin W.

    2015-01-01

    Optimizing light delivery for optogenetics is critical in order to accurately stimulate the neurons of interest while reducing nonspecific effects such as tissue heating or photodamage. Light distribution is typically predicted under the assumption of tissue homogeneity, which oversimplifies light transport in the heterogeneous brain. Here, we present an open-source 3D simulation platform, OptogenSIM, which eliminates this assumption. This platform integrates a voxel-based 3D Monte Carlo model, generic optical property models of brain tissues, and a well-defined 3D mouse brain tissue atlas. Application of this platform to brain data models demonstrates that brain heterogeneity has a moderate to significant impact depending on the application conditions. Estimated light density contours can show the region of any specified power density in the 3D brain space and can thus help optimize light delivery settings such as the optical fiber position, fiber diameter, fiber numerical aperture, light wavelength, and power. OptogenSIM is freely available and can be easily adapted to incorporate additional brain atlases. PMID:26713200

  14. 3D electro-thermal Monte Carlo study of transport in confined silicon devices

    NASA Astrophysics Data System (ADS)

    Mohamed, Mohamed Y.

    The simultaneous explosion of portable microelectronics devices and the rapid shrinking of microprocessor size have provided a tremendous motivation to scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less probable and practical. As new technologies overcome these limitations, they face new, unexpected problems, including the ability to accurately simulate and predict the behavior of these devices, and to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum correction becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering. The scattering rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature dependent scattering rates. One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non

  15. Monte Carlo analysis of satellite debris footprint dispersion

    NASA Technical Reports Server (NTRS)

    Rao, P. P.; Woeste, M. A.

    1979-01-01

    A comprehensive study is performed to investigate satellite debris impact point dispersion using a combination of Monte Carlo statistical analysis and parametric methods. The Monte Carlo technique accounts for nonlinearities in the entry point dispersion, which is represented by a covariance matrix of position and velocity errors. Because downrange distance of impact is a monotonic function of debris ballistic coefficient, a parametric method is useful for determining dispersion boundaries. The scheme is applied in the present analysis to estimate the Skylab footprint dispersions for a controlled reentry. A significant increase in the footprint dispersion is noticed for satellite breakup above a 200,000-ft altitude. A general discussion of the method used for analysis is presented together with some typical results obtained for the Skylab deboost mission, which was designed before NASA abandoned plans for a Skylab controlled reentry.
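
    The Monte Carlo part of such a footprint analysis amounts to drawing entry-state errors from the covariance matrix and pushing each sample through an impact model. Below is a toy downrange-only sketch; the covariance values, the linearized impact model, and the deliberately monotonic dependence on ballistic coefficient are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2x2 covariance of entry errors: downrange position (km) and
# flight-path-angle error (deg); a real analysis carries a full state covariance.
cov = np.array([[25.0, 1.0],
                [ 1.0, 0.04]])
mean = np.zeros(2)

def downrange(entry_err, beta):
    """Toy impact-downrange model: linear in the entry errors and monotonic in
    the ballistic coefficient beta (the property the parametric method uses)."""
    dpos, dgamma = entry_err
    return 2000.0 + dpos + 150.0 * dgamma + 3.0 * np.log(beta)

samples = rng.multivariate_normal(mean, cov, size=20000)
betas = rng.uniform(20.0, 200.0, size=20000)       # debris ballistic coefficients
impacts = downrange(samples.T, betas)
print("footprint 99% bounds (km):", np.percentile(impacts, [0.5, 99.5]))
```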

  16. Self-assembly of ABC triblock copolymers under 3D soft confinement: a Monte Carlo study.

    PubMed

    Yan, Nan; Zhu, Yutian; Jiang, Wei

    2016-01-21

    Under three-dimensional (3D) soft confinement, block copolymers can self-assemble into unique nanostructures that cannot be fabricated in unconfined space. Linear ABC triblock copolymers containing three chemically distinct polymer blocks possess relatively complex chain architecture, which makes them a promising candidate for 3D confined self-assembly. In the current study, the Monte Carlo technique was applied in a lattice model to study the self-assembly of ABC triblock copolymers under 3D soft confinement, which corresponds to the self-assembly of block copolymers confined in emulsion droplets. We demonstrated how to create various nanostructures by tuning the symmetry of the ABC triblock copolymers, the incompatibilities between different block types, and the solvent properties. Besides common pupa-like and bud-like nanostructures, our simulations predicted various unique self-assembled nanostructures, including a striped-pattern nanoparticle with intertwined A-cages and C-cages, a pyramid-like nanoparticle with four Janus B-C lamellae adhered onto its four surfaces, an ellipsoidal nanoparticle with a dumbbell-like A-core and two Janus B-C lamellae and a Janus B-C ring surrounding the A-core, a spherical nanoparticle with an A-core and a helical Janus B-C stripe around the A-core, a cubic nanoparticle with a cube-shaped A-core and six Janus B-C lamellae adhered onto the surfaces of the A-cube, and a spherical nanoparticle with helical A, B and C structures. Moreover, the formation mechanisms of some typical nanostructures were examined through the variation of contact numbers with time and a series of snapshots at different Monte Carlo times. It is found that ABC triblock copolymers usually first aggregate into a loose aggregate, after which microphase separation between the A, B and C blocks occurs, resulting in the formation of various nanostructures.

  17. 3D Monte Carlo simulation of light propagation for laser acupuncture and optimization of illumination parameters

    NASA Astrophysics Data System (ADS)

    Zhong, Fulin; Li, Ting; Pan, Boan; Wang, Pengbo

    2017-02-01

    Laser acupuncture is an effective photochemical and nonthermal stimulation of traditional acupuncture points with low-intensity laser irradiation, which has the advantages of being painless, sterile, and safe compared to traditional acupuncture. A laser diode (LD) provides single-wavelength and relatively high-power light for phototherapy. A quantitative understanding of the LD illumination parameters is crucial for the practical operation of laser acupuncture. However, this issue has not been fully addressed, especially since experimental methodologies with animals or humans are poorly suited to it. For example, in order to protect the viability of cells and tissue and obtain a better therapeutic effect, it is necessary to keep the output power within the 5 mW-10 mW range, yet the optimal power is still not clear. This study aimed to quantitatively optimize the laser output power, wavelength, and irradiation direction with highly realistic modeling of light transport in acupunctured tissue. A Monte Carlo simulation software for 3D voxelized media and the highest-precision human anatomical model, the Visible Chinese Human (VCH), were employed. Our 3D simulation results showed that longer wavelengths and higher illumination powers yield larger absorption in laser acupuncture, and that vertical emission of the acupuncture laser results in a higher amount of light absorbed in both the acupunctured tissue voxel and the muscle layer. Our 3D light distribution of laser acupuncture within the VCH tissue model has the potential to be used for optimization and real-time guidance in the clinical manipulation of laser acupuncture.

  18. 3D range-modulator for scanned particle therapy: development, Monte Carlo simulations and experimental evaluation

    NASA Astrophysics Data System (ADS)

    Simeonov, Yuri; Weber, Uli; Penchev, Petar; Printz Ringbæk, Toke; Schuy, Christoph; Brons, Stephan; Engenhart-Cabillic, Rita; Bliedtner, Jens; Zink, Klemens

    2017-09-01

    The purpose of this work was to design and manufacture a 3D range-modulator for scanned particle therapy. The modulator is intended to create a highly conformal dose distribution with only one fixed energy, while considerably reducing the treatment time. As a proof of concept, a 3D range-modulator was developed for a spherical target volume with a diameter of 5 cm, placed at a depth of 25 cm in a water phantom. It consists of a large number of thin pins with a well-defined shape and different lengths to modulate the necessary shift of the Bragg peak. The 3D range-modulator was manufactured with a rapid prototyping technique. The FLUKA Monte Carlo package was used to simulate the modulating effect of the 3D range-modulator and the resulting dose distribution. For that purpose, a special user routine was implemented to handle its complex geometrical contour. Additionally, FLUKA was extended with the capability of intensity-modulated scanning. To validate the simulation results, dose measurements were carried out at the Heidelberg Ion Beam Therapy Center with a 400.41 MeV/u 12C beam. The high-resolution dosimetric measurements show good agreement between simulated and measured dose distributions. Irradiation of the monoenergetic raster plan took 3 s, approximately 20 times shorter than a comparable plan with 16 different energies. The combination of only one energy and a 3D range-modulator leads to a tremendous decrease in irradiation time. 'Interplay effects', typical for moving targets and pencil beam scanning, can be immensely reduced or disappear completely, making the delivery of a homogeneous dose to moving targets more reliable. Combining high dose conformity, very good homogeneity and extremely short irradiation times, the 3D range-modulator is considered to become a clinically applicable method for very fast treatment of lung tumours.

  19. Improvement of 3d Monte Carlo Localization Using a Depth Camera and Terrestrial Laser Scanner

    NASA Astrophysics Data System (ADS)

    Kanai, S.; Hatakeyama, R.; Date, H.

    2015-05-01

    An effective and accurate localization method in three-dimensional indoor environments is a key requirement for indoor navigation and lifelong robotic assistance. So far, Monte Carlo Localization (MCL) has provided one of the most promising solutions for indoor localization. Previous work on MCL has been mostly limited to 2D motion estimation in a planar map, and the few 3D MCL approaches proposed recently either remain at an unsatisfactory level of accuracy and efficiency (errors of a few hundred millimetres at up to a few FPS) or are not fully verified against precise ground truth. The purpose of this study is therefore to improve the accuracy and efficiency of 6DOF motion estimation in 3D MCL for indoor localization. First, a terrestrial laser scanner is used to create a precise 3D mesh model as an environment map, and a professional-level depth camera is installed as the outer sensor. GPU scene simulation is also introduced to increase the speed of the prediction phase in MCL. For further improvement, GPGPU programming is implemented to speed up the likelihood estimation phase, and anisotropic particle propagation is introduced into MCL based on observations from an inertia sensor. Improvements in localization accuracy and efficiency are verified by comparison with a previous MCL method. As a result, it was confirmed that the GPGPU-based algorithm was effective in increasing the computational efficiency to 10-50 FPS when the number of particles remains below a few hundred. On the other hand, the inertia sensor-based algorithm reduced the localization error to a median of 47 mm even with fewer particles. The results show that our proposed 3D MCL method outperforms the previous one in accuracy and efficiency.
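
    A one-dimensional toy version of the MCL predict-weight-resample cycle is sketched below; the map (a single wall), the noise levels, and the simple multinomial resampling are simplifying assumptions relative to the paper's 6DOF, mesh-based setup:

```python
import math
import random

WALL = 10.0                                  # known map: a wall at x = 10 m (toy 1D world)

def mcl_step(particles, control, z_meas, sigma=0.05):
    """One predict-weight-resample cycle of Monte Carlo Localization in 1D."""
    # predict: apply odometry with process noise
    particles = [x + control + random.gauss(0.0, 0.02) for x in particles]
    # weight: Gaussian likelihood of the measured range to the wall
    weights = [math.exp(-0.5 * ((WALL - x) - z_meas) ** 2 / sigma ** 2)
               for x in particles]
    if sum(weights) == 0.0:
        weights = [1.0] * len(particles)     # degenerate case: keep all hypotheses
    # resample (systematic resampling would have lower variance)
    return random.choices(particles, weights=weights, k=len(particles))

true_x, particles = 2.0, [random.uniform(0, 10) for _ in range(500)]
for _ in range(30):
    true_x += 0.1
    z = WALL - true_x + random.gauss(0.0, 0.05)
    particles = mcl_step(particles, 0.1, z)
print("estimate:", sum(particles) / len(particles), "truth:", true_x)
```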

  20. 3D Visualization of Monte-Carlo Simulation's of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy ion tracks include a very high energy deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to the biomolecules and the tissue matrix, considered the most important lesions for acute and long-term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with the radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy ion biophysics.

  1. BioShell-Threading: versatile Monte Carlo package for protein 3D threading.

    PubMed

    Gniewek, Pawel; Kolinski, Andrzej; Kloczkowski, Andrzej; Gront, Dominik

    2014-01-20

    The comparative modeling approach to protein structure prediction inherently relies on a template structure. Before building a model, such a template protein has to be found and aligned with the query sequence. Any error made at this stage may dramatically affect the quality of the result. There is a need, therefore, to develop accurate and sensitive alignment protocols. The BioShell threading software is a versatile tool for aligning protein structures, protein sequences or sequence profiles, and query sequences to a template structure. The software is also capable of sub-optimal alignment generation. It can be executed as an application from the UNIX command line, or as a set of Java classes called from a script or a Java application. The implemented Monte Carlo search engine greatly facilitates the development and benchmarking of new alignment scoring schemes, even when the functions exhibit non-deterministic polynomial-time complexity. Numerical experiments indicate that the new threading application offers template detection abilities and provides much better alignments than other methods. The package, along with documentation and examples, is available at: http://bioshell.pl/threading3d.

  2. Monte Carlo study of a 3D Compton imaging device with GEANT4

    NASA Astrophysics Data System (ADS)

    Lenti, M.; Veltri, M.

    2011-10-01

    In this paper we investigate, with a detailed Monte Carlo simulation based on Geant4, the novel approach of Lenti (2008) [1] to 3D imaging with photon scattering. A monochromatic and well-collimated gamma beam is used to illuminate the object to be imaged, and the Compton-scattered photons are detected by means of a surrounding germanium strip detector. The impact position and the energy of the photons are measured with high precision, and the scattering position along the beam axis is calculated. We study brain imaging as an application of this technique, but the results apply equally to situations where a lighter object, with localized variations of density, is embedded in a denser container. We report here the attainable sensitivity in the detection of density variations as a function of the beam energy, the depth inside the object, and the size and density of the inclusions. Using a 600 keV gamma beam, for an inclusion with a density increase of 30% with respect to the surrounding tissue and a thickness along the beam of 5 mm, we obtain at midbrain position a resolution of about 2 mm and a contrast of 12%. In addition, the simulation indicates that for the same gamma beam energy a complete brain scan would result in an effective dose of about 1 mSv.
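
    The geometry of this technique is compact enough to sketch: the standard Compton formula gives the scattering angle from the measured incident and scattered energies, and with the beam along the z axis the scatter point follows from the detector hit position. The function names, units (keV, mm), and example numbers below are mine, not from the paper:

```python
import math

ME_C2 = 510.999                     # electron rest energy, keV

def scatter_angle(e0, e1):
    """Compton scattering angle (rad) from incident (e0) and scattered (e1)
    photon energies in keV: cos(theta) = 1 - me*c^2*(1/e1 - 1/e0)."""
    cos_t = 1.0 - ME_C2 * (1.0 / e1 - 1.0 / e0)
    return math.acos(max(-1.0, min(1.0, cos_t)))

def scatter_z(e0, e1, hit_xyz):
    """Position along the beam axis (beam along +z through x = y = 0) where a
    photon of energy e0 scattered to reach the detector pixel at hit_xyz
    with measured energy e1."""
    xd, yd, zd = hit_xyz
    theta = scatter_angle(e0, e1)
    rho = math.hypot(xd, yd)        # radial distance of the hit from the beam axis
    return zd - rho / math.tan(theta)

# Example: 600 keV beam, photon detected at (80, 0, 40) mm with 400 keV.
print(scatter_z(600.0, 400.0, (80.0, 0.0, 40.0)))
```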

  3. RayXpert V1: 3D software for the gamma dose rate calculation by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Peyrard, P. F.; Pourrouquet, P.; Dossat, C.; Thomas, J. C.; Chatry, N.; Lavielle, D.; Chatry, C.

    2014-06-01

    RayXpert has been developed to ease access to the power and accuracy of the 3D Monte Carlo method in the field of gamma dose rate estimation. Optimization methods have been implemented to address dose calculation behind thick 3D structures. At the same time, the engineering interface makes all the preprocessing tasks (modeling, material settings, …) faster by using predefined tables and push-button features.

  5. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation onto optimal 3D thread blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that the spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance is highly convenient under a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our simulation possibilities to sizes of L = 32 and 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
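
    The adaptive mid-point insertion strategy can be sketched in a few lines: measure the swap acceptance rate of each adjacent temperature pair and insert a mid-point wherever the rate drops below a threshold. The threshold and temperature values below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def insert_midpoints(temps, exchange_rates, r_min=0.2):
        """Insert a mid-point temperature wherever the measured
        replica-exchange rate between neighbours falls below r_min."""
        new_temps = [temps[0]]
        for t_lo, t_hi, rate in zip(temps[:-1], temps[1:], exchange_rates):
            if rate < r_min:
                new_temps.append(0.5 * (t_lo + t_hi))  # mid-point insertion
            new_temps.append(t_hi)
        return np.array(new_temps)

    temps = np.array([1.0, 1.2, 1.4, 1.6])
    rates = np.array([0.35, 0.08, 0.30])     # the middle gap is a bottleneck
    print(insert_midpoints(temps, rates))    # -> [1.  1.2 1.3 1.4 1.6]
    ```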

  6. Monte carlo simulation of 3-D buffered Ca(2+) diffusion in neuroendocrine cells.

    PubMed Central

    Gil, A; Segura, J; Pertusa, J A; Soria, B

    2000-01-01

    Buffered Ca(2+) diffusion in the cytosol of neuroendocrine cells is a plausible explanation for the slowness and latency in the secretion of hormones. We have developed a Monte Carlo simulation to treat the problem of 3-D diffusion and kinetic reactions of ions and buffers. The 3-D diffusion is modeled as a random walk process that follows the path of each ion and buffer molecule, combined locally with a stochastic treatment of the first-order kinetic reactions involved. Such modeling is able to predict [Ca(2+)] and buffer concentration time courses regardless of how low the calcium influx is, and it is therefore a convenient method for dealing with physiological calcium currents and concentrations. We study the effects of the diffusional and kinetic parameters of the model on the concentration time courses as well as on the local equilibrium of buffers with calcium. An immobile and fast endogenous buffer, as described previously (Biophys. J. 72:674-690), was able to reach local equilibrium with calcium; however, the exogenous buffers considered are displaced drastically from equilibrium at the start of the calcium pulse, particularly below the pores. The versatility of the method also allows the effect of different arrangements of calcium channels on submembrane gradients to be studied, including random distributions of calcium channels and channel clusters. The simulation shows how the particular distribution of channels or clusters can be of relevance for secretion in the case where the distribution of release granules is correlated with the channels or clusters. PMID:10620270
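
    A minimal sketch of the core idea, 3-D random-walk diffusion combined with a stochastic first-order binding step (free Ca2+ becoming buffer-bound), is shown below. It is a toy version of the approach, with all parameter values assumed rather than taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    D = 220.0                      # um^2/s, free Ca2+ diffusion coefficient
    dt = 1e-6                      # s, time step
    step = np.sqrt(2.0 * D * dt)   # rms step per axis, um
    k_eff = 5e3                    # 1/s, pseudo-first-order binding rate

    n_ions = 10_000
    pos = np.zeros((n_ions, 3))    # all ions enter at a channel mouth
    bound = np.zeros(n_ions, dtype=bool)
    p_bind = 1.0 - np.exp(-k_eff * dt)   # P(bind within one step)

    for _ in range(200):           # simulate 200 us
        free_idx = np.flatnonzero(~bound)
        pos[free_idx] += rng.normal(0.0, step, size=(free_idx.size, 3))
        bound[free_idx[rng.random(free_idx.size) < p_bind]] = True

    print(f"fraction bound after 200 us: {bound.mean():.2f}")
    ```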

  7. Monte Carlo based analysis of confocal peak extraction uncertainty

    NASA Astrophysics Data System (ADS)

    Liu, Chenguang; Liu, Yan; Zheng, Tingting; Tan, Jiubin; Liu, Jian

    2017-10-01

    Localisation of axial peaks is essential for height determination in confocal microscopy. Several algorithms have been proposed for reliable height extraction in surface topography measurements. However, most of these algorithms use nonlinear processing, which precludes analytical estimation of the peak height uncertainty. A Monte Carlo based standard uncertainty analysis model is developed here to evaluate the precision of height extraction algorithms. The key parameters of this model are the vertical sampling deviation and the size of the scanning pitch. The height extraction uncertainties of the centroid algorithm and of nonlinear fitting algorithms were calculated using simulations. Our results offer a reference for selecting algorithms for confocal metrology.
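
    The Monte Carlo evaluation idea can be sketched directly: repeatedly add noise to a sampled axial response, extract the peak with the centroid algorithm, and read the bias and standard deviation off the ensemble. The Gaussian response shape, pitch and noise level below are assumptions for illustration.

    ```python
    import numpy as np

    def centroid_peak(z, intensity):
        """Centroid estimate of the axial-response peak position."""
        w = intensity - intensity.min()
        return np.sum(z * w) / np.sum(w)

    rng = np.random.default_rng(3)
    true_peak, fwhm, dz, noise = 0.13, 1.0, 0.1, 0.02   # um, assumed values
    sigma = fwhm / 2.3548
    z = np.arange(-2.0, 2.0 + dz, dz)                   # axial sampling grid

    estimates = []
    for _ in range(10_000):
        signal = np.exp(-0.5 * ((z - true_peak) / sigma) ** 2)
        estimates.append(centroid_peak(z, signal + rng.normal(0, noise, z.size)))

    est = np.array(estimates)
    print(f"bias = {est.mean() - true_peak:+.4f} um, std = {est.std():.4f} um")
    ```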

  8. Influences of 3D PET scanner components on increased scatter evaluated by a Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Hirano, Yoshiyuki; Koshino, Kazuhiro; Iida, Hidehiro

    2017-05-01

    Monte Carlo simulation is widely applied to evaluate the performance of three-dimensional positron emission tomography (3D-PET). For accurate scatter simulations, all components that generate scatter need to be taken into account. The aim of this work was to identify the components that influence scatter. The simulated geometries of a PET scanner were: a precisely reproduced configuration including all of the components; a configuration with the bed, the tunnel and shields; a configuration with the bed and shields; and the simplest geometry with only the bed. We measured and simulated the scatter fraction using two different set-ups: (1) as prescribed by NEMA-NU 2007 and (2) a similar set-up but with a shorter line source, so that all activity was contained only inside the field-of-view (FOV), in order to reduce influences of components outside the FOV. The scatter fractions for the two experimental set-ups were, respectively, 45% and 38%. Regarding the geometrical configurations, the former two configurations gave simulation results in good agreement with the experimental results, but simulation results of the simplest geometry were significantly different at the edge of the FOV. From the simulation of the precise configuration, the object (scatter phantom) was the source of more than 90% of the scatter. This was also confirmed by visualization of photon trajectories. Then, the bed and the tunnel were mainly the sources of the rest of the scatter. From the simulation results, we concluded that the precise construction was not needed; the shields, the tunnel, the bed and the object were sufficient for accurate scatter simulations.

  9. Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124.

    PubMed

    Moreau, M; Buvat, I; Ammour, L; Chouin, N; Kraeber-Bodéré, F; Chérel, M; Carlier, T

    2015-03-21

    Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated in addition to a Siddon-based ray tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes, both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and of the physical effects in a uniform water-equivalent phantom was effective in achieving reasonable quantitative accuracy in homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.
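
    The role of the system matrix in reconstruction can be illustrated with the standard MLEM update, the generic algorithm on which such Monte Carlo based reconstructions build (this is not the authors' specific implementation, and the tiny system below is synthetic).

    ```python
    import numpy as np

    def mlem(system_matrix, sinogram, n_iter=50):
        """Standard MLEM iteration: x <- x / (A^T 1) * A^T (y / (A x))."""
        a = system_matrix                  # (n_bins, n_voxels), e.g. MC-based
        sens = a.sum(axis=0)               # sensitivity image A^T 1
        x = np.ones(a.shape[1])
        for _ in range(n_iter):
            proj = a @ x                   # forward projection
            ratio = np.divide(sinogram, proj, out=np.zeros_like(proj),
                              where=proj > 0)
            x *= (a.T @ ratio) / np.maximum(sens, 1e-12)
        return x

    # Tiny synthetic check: 2 voxels, 3 detector bins.
    a = np.array([[0.8, 0.1], [0.1, 0.8], [0.1, 0.1]])
    x_true = np.array([4.0, 1.0])
    print(mlem(a, a @ x_true))   # converges towards [4, 1] for noiseless data
    ```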

  10. Canopy polarized BRDF simulation based on non-stationary Monte Carlo 3-D vector RT modeling

    NASA Astrophysics Data System (ADS)

    Kallel, Abdelaziz; Gastellu-Etchegorry, Jean Philippe

    2017-03-01

    Vector radiative transfer (VRT) has been widely used to simulate the polarized reflectance of the atmosphere and ocean. However, it is still not properly used to describe the polarized reflectance of vegetation cover. In this study, we propose a 3-D VRT model based on a modified Monte Carlo (MC) forward ray tracing simulation to analyze vegetation canopy reflectance. Two kinds of leaf scattering are taken into account: (i) Lambertian diffuse reflectance and transmittance and (ii) specular reflection. A new method to estimate the condition on leaf orientation required to produce reflection is proposed, and its probability of occurrence, Pl,max, is computed. It is then shown that Pl,max is low, but when reflection happens, the corresponding radiance Stokes vector, Io, is very high. Such a phenomenon dramatically increases the MC variance and yields an irregular reflectance distribution function. For better regularization, we propose a non-stationary MC approach that simulates reflection for each sunlit leaf assuming that its orientation is randomly chosen according to its angular distribution. It is shown in this case that the average canopy reflection is proportional to Pl,max · Io, which produces a smooth distribution. Two experiments are conducted: (i) assuming leaf light polarization is due only to Fresnel reflection and (ii) the general polarization case. In the former experiment, our results confirm that in the forward direction the canopy polarizes light horizontally. In addition, they show that in inclined forward directions, diagonal polarization can be observed. In the latter experiment, polarization is produced in all orientations. It is particularly pointed out that specular polarization explains only part of the forward polarization. Diffuse scattering polarizes light horizontally and vertically in the forward and backward directions, respectively. A weak circular polarization signal is also observed near the backscattering direction. Finally, validation of the non…

  11. Conceptual detector development and Monte Carlo simulation of a novel 3D breast computed tomography system

    NASA Astrophysics Data System (ADS)

    Ziegle, Jens; Müller, Bernhard H.; Neumann, Bernd; Hoeschen, Christoph

    2016-03-01

    A new 3D breast computed tomography (CT) system is under development, enabling imaging of microcalcifications in a fully uncompressed breast, including posterior chest wall tissue. The system uses a steered electron beam impinging on small tungsten targets surrounding the breast to emit X-rays. A realization of the corresponding detector concept is presented in this work, and it is modeled through Monte Carlo simulations in order to quantify first characteristics of transmission and secondary photons. The modeled system comprises a vertical alignment of linear detectors held by a case that also hosts the breast. Detectors are separated by gaps to allow the passage of X-rays towards the breast volume. The detectors located directly on the opposite side of the gaps detect incident X-rays. Mechanically moving parts in an imaging system increase the duration of image acquisition and thus can cause motion artifacts. A major advantage of the presented system design is therefore the combination of fixed detectors and a fast steering electron beam, which enables a greatly reduced scan time. Thereby potential motion artifacts are reduced, so that the visualization of small structures such as microcalcifications is improved. The result of the simulation of a single projection shows high attenuation by parts of the detector electronics, causing low count levels at the opposing detectors, which would require a flat-field correction; it also shows a secondary-to-transmission ratio of all counted X-rays of less than 1 percent. Additionally, a single slice with details of various sizes was reconstructed using filtered backprojection. The smallest detail still visible in the reconstructed image has a size of 0.2 mm.

  12. Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124

    NASA Astrophysics Data System (ADS)

    Moreau, M.; Buvat, I.; Ammour, L.; Chouin, N.; Kraeber-Bodéré, F.; Chérel, M.; Carlier, T.

    2015-03-01

    Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated in addition to a Siddon-based ray tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes, both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and of the physical effects in a uniform water-equivalent phantom was effective in achieving reasonable quantitative accuracy in homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.

  13. 3D Monte Carlo model of optical transport in laser-irradiated cutaneous vascular malformations

    NASA Astrophysics Data System (ADS)

    Majaron, Boris; Milanič, Matija; Jia, Wangcun; Nelson, J. S.

    2010-11-01

    We have developed a three-dimensional Monte Carlo (MC) model of optical transport in skin and applied it to the analysis of port wine stain treatment with sequential laser irradiation and intermittent cryogen spray cooling. Our MC model extends the approach of the popular multi-layer model by Wang et al. [1] to three dimensions, thus allowing treatment of skin inclusions with more complex geometries and arbitrary irradiation patterns. To overcome the obvious drawbacks of either "escape" or "mirror" boundary conditions at the lateral boundaries of the finely discretized volume of interest (VOI), photons exiting the VOI are propagated in laterally infinite tissue layers with appropriate optical properties until they lose all their energy, escape into the air, or return to the VOI; the energy deposition outside of the VOI is not computed and recorded. After discussing the selection of tissue parameters, we apply the model to the analysis of blood photocoagulation and collateral thermal damage in the treatment of port wine stain (PWS) lesions with sequential laser irradiation and intermittent cryogen spray cooling.
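
    The photon transport underlying such models follows the classic weighted-packet scheme: sample a free path from the attenuation coefficient, deposit a share of the packet weight at each event, and scatter via the Henyey-Greenstein phase function. The single-layer toy below illustrates that loop; the optical properties and the hard weight cutoff are illustrative simplifications (production codes use Russian roulette and layered geometry).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    mu_a, mu_s, g = 3.0, 30.0, 0.9      # 1/cm, 1/cm, anisotropy (illustrative)
    mu_t = mu_a + mu_s

    def hg_cosine(g, rng):
        """Sample cos(theta) from the Henyey-Greenstein phase function."""
        frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
        return (1.0 + g * g - frac * frac) / (2.0 * g)

    absorbed, n_packets = 0.0, 2000
    for _ in range(n_packets):
        weight, z, uz = 1.0, 0.0, 1.0   # launch straight down at the surface
        while weight > 1e-2:            # toy cutoff instead of Russian roulette
            z += uz * (-np.log(rng.random()) / mu_t)   # sample free path
            if z < 0.0:
                break                   # escaped through the surface
            absorbed += weight * mu_a / mu_t           # deposit a weight share
            weight *= mu_s / mu_t
            cos_t = hg_cosine(g, rng)   # new polar direction cosine
            uz = uz * cos_t + np.sqrt(max(0.0, 1 - uz * uz) * (1 - cos_t ** 2)) \
                 * np.cos(2 * np.pi * rng.random())
    print(f"absorbed fraction: {absorbed / n_packets:.2f}")
    ```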

  14. A Monte Carlo Dispersion Analysis of the X-33 Simulation Software

    NASA Technical Reports Server (NTRS)

    Williams, Peggy S.

    2001-01-01

    A Monte Carlo dispersion analysis has been completed on the X-33 software simulation. The simulation is based on a preliminary version of the software and is primarily used in an effort to define and refine how a Monte Carlo dispersion analysis would have been done on the final flight-ready version of the software. This report gives an overview of the processes used in the implementation of the dispersions and describes the methods used to accomplish the Monte Carlo analysis. Selected results from 1000 Monte Carlo runs are presented with suggestions for improvements in future work.

  15. Development of an accurate 3D Monte Carlo broadband atmospheric radiative transfer model

    NASA Astrophysics Data System (ADS)

    Jones, Alexandra L.

    Radiation is the ultimate source of energy that drives our weather and climate. It is also the fundamental quantity detected by satellite sensors from which earth's properties are inferred. Radiative energy from the sun and emitted from the earth and atmosphere is redistributed by clouds in one of their most important roles in the atmosphere. Without accurately representing these interactions we greatly decrease our ability to successfully predict climate change, weather patterns, and to observe our environment from space. The remote sensing algorithms and dynamic models used to study and observe earth's atmosphere all parameterize radiative transfer with approximations that reduce or neglect horizontal variation of the radiation field, even in the presence of clouds. Despite having complete knowledge of the underlying physics at work, these approximations persist due to perceived computational expense. In the current context of high resolution modeling and remote sensing observations of clouds, from shallow cumulus to deep convective clouds, and given our ever advancing technological capabilities, these approximations have been exposed as inappropriate in many situations. This presents a need for accurate 3D spectral and broadband radiative transfer models to provide bounds on the interactions between clouds and radiation to judge the accuracy of similar but less expensive models and to aid in new parameterizations that take into account 3D effects when coupled to dynamic models of the atmosphere. Developing such a state of the art model based on the open source, object-oriented framework of the I3RC Monte Carlo Community Radiative Transfer ("IMC-original") Model is the task at hand. It has involved incorporating (1) thermal emission sources of radiation ("IMC+emission model"), allowing it to address remote sensing problems involving scattering of light emitted at earthly temperatures as well as spectral cooling rates, (2) spectral integration across an arbitrary

  16. Monte Carlo Test Assembly for Item Pool Analysis and Extension

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2005-01-01

    A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…

  17. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2013-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  18. T-Opt: A 3D Monte Carlo simulation for light delivery design in photodynamic therapy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Honda, Norihiro; Hazama, Hisanao; Awazu, Kunio

    2017-02-01

    The interstitial photodynamic therapy (iPDT) with 5-aminolevulinic acid (5-ALA) is a safe and feasible treatment modality for malignant glioblastoma. In order to cover the tumour volume, the exact positions of the light diffusers within the lesion need to be decided precisely. The aim of this study is the development of a method for evaluating the treatment volume with a 3D Monte Carlo simulation for iPDT using 5-ALA. Monte Carlo simulations of the fluence rate were performed using the optical properties of normal brain tissue and of brain tissue infiltrated by tumor cells. The 3D Monte Carlo simulation was used to calculate the positions of the light diffusers within the lesion and the light transport. The fluence rate near the diffuser was maximal and decreased exponentially with distance. The simulation can calculate the amount of singlet oxygen generated by PDT. In order to increase the accuracy of the simulation results, the parameters for the simulation include the quantum yield of singlet oxygen generation, the accumulated concentration of photosensitizer within tissue, the fluence rate, and the molar extinction coefficient at the wavelength of the excitation light. The simulation is useful for evaluating the treatment region of iPDT with 5-ALA.
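
    For intuition about the exponential fall-off mentioned above, a diffusion-approximation estimate of the fluence rate around a linear diffuser can be written down directly. This is a rough analytical stand-in for the full Monte Carlo, with assumed (not the authors') optical properties.

    ```python
    import numpy as np

    mu_a, mu_s_prime = 0.2, 20.0            # 1/cm, brain-like values (assumed)
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))   # diffusion coefficient, cm
    mu_eff = np.sqrt(mu_a / D)              # effective attenuation, 1/cm

    def fluence(r_cm, power_w):
        """Point-source fluence rate in an infinite medium, W/cm^2."""
        return power_w * np.exp(-mu_eff * r_cm) / (4.0 * np.pi * D * r_cm)

    # Model a 2 cm diffuser as 20 point emitters along z and sum contributions.
    z_src = np.linspace(-1.0, 1.0, 20)

    def diffuser_fluence(radial_cm, z_cm):
        r = np.sqrt(radial_cm ** 2 + (z_cm - z_src) ** 2)
        return np.sum(fluence(r, power_w=1.0 / z_src.size))

    for rad in (0.1, 0.3, 0.5, 1.0):
        print(f"r = {rad:.1f} cm: {diffuser_fluence(rad, 0.0):.3f} W/cm^2")
    ```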

  19. Direct simulation Monte Carlo analysis on parallel processors

    NASA Technical Reports Server (NTRS)

    Wilmoth, Richard G.

    1989-01-01

    A method is presented for executing a direct simulation Monte Carlo (DSMC) analysis using parallel processing. The method is based on using domain decomposition to distribute the work load among multiple processors, and the DSMC analysis is performed completely in parallel. Message passing is used to transfer molecules between processors and to provide the synchronization necessary for the correct physical simulation. Benchmark problems are described for testing the method and results are presented which demonstrate the performance on two commercially available multicomputers. The results show that reasonable parallel speedup and efficiency can be obtained if the problem is properly sized to the number of processors. It is projected that with a massively parallel system, performance exceeding that of current supercomputers is possible.

  20. Applicability of 3D Monte Carlo simulations for local values calculations in a PWR core

    NASA Astrophysics Data System (ADS)

    Bernard, Franck; Cochet, Bertrand; Jinaphanh, Alexis; Jacquet, Olivier

    2014-06-01

    As technical support of the French Nuclear Safety Authority, IRSN has been developing the MORET Monte Carlo code for many years in the framework of criticality safety assessment and is now working to extend its application to reactor physics. For that purpose, besides the validation for criticality safety (more than 2000 benchmarks from the ICSBEP Handbook have been modeled and analyzed), a complementary validation phase for reactor physics has been started, with benchmarks from the IRPhEP Handbook and others. In particular, to evaluate the applicability of MORET and other Monte Carlo codes for local flux or power density calculations in large power reactors, it has been decided to contribute to the "Monte Carlo Performance Benchmark" (hosted by OECD/NEA). The aim of this benchmark is to monitor, in forthcoming decades, the performance progress of detailed Monte Carlo full core calculations. More precisely, it measures their advancement towards achieving high statistical accuracy in reasonable computation time for local power at the fuel pellet level. A full PWR reactor core is modeled to compute local power densities for more than 6 million fuel regions. This paper presents results obtained at IRSN for this benchmark with MORET and comparisons with MCNP. The number of fuel elements is so large that source convergence as well as statistical convergence issues could cause large errors in local tallies, especially in peripheral zones. Various sampling and tracking methods have been implemented in MORET, and their operational effects on such a complex case have been studied. Beyond convergence issues, computing local values in so many fuel regions could cause a prohibitive slowing down of neutron tracking. To avoid this, energy grid unification and tally preparation before tracking have been implemented, tested and proved to be successful. In this particular case, IRSN obtained promising results with MORET compared to MCNP, in terms of local power densities, standard…

  1. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael

    2007-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  2. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger

    2008-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  3. Photons, Electrons and Positrons Transport in 3D by Monte Carlo Techniques

    SciTech Connect

    2014-12-01

    Version 04 FOTELP-2014 is a new compact general-purpose version of the previous FOTELP-2K6 code, designed to simulate the transport of photons, electrons and positrons through three-dimensional material and source geometries by Monte Carlo techniques, using the subroutine package PENGEOM from the PENELOPE code under Linux-based and Windows OS. This new version includes the routine ELMAG for electron and positron transport simulation in electric and magnetic fields, a RESUME option, and the routine TIMER for obtaining the starting random number and for measuring the simulation time.

  4. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    SciTech Connect

    Cullen, D.E

    2000-11-22

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  5. TART98 a coupled neutron-photon 3-D, combinatorial geometry time dependent Monte Carlo Transport code

    SciTech Connect

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.

  6. Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code

    SciTech Connect

    Blaise, P.; Colomba, A.

    2012-07-01

    The present work details the first analysis of the 3D void phase conducted during the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide tubes. The void bubble is modelled by a small waterproof 5x5 fuel pin parallelepiped box of 11 cm height, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble in order to understand the effects of increased void fraction along the fuel pin, 3 and 5 bubbles have been stacked axially, from the core mid-plane. The C/E comparisons obtained with the Monte Carlo code Tripoli4 for both radial and axial fission rate distributions, and in particular the reproduction of the very important flux gradients at the void/water interfaces, changing as the bubble is displaced along the z-axis, are very satisfactory. This demonstrates both the capability of the code and its library to reproduce this kind of situation, and the very good quality of the experimental results, confirming the UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V&V program for the future APOLLO3 deterministic code of CEA, starting in 2012, and its V&V benchmarking database. (authors)

  7. Commissioning a CT-compatible LDR tandem and ovoid applicator using Monte Carlo calculation and 3D dosimetry.

    PubMed

    Adamson, Justus; Newton, Joseph; Yang, Yun; Steffey, Beverly; Cai, Jing; Adamovics, John; Oldham, Mark; Chino, Junzo; Craciunescu, Oana

    2012-07-01

    To determine the geometric and dose attenuation characteristics of a new commercially available CT-compatible LDR tandem and ovoid (T&O) applicator using Monte Carlo calculation and 3D dosimetry. For geometric characterization, we quantified physical dimensions and investigated a systematic difference found to exist between the nominal ovoid angle and the angle at which the afterloading buckets fall within the ovoid. For dosimetric characterization, we determined source attenuation through asymmetric gold shielding in the buckets using Monte Carlo simulations and 3D dosimetry. The Monte Carlo code MCNP5 was used to simulate 1.5 × 10(9) photon histories from a (137)Cs source placed in the bucket to achieve a statistical uncertainty of 1% at a 6 cm distance. For 3D dosimetry, the distribution about an unshielded source was first measured to evaluate the system for (137)Cs, after which the distribution was measured about sources placed in each bucket. Cylindrical PRESAGE® dosimeters (9.5 cm diameter, 9.2 cm height) with a central channel bored for source placement were supplied by Heuris Inc. The dosimeters were scanned with the Duke Large field of view Optical CT-Scanner before and after delivering a nominal dose at 1 cm of 5-8 Gy. During irradiation the dosimeter was placed in a water phantom to provide backscatter. The optical CT scan lasted 15 min, during which 720 projections were acquired at 0.5° increments, and a 3D distribution was reconstructed with a (0.05 cm)(3) isotropic voxel size. The distributions about the buckets were used to calculate a 3D distribution of the transmission rate through the bucket, which was applied to a clinical CT-based T&O implant plan. The systematic difference in bucket angle relative to the nominal ovoid angle (105°) was 3.1°-4.7°. A systematic difference in bucket angle of 1°, 5°, and 10° caused a 1% ± 0.1%, 1.7% ± 0.4%, and 2.6% ± 0.7% increase in rectal dose, respectively, with smaller effect on dose to Point A, bladder…

  8. Commissioning a CT-compatible LDR tandem and ovoid applicator using Monte Carlo calculation and 3D dosimetry

    SciTech Connect

    Adamson, Justus; Newton, Joseph; Yang Yun; Steffey, Beverly; Cai, Jing; Adamovics, John; Oldham, Mark; Chino, Junzo; Craciunescu, Oana

    2012-07-15

    Purpose: To determine the geometric and dose attenuation characteristics of a new commercially available CT-compatible LDR tandem and ovoid (T&O) applicator using Monte Carlo calculation and 3D dosimetry. Methods: For geometric characterization, we quantified physical dimensions and investigated a systematic difference found to exist between the nominal ovoid angle and the angle at which the afterloading buckets fall within the ovoid. For dosimetric characterization, we determined source attenuation through asymmetric gold shielding in the buckets using Monte Carlo simulations and 3D dosimetry. The Monte Carlo code MCNP5 was used to simulate 1.5 × 10^9 photon histories from a 137Cs source placed in the bucket to achieve a statistical uncertainty of 1% at a 6 cm distance. For 3D dosimetry, the distribution about an unshielded source was first measured to evaluate the system for 137Cs, after which the distribution was measured about sources placed in each bucket. Cylindrical PRESAGE® dosimeters (9.5 cm diameter, 9.2 cm height) with a central channel bored for source placement were supplied by Heuris Inc. The dosimeters were scanned with the Duke Large field of view Optical CT-Scanner before and after delivering a nominal dose at 1 cm of 5-8 Gy. During irradiation the dosimeter was placed in a water phantom to provide backscatter. The optical CT scan lasted 15 min, during which 720 projections were acquired at 0.5° increments, and a 3D distribution was reconstructed with a (0.05 cm)^3 isotropic voxel size. The distributions about the buckets were used to calculate a 3D distribution of the transmission rate through the bucket, which was applied to a clinical CT-based T&O implant plan. Results: The systematic difference in bucket angle relative to the nominal ovoid angle (105°) was 3.1°-4.7°. A systematic difference in bucket angle of 1°, 5°, and…

  9. A Monte Carlo model for 3D grain evolution during welding

    DOE PAGES

    Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena

    2017-08-04

    Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. Furthermore, the model allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.

  10. A Monte Carlo model for 3D grain evolution during welding

    NASA Astrophysics Data System (ADS)

    Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena

    2017-09-01

    Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
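
    The Potts Monte Carlo principle behind such grain-evolution models is compact: lattice sites carry grain IDs, the energy counts unlike neighbours, and Metropolis flips coarsen the structure. The 2D toy below illustrates the mechanism (SPPARKS and the paper's model work in 3D and add melting/solidification); lattice size, grain count and temperature are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    L, q, kT = 64, 32, 0.4
    spins = rng.integers(0, q, size=(L, L))     # random initial grain IDs

    def site_energy(s, i, j):
        """Number of unlike nearest neighbours (periodic boundaries)."""
        nbrs = (s[(i+1) % L, j], s[(i-1) % L, j], s[i, (j+1) % L], s[i, (j-1) % L])
        return sum(int(n != s[i, j]) for n in nbrs)

    def potts_sweep(s):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            old = s[i, j]
            e_old = site_energy(s, i, j)
            s[i, j] = rng.integers(0, q)        # propose a new grain ID
            delta = site_energy(s, i, j) - e_old
            if delta > 0 and rng.random() >= np.exp(-delta / kT):
                s[i, j] = old                   # reject: restore the old ID

    def unlike_fraction(s):
        """Fraction of unlike-neighbour bonds; decreases as grains coarsen."""
        return float(np.mean([(s != np.roll(s, 1, axis=a)).mean() for a in (0, 1)]))

    print("before:", round(unlike_fraction(spins), 2))
    for _ in range(20):
        potts_sweep(spins)
    print("after 20 sweeps:", round(unlike_fraction(spins), 2))
    ```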

  11. Parametric 3D Atmospheric Reconstruction in Highly Variable Terrain with Recycled Monte Carlo Paths and an Adapted Bayesian Inference Engine

    NASA Technical Reports Server (NTRS)

    Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.

    2012-01-01

    We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low-resolution noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms best described as "multi-pixel" techniques that necessarily call for a 3D radiative transfer model (but demonstrated here in 2D); they can be added to conventional ones that typically exploit multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology adaptively uses, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.

  12. Parametric 3D Atmospheric Reconstruction in Highly Variable Terrain with Recycled Monte Carlo Paths and an Adapted Bayesian Inference Engine

    NASA Technical Reports Server (NTRS)

    Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.

    2012-01-01

    We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low-resolution noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms best described as "multi-pixel" techniques that necessarily call for a 3D radiative transfer model (but demonstrated here in 2D); they can be added to conventional ones that typically exploit multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology adaptively uses, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.

  13. Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations

    SciTech Connect

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2012-06-15

    Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB© code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB© code was able to grow semi-realistic cell distributions (~2 × 10^8 cells in 1 cm^3) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
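
    The cell-packing idea can be sketched as rejection sampling of randomized ellipsoids. For brevity, this sketch replaces the paper's exact eigenvalue overlap test with a conservative bounding-sphere check, and all dimensions are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    box = 100.0                       # um, edge of the tumour sub-volume
    cells = []                        # accepted (centre, semi-axes) pairs

    def overlaps(centre, semi_axes):
        """Conservative check: compare bounding spheres of the ellipsoids."""
        r_new = semi_axes.max()
        for c, s in cells:
            if np.linalg.norm(centre - c) < r_new + s.max():
                return True
        return False

    attempts = 0
    while len(cells) < 200 and attempts < 20_000:
        attempts += 1
        centre = rng.uniform(0.0, box, size=3)
        semi_axes = rng.uniform(4.0, 8.0, size=3)   # um, randomized cell sizes
        if not overlaps(centre, semi_axes):
            cells.append((centre, semi_axes))

    print(f"placed {len(cells)} cells in {attempts} attempts")
    ```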

  14. Monte Carlo Simulation of rainfall hyetographs for analysis and design

    NASA Astrophysics Data System (ADS)

    Kottegoda, N. T.; Natale, L.; Raiteri, E.

    2014-11-01

    Observations of high-intensity rainfall have been recorded at gauging stations in many parts of the world. In some instances the resulting data sets may not be sufficient in their scope and variability for purposes of analysis or design. By directly incorporating statistical properties of hyetographs with respect to the number of events per year, storm duration, peak intensity, cumulative rainfall, and rising and falling limbs, we develop a fundamentally basic procedure for Monte Carlo simulation. Rainfall from Pavia and Milano in the Lombardia region and from five gauging stations in the Piemonte region of northern Italy is used in this study. Firstly, we compare the hydrologic output from our model with that from other design storm methods for validation. Secondly, depth-duration-frequency curves are obtained from historical data, and corresponding functions from simulated data are compared for further validation of the procedure. By adopting this original procedure one can simulate an unlimited range of realistic hyetographs that can be used in risk assessment. The potential for extension to ungauged catchments is shown.
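
    A minimal storm generator in this spirit samples the event duration, peak intensity and peak position, then assembles a triangular hyetograph. The distributions and parameters below are illustrative assumptions, not values fitted to the Lombardia/Piemonte records.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def simulate_event():
        duration = rng.gamma(2.0, 1.5)                    # hours
        peak = rng.lognormal(mean=3.0, sigma=0.5)         # mm/h
        t_peak = duration * rng.beta(2.0, 3.0)            # rising limb ends here
        t = np.linspace(0.0, duration, 25)
        rising = peak * t / max(t_peak, 1e-6)
        falling = peak * (duration - t) / max(duration - t_peak, 1e-6)
        return t, np.where(t <= t_peak, rising, falling)  # intensity, mm/h

    t, intensity = simulate_event()
    # Trapezoidal integration of intensity over time gives the event depth.
    depth_mm = np.sum(0.5 * (intensity[1:] + intensity[:-1]) * np.diff(t))
    print(f"duration {t[-1]:.1f} h, peak {intensity.max():.0f} mm/h, "
          f"depth {depth_mm:.0f} mm")
    ```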

  15. Fast Monte Carlo for ion beam analysis simulations

    NASA Astrophysics Data System (ADS)

    Schiettekatte, François

    2008-04-01

    A Monte Carlo program for the simulation of ion beam analysis data is presented. It combines mainly four features: (i) ion slowdown is computed separately from the main scattering/recoil event, which is directed towards the detector. (ii) A virtual detector, that is, a detector larger than the actual one, can be used, followed by trajectory correction. (iii) For each collision during ion slowdown, scattering angle components are extracted from tables. (iv) Tables of scattering angle components, stopping power and energy straggling are indexed using the binary representation of floating point numbers, which allows logarithmic distribution of these tables without the computation of logarithms to access them. Tables are sufficiently fine-grained that interpolation is not necessary. Ion slowdown computation thus avoids trigonometric, inverse and transcendental function calls and, as much as possible, divisions. All these improvements make possible the computation of 10^7 collisions/s on current PCs. Results for transmitted ions of several masses in various substrates compare well with those obtained using SRIM-2006 in terms of both angular and energy distributions, as long as a sufficiently large number of collisions is considered for each ion. Examples of simulated spectra show good agreement with experimental data, although a large detector rather than the virtual detector has to be used to properly simulate background signals that are due to plural collisions. The program, written in standard C, is open-source and distributed under the terms of the GNU General Public License.
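
    The table-indexing trick in point (iv) can be sketched as follows: reinterpret a positive float's IEEE-754 bit pattern as an integer and keep the exponent plus a few mantissa bits, which yields an approximately logarithmic binning with no log() call. The table size, shift and tabulated function below are illustrative assumptions, not Corteo's actual layout.

    ```python
    import numpy as np

    N_SHIFT = 16          # keep exponent + 7 mantissa bits -> 65536 bins

    def float_index(x):
        """Index derived from the binary representation of float32 x."""
        bits = np.array(x, dtype=np.float32).view(np.uint32)[()]
        return int(bits >> N_SHIFT)

    # Tabulate a stand-in for a stopping-power-like function f(E) on this grid.
    table = {}
    for i in range(1 << 16):
        # Reconstruct a representative float for this bin (low bits zeroed).
        e = np.array(i << N_SHIFT, dtype=np.uint32).view(np.float32)[()]
        if np.isfinite(e) and e > 0:
            table[i] = 1.0 / np.sqrt(float(e))

    for energy in (0.01, 1.0, 100.0, 10_000.0):
        print(energy, table[float_index(energy)])
    ```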

  16. Time series analysis of Monte Carlo neutron transport calculations

    NASA Astrophysics Data System (ADS)

    Nease, Brian Robert

    A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
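
    The essence of the POP/AR(1) idea, reading a mode-eigenvalue ratio off a lag-1 autocorrelation, can be seen in a synthetic example; the dominance ratio and noise model below are assumptions for illustration, not output of an actual transport calculation.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    k1_over_k0 = 0.85                      # assumed true eigenvalue ratio

    # Synthetic AR(1) series standing in for a projected mode amplitude:
    # a_{n+1} = (k1/k0) * a_n + noise
    n_cycles = 5000
    a = np.zeros(n_cycles)
    for n in range(n_cycles - 1):
        a[n + 1] = k1_over_k0 * a[n] + rng.normal()

    # The lag-1 autocorrelation estimates the AR(1) coefficient, i.e. k1/k0.
    a0 = a - a.mean()
    rho1 = np.dot(a0[:-1], a0[1:]) / np.dot(a0, a0)
    print(f"estimated k1/k0 = {rho1:.3f}")  # multiply by the code's k0 for k1
    ```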

  17. Monte Carlo Simulations for Likelihood Analysis of the PEN experiment

    NASA Astrophysics Data System (ADS)

    Glaser, Charles; PEN Collaboration

    2017-01-01

    The PEN collaboration performed a precision measurement of the π+ → e+νe(γ) branching ratio, with the goal of obtaining a relative uncertainty of 5 × 10^-4 or better, at the Paul Scherrer Institute. A precision measurement of the branching ratio Γ(π → eν(γ))/Γ(π → μν(γ)) can be used to set mass bounds on "new", or non V-A, particles and interactions. This ratio also proves to be one of the most sensitive tests of lepton universality. The PEN detector consists of beam counters, an active target, a mini time projection chamber, a multi-wire proportional chamber, a plastic scintillating hodoscope, and a CsI electromagnetic calorimeter. The Geant4 Monte Carlo simulation is used to construct ultra-realistic events by digitizing energies and times, creating synthetic target waveforms, and fully accounting for photo-electron statistics. We focus on the detailed detector response to specific decay and background processes in order to sharpen the discrimination between them in the data analysis. Work supported by NSF grants PHY-0970013, 1307328, and others.

  18. Spectrum simulation of rough and nanostructured targets from their 2D and 3D image by Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Schiettekatte, François; Chicoine, Martin

    2016-03-01

    Corteo is a program that implements the Monte Carlo (MC) method to simulate ion beam analysis (IBA) spectra of several techniques by following the ion trajectories until a sufficiently large fraction of them reaches the detector to generate a spectrum. Hence, it fully accounts for effects such as multiple scattering (MS). Here, a version of Corteo is presented where the target can be a 2D or 3D image. This image can be derived from micrographs where the different compounds are identified, therefore bringing extra information into the solution of an IBA spectrum, and potentially constraining the solution significantly. The image intrinsically includes many details, such as the actual surface or interfacial roughness, or the actual shape and distribution of nanostructures. This can, for example, lead to the unambiguous identification of the stoichiometry of structures in a layer, or at least to better constraints on their composition. Because MC computes the trajectory of the ions in detail, it accurately simulates many aspects of it, such as ions coming back into the target after leaving it (re-entry), as well as going through a variety of nanostructure shapes and orientations. We show, for example, how, as the ions' angle of incidence becomes shallower than the inclination distribution of a rough surface, this process tends to make the effective roughness smaller in a comparable 1D simulation (i.e., a narrower thickness distribution in a comparable slab simulation). Also, in ordered nanostructures, target re-entry can lead to replications of a peak in a spectrum. In addition, the bitmap description of the target can be used to simulate depth profiles such as those resulting from ion implantation, diffusion, and intermixing. Other improvements to Corteo include the possibility of interpolating the cross-section in angle-energy tables, and the generation of energy-depth maps.

  19. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

  20. Monte-Carlo Application for Nondestructive Nuclear Waste Analysis

    NASA Astrophysics Data System (ADS)

    Carasco, C.; Engels, R.; Frank, M.; Furletov, S.; Furletova, J.; Genreith, C.; Havenith, A.; Kemmerling, G.; Kettler, J.; Krings, T.; Ma, J.-L.; Mauerhofer, E.; Neike, D.; Payan, E.; Perot, B.; Rossbach, M.; Schitthelm, O.; Schumann, M.; Vasquez, R.

    2014-06-01

    Radioactive waste has to undergo a process of quality checking in order to verify its conformance with national regulations prior to transport, intermediate storage and final disposal. Within the quality checking of radioactive waste packages, non-destructive assays are required to characterize their radio-toxic and chemo-toxic contents. The Institute of Energy and Climate Research - Nuclear Waste Management and Reactor Safety of the Forschungszentrum Jülich develops, in the framework of cooperations, nondestructive analytical techniques for the routine characterization of radioactive waste packages at industrial scale. During the research and development phase, Monte Carlo techniques are used to simulate the transport of particles, especially photons, electrons and neutrons, through matter and to obtain the response of detection systems. The radiological characterization of low and intermediate level radioactive waste drums is performed by segmented γ-scanning (SGS). To precisely and accurately reconstruct the isotope-specific activity content in waste drums from SGS measurements, an innovative method called SGSreco was developed. The Geant4 code was used to simulate the response of the collimated detection system for waste drums with different activity and matrix configurations. These simulations allow a far more detailed optimization, validation and benchmarking of SGSreco, since the construction of test drums covering a broad range of activity and matrix properties is time consuming and cost intensive. The MEDINA (Multi Element Detection based on Instrumental Neutron Activation) test facility was developed to identify and quantify non-radioactive elements and substances in radioactive waste drums. MEDINA is based on prompt and delayed gamma neutron activation analysis (P&DGNAA) using a 14 MeV neutron generator. MCNP simulations were carried out to study the response of the MEDINA facility in terms of gamma spectra, time dependence of the neutron energy spectrum…

  1. Estimation of Compton imager using single 3D position-sensitive LYSO scintillator: Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Lee, Taewoong; Lee, Hyounggun; Kim, Younghak; Lee, Wonho

    2017-07-01

    The performance of a Compton imager using a single three-dimensional position-sensitive LYSO scintillator detector was estimated using a Monte Carlo simulation. The Compton imager consisted of a single LYSO scintillator with a pixelized structure. The sizes of the scintillator and of each pixel were 1.3 × 1.3 × 1.3 cm^3 and 0.3 × 0.3 × 0.3 cm^3, respectively. The order of the γ-ray interactions was determined based on the deposited energies in each detector. After the determination of the interaction sequence, various types of reconstruction algorithms, such as simple back-projection, filtered back-projection, and list-mode maximum-likelihood expectation maximization (LM-MLEM), were applied and compared with each other in terms of their angular resolution and signal-to-noise ratio (SNR) for several γ-ray energies. The LM-MLEM reconstruction algorithm exhibited the best performance for Compton imaging, maintaining high angular resolution and SNR. Two 137Cs sources (662 keV) could be distinguished if they were more than 17° apart. The reconstructed Compton images showed the precise position and distribution of various radiation isotopes, which demonstrates the feasibility of monitoring nuclear materials in homeland security and radioactive waste management applications.

  2. Analysis of hysteretic spin transition and size effect in 3D spin crossover compounds investigated by Monte Carlo Entropic sampling technique in the framework of the Ising-type model

    NASA Astrophysics Data System (ADS)

    Chiruta, D.; Linares, J.; Dahoo, P. R.; Dimian, M.

    2015-02-01

    In spin crossover (SCO) systems, the shape of the hysteresis curves is closely related to the interactions between the molecules, which play an important role in the response of the system to an external parameter. The effects of short-range interactions on the different shapes of the spin transition phenomena were investigated. In this contribution we solve the corresponding Hamiltonian for a three-dimensional SCO system, taking into account short-range and long-range interactions, using a biased Monte Carlo entropic sampling technique and a semi-analytical method. We discuss the competition between the two interactions, which governs the low spin (LS) - high spin (HS) process for a three-dimensional network, and the cooperative effects. We demonstrate a strong correlation between the shape of the transition and the strength of the short-range interaction between molecules, and we identify the role of system size for SCO systems.

  3. Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy

    NASA Astrophysics Data System (ADS)

    Sharma, Sanjib

    2017-08-01

    Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/) that implements some of the algorithms and examples discussed here.
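
    For readers new to the topic, a minimal random-walk Metropolis sampler, the simplest of the MCMC algorithms such reviews cover. The 2-D Gaussian target and the step size are arbitrary illustrative choices, unrelated to the bmcmc package.

```python
import numpy as np

def log_post(x):
    return -0.5 * np.sum(x ** 2)        # stand-in log posterior: N(0, I)

rng = np.random.default_rng(1)
x = np.zeros(2)
chain = []
for _ in range(10000):
    prop = x + 0.5 * rng.standard_normal(2)          # symmetric proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop                                     # Metropolis accept
    chain.append(x.copy())
chain = np.array(chain)
print("posterior mean estimate:", chain[2000:].mean(axis=0))   # drop burn-in
```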

  4. Uncertainty analysis for fluorescence tomography with Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Reinbacher-Köstinger, Alice; Freiberger, Manuel; Scharfetter, Hermann

    2011-07-01

    Fluorescence tomography seeks to image an inaccessible fluorophore distribution inside an object, like a small animal, by injecting light at the boundary and measuring the light emitted by the fluorophore. Optical parameters (e.g. the conversion efficiency or the fluorescence life-time) of certain fluorophores depend on physiologically interesting quantities like the pH value or the oxygen concentration in the tissue, which allows functional rather than just anatomical imaging. To reconstruct the concentration and the life-time from the boundary measurements, a nonlinear inverse problem has to be solved. It is, however, difficult to estimate the uncertainty of the reconstructed parameters in the case of iterative algorithms and a large number of degrees of freedom. Uncertainties in fluorescence tomography applications arise from model inaccuracies, discretization errors, data noise and a priori errors. Thus, a Markov chain Monte Carlo (MCMC) method was used to consider all these uncertainty factors, exploiting the Bayesian formulation of conditional probabilities. A 2-D simulation experiment was carried out for a circular object with two inclusions. Both inclusions had a 2-D Gaussian distribution of the concentration and a constant life-time within a representative area of the inclusion. Forward calculations were done with the diffusion approximation of Boltzmann's transport equation. The reconstruction results show that the percent estimation error of the life-time parameter is approximately a factor of 10 lower than that of the concentration. This finding suggests that life-time imaging may provide more accurate information than concentration imaging only. The results must be interpreted with caution, however, because the chosen simulation setup represents a special case, and a more detailed analysis remains to be done in the future to clarify whether the findings can be generalized.

  5. Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Moser, M.; Reichart, P.; Bergmaier, A.; Greubel, C.; Schiettekatte, F.; Dollinger, G.

    2016-03-01

    Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton-proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their path through the sample. To first order, there is no angular dependence due to elastic scattering. To second order, a path length effect due to different energy losses on the paths of the two protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be de-convoluted depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically to first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) in order to calculate the depth of a coincidence event depending on the scattering angle. The code takes the individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with an accuracy in depth of about 1% of the sample thickness.

  6. Deterministic sensitivity analysis for first-order Monte Carlo simulations: a technical note.

    PubMed

    Geisler, Benjamin P; Siebert, Uwe; Gazelle, G Scott; Cohen, David J; Göhler, Alexander

    2009-01-01

    Monte Carlo microsimulations have gained increasing popularity in decision-analytic modeling because they can incorporate discrete events. Although deterministic sensitivity analyses are essential for the interpretation of results, it remains difficult to combine them with Monte Carlo simulations in standard modeling packages without an enormous time investment. Our purpose was to facilitate one-way deterministic sensitivity analysis of TreeAge Markov state-transition models requiring first-order Monte Carlo simulations. Using TreeAge Pro Suite 2007 and Microsoft Visual Basic for EXCEL, we constructed a generic script that enables one to perform automated deterministic one-way sensitivity analyses in EXCEL employing microsimulation models. In addition, we constructed a generic EXCEL worksheet that allows the script to be used with little programming knowledge. Linking TreeAge Pro Suite 2007 and Visual Basic enables the performance of deterministic sensitivity analyses of first-order Monte Carlo simulations. There are other potentially interesting applications for automated analysis.
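
    A minimal sketch of the pattern being automated: a one-way deterministic sensitivity analysis wrapped around a first-order Monte Carlo microsimulation, here as a plain Python loop rather than TreeAge driven from Excel/VBA. The two-state toy model, parameter name and ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def microsimulation(p_event, n_patients=2000, n_cycles=40, cost_event=1000.0):
    """Mean cost per patient of a toy state-transition microsimulation."""
    total = 0.0
    for _ in range(n_patients):          # first-order MC: one patient at a time
        for _ in range(n_cycles):
            if rng.random() < p_event:   # discrete event this cycle
                total += cost_event
    return total / n_patients

# One-way DSA: sweep one parameter over its plausible range, all else fixed.
for p in np.linspace(0.01, 0.05, 5):
    print(f"p_event={p:.3f}  mean cost={microsimulation(p):8.1f}")
```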

  7. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    DOE PAGES

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-10

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves, yielding more accurate total energies, though not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.

  8. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-01

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. We estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc-Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves, yielding more accurate total energies, though not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc-Zn. The recently generated pseudopotentials of Krogel et al. [Phys. Rev. B 93, 075143 (2016)] reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. [J. Chem. Phys. 129, 164115 (2008)] by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. For the Sc-Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.

  9. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    SciTech Connect

    Pecchia, M.; D'Auria, F.; Mazzantini, O.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of obliquely inserted control rods on the neutron flux in order to validate the RELAP5-3D/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)

  10. Energy-consistent small-core pseudopotentials for 3d-transition metals adapted to quantum Monte Carlo calculations.

    PubMed

    Burkatzki, M; Filippi, Claudia; Dolg, M

    2008-10-28

    We extend our recently published set of energy-consistent scalar-relativistic Hartree-Fock pseudopotentials by the 3d-transition metal elements, scandium through zinc. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The pseudopotentials and the accompanying basis sets (VnZ with n=T,Q) are given in standard Gaussian representation and their parameter sets are presented. Coupled cluster, configuration interaction, and QMC studies are carried out for the scandium and titanium atoms and their oxides, demonstrating the good performance of the pseudopotentials. Even though the choice of pseudopotential form is motivated by QMC, these pseudopotentials can also be employed in other quantum chemical approaches.

  11. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
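
    A minimal sketch of the fault-tree/Monte Carlo combination described above: discrete component-failure states are enumerated with their probabilities (the fault-tree part), and the conditional risk within each state is estimated by Monte Carlo, so samples are not spent almost entirely on the common no-failure case. All probabilities and the toy encounter model are invented.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
p_fail = {"transponder": 1e-3, "pilot_visual": 0.1, "conflict_det": 1e-2}

def conditional_risk(failed, n=20000):
    """MC estimate of P(near collision | failure state) for a toy model."""
    miss = rng.normal(800.0, 200.0, n)          # horizontal miss distance, m
    if "conflict_det" in failed:
        miss -= 150.0                           # degraded automated resolution
    if "pilot_visual" in failed:
        miss -= 100.0                           # no visual avoidance
    if "transponder" in failed:
        miss -= 200.0                           # surveillance gap
    return np.mean(miss < 50.0)

names = list(p_fail)
risk = 0.0
for r in range(len(names) + 1):
    for subset in combinations(names, r):       # fault-tree enumeration
        w = 1.0
        for nm in names:
            w *= p_fail[nm] if nm in subset else 1.0 - p_fail[nm]
        risk += w * conditional_risk(set(subset))
print("estimated risk per encounter:", risk)
```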

  12. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    NASA Astrophysics Data System (ADS)

    Bergmann, Ryan

    Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to implement a continuous energy Monte Carlo neutron transport algorithm efficiently on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in both criticality and fixed source modes. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
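
    A minimal sketch of the remapping-vector idea in numpy terms: particle arrays stay in place, and an index vector sorted by next reaction type gives each thread a contiguous block of same-type work. Array names and reaction codes are illustrative, not WARP's data structures.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16
energy = rng.uniform(0.025, 2.0e6, n)          # particle energies (eV), unmoved
reaction = rng.integers(0, 3, n)               # 0=scatter, 1=capture, 2=fission

remap = np.argsort(reaction, kind="stable")    # stand-in for the parallel radix sort
for kind, name in enumerate(["scatter", "capture", "fission"]):
    block = remap[reaction[remap] == kind]     # contiguous same-type indices
    # On a GPU, a kernel for this reaction would be launched over `block`;
    # the particle arrays themselves are never reordered.
    print(f"{name:8s} handles particles {block}")
```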

  13. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
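
    A minimal sketch of the null-space Monte Carlo idea that pyNSMC automates: random parameter realizations are projected so that they differ from the calibrated parameter set only within the (approximate) null space of the model Jacobian, and therefore still reproduce the observations. The random Jacobian below stands in for real model sensitivities.

```python
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_par = 30, 100                    # under-determined inverse problem
J = rng.standard_normal((n_obs, n_par))   # Jacobian (sensitivity) matrix
p_cal = rng.standard_normal(n_par)        # calibrated parameter vector

U, s, Vt = np.linalg.svd(J)               # null space = rows of Vt beyond rank
V_null = Vt[n_obs:].T                     # (n_par, n_par - n_obs) basis

ensemble = []
for _ in range(100):
    draw = rng.standard_normal(n_par)                # prior-like realization
    delta = V_null @ (V_null.T @ (draw - p_cal))     # keep null-space part only
    ensemble.append(p_cal + delta)
ensemble = np.array(ensemble)

# All realizations fit the data as well as the calibrated model (to round-off).
print("max simulated-data change:", np.abs(J @ (ensemble - p_cal).T).max())
```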

  14. Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations

    NASA Astrophysics Data System (ADS)

    Malladi, Mayank

    Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into a bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering, while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for the description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer, which accounts for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. A gray medium (frequency-independent phonons) is often assumed in the numerical solution of the BTE using conventional methods such as the finite volume and discrete ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique, which is more convenient and efficient when a non-gray medium (frequency-dependent phonons) is considered. In the MC simulation, phonons are displaced inside the computational domain under the various boundary conditions and scattering effects. In this work, under the relaxation time approximation, thermal transport in the nanocomposites are

  15. Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis

    SciTech Connect

    Heo, W.; Kim, W.; Kim, Y.; Yun, S.

    2013-07-01

    A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross section data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross section data were generated using MCNP5. The keff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of keff, the 9-group MCNP5/DIF3D calculation has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
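
    A minimal sketch of the flux-weighted cross-section collapsing that turns continuous-energy Monte Carlo tallies into few-group constants for a deterministic code like DIF3D. The fine-group fluxes and cross sections are random placeholders; only the 9-group count echoes the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_fine, n_coarse = 180, 9
phi = rng.uniform(0.1, 1.0, n_fine)      # fine-group scalar flux (tally)
sigma = rng.uniform(0.5, 5.0, n_fine)    # fine-group cross section (1/cm)

edges = np.linspace(0, n_fine, n_coarse + 1).astype(int)
sigma_g = np.array([
    np.sum(phi[a:b] * sigma[a:b]) / np.sum(phi[a:b])   # <sigma*phi> / <phi>
    for a, b in zip(edges[:-1], edges[1:])
])
print("collapsed 9-group cross sections:", sigma_g.round(3))
```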

  16. Analysis of real-time networks with monte carlo methods

    NASA Astrophysics Data System (ADS)

    Mauclair, C.; Durrieu, G.

    2013-12-01

    Communication networks in embedded systems are becoming ever larger and more complex. A better understanding of the dynamics of these networks is necessary to use them at their best and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, due to the rarity of the situations that lead to worst-case scenarios. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.
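
    A minimal sketch of the Monte Carlo idea applied to an asynchronous network: sample random scheduler phases at each hop and examine the empirical end-to-end delay distribution, whose extreme tail approaches but rarely reaches the analytic worst case. Hop count, period and service time are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n_hops, period, service = 5, 10.0, 1.5        # ms, illustrative values
n_runs = 100000

# At each hop the frame waits for an unsynchronized cycle (uniform phase),
# then is serviced; the end-to-end delay is the sum over hops.
wait = rng.uniform(0.0, period, (n_runs, n_hops))
delay = (wait + service).sum(axis=1)

print("mean delay       :", delay.mean())
print("99.999% quantile :", np.quantile(delay, 0.99999))
print("analytic worst   :", n_hops * (period + service))
```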

  17. Active neutron multiplicity analysis and Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Krick, M. S.; Ensslin, N.; Langner, D. G.; Miller, M. C.; Siebelist, R.; Stewart, J. E.; Ceo, R. N.; May, P. K.; Collins, L. L., Jr.

    Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined.

  18. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis

    SciTech Connect

    William R. Martin; John C. Lee

    2009-12-30

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  19. Monte Carlo entropic sampling applied to Ising-like model for 2D and 3D systems

    NASA Astrophysics Data System (ADS)

    Jureschi, C. M.; Linares, J.; Dahoo, P. R.; Alayli, Y.

    2016-08-01

    In this paper we present Monte Carlo entropic sampling (MCES) applied to an Ising-like model for 2D and 3D systems in order to show the influence of the interaction of the edge molecules of the system with their local environment. We show that, as for 1D and 2D spin crossover (SCO) systems, the origin of multi-step transitions in 3D SCO systems is the interaction of the edge molecules with their local environment, together with the short- and long-range interactions. Another important result worth noting is the coexistence of step transitions with and without hysteresis. By increasing the value of the edge interaction, L, the transition is shifted to lower temperatures: this means that the role of the edge interaction is equivalent to an applied negative pressure, because the edge interaction favours the HS state while an applied pressure favours the LS state. We also analyse, in this contribution, the role of the short-range interaction J and the long-range interaction G with respect to the environment interaction L.
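
    A minimal sketch of the Ising-like SCO model, sampled here with plain Metropolis dynamics on a small open-boundary 2D lattice for brevity (the paper uses the entropic sampling technique and 3D systems). σ = +1 is high spin, σ = -1 low spin; the values of Δ, ln g, J, G and L are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(6)
N, J, G, L = 16, 20.0, 10.0, 15.0      # size, short/long-range, edge couplings
delta, ln_g = 600.0, 5.0               # ligand-field gap and degeneracy (k_B = 1)

def sweep(s, T):
    m = s.mean()                       # mean-field treatment of long range
    for _ in range(s.size):
        i, j = rng.integers(0, N, 2)
        nb = 0.0                       # open boundaries: edges have fewer neighbours
        if i > 0: nb += s[i - 1, j]
        if i < N - 1: nb += s[i + 1, j]
        if j > 0: nb += s[i, j - 1]
        if j < N - 1: nb += s[i, j + 1]
        h = -(delta - T * ln_g) / 2 + J * nb + G * m
        if i in (0, N - 1) or j in (0, N - 1):
            h += L                     # edge molecules feel the environment
        dE = 2.0 * s[i, j] * h         # energy change of flipping this spin
        if dE < 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]

s = -np.ones((N, N))                   # start fully low spin
for T in range(100, 401, 30):          # heating branch of the transition
    for _ in range(200):
        sweep(s, float(T))
    print(f"T={T:3d}  HS fraction = {(s.mean() + 1) / 2:.2f}")
```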

  20. Scaling/LER study of Si GAA nanowire FET using 3D finite element Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Elmessary, Muhammad A.; Nagy, Daniel; Aldegunde, Manuel; Seoane, Natalia; Indalecio, Guillermo; Lindberg, Jari; Dettmer, Wulf; Perić, Djordje; García-Loureiro, Antonio J.; Kalna, Karol

    2017-02-01

    A 3D Finite Element (FE) Monte Carlo (MC) simulation toolbox incorporating 2D Schrödinger equation quantum corrections is employed to simulate the ID-VG characteristics of a 22 nm gate length gate-all-around (GAA) Si nanowire (NW) FET, demonstrating excellent agreement with experimental data at both low and high drain biases. We then scale the Si GAA NW according to the ITRS specifications to a gate length of 10 nm, predicting that the NW FET will deliver the required on-current of above 1 mA/μm and a superior electrostatic integrity, with a nearly ideal sub-threshold slope of 68 mV/dec and a DIBL of 39 mV/V. In addition, we use a calibrated 3D FE quantum corrected drift-diffusion (DD) toolbox to investigate the effects of NW line-edge roughness (LER) induced variability on the sub-threshold characteristics (threshold voltage (VT), OFF-current (IOFF), sub-threshold slope (SS) and drain-induced-barrier-lowering (DIBL)) for the 22 nm and 10 nm gate length GAA NW FETs at low and high drain biases. We simulate variability with two LER correlation lengths (CL = 20 nm and 10 nm) and three root mean square values (RMS = 0.6, 0.7 and 0.85 nm).
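
    A minimal sketch of one common way to generate LER profiles with a prescribed RMS and correlation length, by Fourier-filtering white noise with a Gaussian autocorrelation spectrum; whether the cited toolbox uses exactly this generator is an assumption.

```python
import numpy as np

def ler_profile(n, dx, rms, corr_len, rng):
    """Edge displacement (nm) sampled every dx nm along the gate edge."""
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)
    psd = np.exp(-(k * corr_len) ** 2 / 4.0)      # PSD of a Gaussian ACF
    spec = np.fft.rfft(rng.standard_normal(n)) * np.sqrt(psd)
    prof = np.fft.irfft(spec, n)
    return prof * (rms / prof.std())              # normalize to target RMS

rng = np.random.default_rng(9)
edge = ler_profile(n=512, dx=0.2, rms=0.6, corr_len=20.0, rng=rng)
print(f"RMS = {edge.std():.2f} nm; first samples: {edge[:4].round(2)}")
```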

  1. IM3D: A parallel Monte Carlo code for efficient simulations of primary radiation displacements and damage in 3D geometry

    PubMed Central

    Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju

    2015-01-01

    SRIM-like codes have limitations in describing general 3D geometries, for modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of the Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) method for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ~10^2 times faster in serial execution and >10^4 times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the “Quick Kinchin-Pease” and “Full Cascades” options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, and the limitations of the displacements per atom (DPA) unit in quantifying radiation damage (such as its inadequacy in quantifying the degree of chemical mixing), are discussed. PMID:26658477

  2. A novel image reconstruction methodology based on inverse Monte Carlo analysis for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Kudrolli, Haris A.

    2001-04-01

    A three-dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron-emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The "Three Dimensional Stochastic Sampling" (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interactions within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates.
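
    A minimal sketch of the ML-EM update at the heart of the validation study, written with an explicit random system matrix; the point of the SS3D procedure is precisely to avoid forming this matrix, by sampling the forward and backward projections with Monte Carlo instead.

```python
import numpy as np

rng = np.random.default_rng(8)
n_pix, n_lor = 64, 256
A = rng.uniform(0.0, 1.0, (n_lor, n_pix))   # P(count in LOR | emission in pixel)
A /= A.sum(axis=0)                          # normalize detection probabilities

x_true = rng.gamma(2.0, 1.0, n_pix)         # "true" source distribution
y = rng.poisson(A @ x_true * 100)           # measured counts (scaled up)

x = np.ones(n_pix)                          # uniform initial image
sens = A.sum(axis=0)                        # sensitivity image, A^T 1
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12)    # measured / current projection
    x *= (A.T @ ratio) / sens               # multiplicative ML-EM update
x /= 100.0                                  # undo the count scaling
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```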

  3. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    PubMed Central

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2013-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well-known technique of generating a large number of samples in a Monte Carlo study and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculations for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended, and tabulated values of required sample sizes are shown for some models. PMID:23935262
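
    A minimal sketch of the Monte Carlo power approach for the simplest case, a single-mediator model with the indirect effect tested by a normal-theory (Sobel) z statistic; the paper's examples use Mplus and extend to far more complex models. Effect sizes and the sample size here are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
a, b, n, n_reps = 0.3, 0.3, 200, 2000

hits = 0
for _ in range(n_reps):
    x = rng.standard_normal(n)
    m = a * x + rng.standard_normal(n)                  # X -> M
    y = b * m + rng.standard_normal(n)                  # M -> Y
    fit_a = stats.linregress(x, m)                      # a-hat and its SE
    X = np.column_stack([np.ones(n), m, x])             # Y on M controlling X
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    se_b = np.sqrt(res[0] / (n - 3) * np.linalg.inv(X.T @ X)[1, 1])
    ab = fit_a.slope * beta[1]                          # indirect effect a*b
    se_ab = np.sqrt((fit_a.slope * se_b) ** 2 + (beta[1] * fit_a.stderr) ** 2)
    hits += abs(ab / se_ab) > 1.96                      # Sobel test, alpha = .05
print("estimated power:", hits / n_reps)
```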

  4. 3D Monte-Carlo study of toroidally discontinuous limiter SOL configurations of Aditya tokamak

    NASA Astrophysics Data System (ADS)

    Sahoo, Bibhu Prasad; Sharma, Devendra; Jha, Ratneshwar; Feng, Yühe

    2017-08-01

    The plasma-neutral transport in the scrape-off layer (SOL) region formed by toroidally discontinuous limiters deviates from the usual uniform SOL approximations when 3D effects caused by the limiter discreteness begin to dominate. In an upgraded version of the Aditya tokamak, originally having a toroidally localized poloidal ring-like limiter, the newer outboard block and inboard belt limiters are expected to have smaller connection lengths and a multiple-fold toroidal periodicity. The characteristics of plasma discharges may accordingly vary from the original observations of large diffusivity, and a net improvement and stabilization of the discharges are desired. The estimations related to 3D effects in the ring limiter plasma transport are also expected to be modified, and are updated here by predictive simulations of transport in the new block limiter configuration. A comparison between the ring limiter results and those from new simulations with the block limiter SOL shows that, for grids produced using the same core plasma equilibrium, the modified SOL plasma flows and flux components have enhanced poloidal periodicity in the block limiter case. These SOL modifications result in a reduced net recycling for equivalent edge density values. Predictions are also made about the relative level of the diffusive transport and its impact on the factors limiting the operational regime.

  5. 3D imaging using combined neutron-photon fan-beam tomography: A Monte Carlo study.

    PubMed

    Hartman, J; Yazdanpanah, A Pour; Barzilov, A; Regentova, E

    2016-05-01

    The application of combined neutron-photon tomography for 3D imaging is examined using MCNP5 simulations for objects of simple shapes and different materials. Two-dimensional transmission projections were simulated for fan-beam scans using 2.5 MeV deuterium-deuterium and 14 MeV deuterium-tritium neutron sources, and high-energy X-ray sources of 1 MeV, 6 MeV and 9 MeV. Photons enable assessment of the electron density and the related mass density, while neutrons aid in estimating the product of the density and the material-specific microscopic cross section; the ratio between the two provides the composition, while CT allows shape evaluation. Using the developed imaging technique, objects and their material compositions have been visualized.

  6. SU-E-T-35: An Investigation of the Accuracy of Cervical IMRT Dose Distribution Using 2D/3D Ionization Chamber Arrays System and Monte Carlo Simulation

    SciTech Connect

    Zhang, Y; Yang, J; Liu, H; Liu, D

    2014-06-01

    Purpose: The purpose of this work is to compare the verification results of three solutions (2D/3D ionization chamber array measurements and Monte Carlo simulation); the results will help make a clinical decision as to how to perform our cervical IMRT verification. Methods: Seven cervical cases were planned with Pinnacle 8.0m to meet the clinical acceptance criteria. The plans were recalculated in the Matrixx and Delta4 phantoms with the actual plan parameters. The plans were also recalculated by Monte Carlo, using the leaf sequences and MUs of the individual plans of every patient and of the Matrixx and Delta4 phantoms. All plans for the Matrixx and Delta4 phantoms were delivered and measured. The dose distribution of the iso slice, the dose profiles and the gamma maps of every beam were used to evaluate the agreement. Dose-volume histograms were also compared. Results: The dose distribution of the iso slice and the dose profiles from the Pinnacle calculation were in agreement with the Monte Carlo simulation and the Matrixx and Delta4 measurements. A 95.2%/91.3% gamma pass ratio was obtained between the Matrixx/Delta4 measurements and the Pinnacle distributions within a 3 mm/3% gamma criterion. A 96.4%/95.6% gamma pass ratio was obtained between the Matrixx/Delta4 measurements and the Monte Carlo simulation within a 2 mm/2% gamma criterion, and an almost 100% pass ratio within the 3 mm/3% criterion. The DVH plots show slight differences between Pinnacle and the Delta4 measurement, as well as between Pinnacle and the Monte Carlo simulation, but excellent agreement between the Delta4 measurement and the Monte Carlo simulation. Conclusion: It was shown that Matrixx/Delta4 measurements and Monte Carlo simulation can be used very efficiently to verify cervical IMRT delivery. In terms of gamma values the pass ratio of Matrixx was a little higher; however, Delta4 showed more problem fields. The primary advantage of Delta4 is the fact that it can measure true 3D dosimetry, while Monte Carlo can simulate dose directly in patient CT images rather than only in a phantom.
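
    A minimal sketch of the global gamma analysis underlying the 3%/3 mm pass ratios quoted above, as a brute-force search over a synthetic dose grid; clinical systems use interpolation and much faster search strategies. All doses here are synthetic.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing, dd=0.03, dta=3.0, cutoff=0.10):
    """Fraction of reference points (above the dose cutoff) with gamma <= 1."""
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    d_max = ref.max()
    passed = total = 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < cutoff * d_max:
                continue                                  # skip low-dose region
            dist2 = ((yy - i) ** 2 + (xx - j) ** 2) * spacing ** 2
            dose2 = ((meas - ref[i, j]) / (dd * d_max)) ** 2
            passed += (dist2 / dta ** 2 + dose2).min() <= 1.0
            total += 1
    return passed / total

y, x = np.mgrid[0:40, 0:40]
ref = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / 200.0)    # synthetic beam
meas = 1.02 * ref + 0.005                                 # slightly perturbed copy
print("gamma pass rate:", gamma_pass_rate(ref, meas, spacing=1.0))
```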

  7. Quantification of stochastic uncertainty propagation for Monte Carlo depletion methods in reactor analysis

    NASA Astrophysics Data System (ADS)

    Newell, Quentin Thomas

    The Monte Carlo method provides powerful geometric modeling capabilities for large problem domains in 3-D; therefore, the Monte Carlo method is becoming popular for 3-D fuel depletion analyses to compute quantities of interest in spent nuclear fuel, including isotopic compositions. The Monte Carlo approach has not been fully embraced due to unresolved issues concerning the effect of Monte Carlo uncertainties on the predicted results. Use of the Monte Carlo method to solve the neutron transport equation introduces stochastic uncertainty in the computed fluxes. These fluxes are used to collapse cross sections, estimate power distributions, and deplete the fuel within depletion calculations; therefore, the predicted number densities contain random uncertainties from the Monte Carlo solution. These uncertainties can be compounded in time because of the extrapolative nature of depletion and decay calculations. The objective of this research was to quantify the propagation of the stochastic flux uncertainty, introduced by the Monte Carlo method, to the number densities of the different isotopes in spent nuclear fuel over multiple depletion time steps. The research derived a formula that calculates the standard deviation in the nuclide number densities based on propagating the statistical uncertainty introduced when using coupled Monte Carlo depletion computer codes. The research was developed with the use of the TRITON/KENO sequence of the SCALE computer code. The linear uncertainty nuclide group approximation (LUNGA) method developed in this research approximated the variance of the ψN term, which is the variance in the flux shape due to uncertainty in the calculated nuclide number densities. Three different example problems were used in this research to calculate the standard deviation in the nuclide number densities using the LUNGA method. The example problems showed that the LUNGA method is capable of calculating the standard deviation of the nuclide
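
    A minimal sketch of the effect being quantified: replaying a one-nuclide depletion chain many times with a stochastically noisy flux shows how the spread in the number densities grows with the number of depletion steps. The cross section, flux and 1% noise level are invented, and this is not the LUNGA formula itself.

```python
import numpy as np

rng = np.random.default_rng(12)
sigma_a = 5e-24              # absorption cross section, cm^2
phi0 = 3e14                  # mean flux, n/cm^2/s
dt = 30 * 86400.0            # 30-day depletion step, s
rel_sd = 0.01                # 1% stochastic flux uncertainty per step

n_steps, n_reps = 12, 5000
N = np.ones(n_reps)          # normalized number-density replicas
for step in range(1, n_steps + 1):
    phi = phi0 * (1.0 + rel_sd * rng.standard_normal(n_reps))  # noisy MC flux
    N *= np.exp(-sigma_a * phi * dt)                           # depletion step
    if step % 4 == 0:
        print(f"step {step:2d}: mean N = {N.mean():.4f}, sd = {N.std():.2e}")
```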

  8. Method for Fast CT/SPECT-Based 3D Monte Carlo Absorbed Dose Computations in Internal Emitter Therapy

    NASA Astrophysics Data System (ADS)

    Wilderman, S. J.; Dewaraja, Y. K.

    2007-02-01

    The DPM (Dose Planning Method) Monte Carlo electron and photon transport program, designed for fast computation of radiation absorbed dose in external beam radiotherapy, has been adapted to the calculation of absorbed dose in patient-specific internal emitter therapy. Because both its photon and electron transport mechanics algorithms have been optimized for fast computation in 3D voxelized geometries (in particular, those derived from CT scans), DPM is well suited for performing patient-specific absorbed dose calculations in internal emitter therapy. In the updated version of DPM developed for the current work, the necessary inputs are a patient CT image, a registered SPECT image, and any number of registered masks defining regions of interest. DPM has been benchmarked for internal emitter therapy applications by comparing computed absorbed fractions for a variety of organs in a Zubal phantom with reference results from the Medical Internal Radiation Dose (MIRD) Committee standards. In addition, the beta decay source algorithm and the photon tracking algorithm of DPM have been further benchmarked by comparison to experimental data. This paper presents a description of the program, the results of the benchmark studies, and some sample computations using patient data from radioimmunotherapy studies using 131I.

  9. NEPHTIS: 2D/3D validation elements using MCNP4c and TRIPOLI4 Monte-Carlo codes

    SciTech Connect

    Courau, T.; Girardi, E.

    2006-07-01

    High Temperature Reactors (HTRs) appear to be a promising concept for the next generation of nuclear power applications. The CEA, in collaboration with AREVA-NP and EDF, is developing a core modeling tool dedicated to the prismatic block-type reactor. NEPHTIS (Neutronics Process for HTR Innovating System) is a deterministic code system based on a standard two-step transport-diffusion approach (APOLLO2/CRONOS2). Validation of such deterministic schemes usually relies on Monte-Carlo (MC) codes used as a reference. However, when dealing with large HTR cores, the fission source stabilization is rather poor with MC codes. In spite of this, it is shown in this paper that MC simulations may be used as a reference for a wide range of configurations. The first part of the paper is devoted to 2D and 3D MC calculations of an HTR core with control devices. Comparisons between the MCNP4c and TRIPOLI4 MC codes are performed and show very consistent results. Finally, the last part of the paper is devoted to the code-to-code validation of the NEPHTIS deterministic scheme. (authors)

  10. 3D polymer gel dosimetry and Geant4 Monte Carlo characterization of novel needle based X-ray source

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Sozontov, E.; Safronov, V.; Gutman, G.; Strumban, E.; Jiang, Q.; Li, S.

    2010-11-01

    In recent years, there have been a few attempts to develop low energy x-ray radiation sources alternative to the conventional radioisotopes used in brachytherapy. So far, all efforts have been centered around the intent to design an interstitial miniaturized x-ray tube. Though direct irradiation of tumors looks very promising, the known insertable miniature x-ray tubes have many limitations: (a) difficulties with focusing and steering the electron beam to the target; (b) the necessity to cool the target to increase x-ray production efficiency; (c) the impracticability of reducing the diameter of the miniaturized x-ray tube below 4 mm (the requirement to decrease the diameter of the x-ray tube and the need to have a cooling system for the target are mutually exclusive); and (d) significant limitations in changing the shape and energy of the emitted radiation. The specific aim of this study is to demonstrate the feasibility of a new concept for an insertable low-energy needle x-ray device, based on simulation with the Geant4 Monte Carlo code, and to measure the dose rate distribution for low energy (17.5 keV) x-ray radiation with 3D polymer gel dosimetry.

  11. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    SciTech Connect

    Li, Ming; Kang, Zhan; Huang, Xiaobo

    2015-08-28

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of effective hydrogen storage and release methods. Carbon nanotubes (CNTs) were viewed as one of the potential hydrogen containers, but it has been shown that pure CNTs cannot attain the desired target capacity for hydrogen storage. In this paper, we present a numerical study of the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and the geometrical parameters, and that the maximum hydrogen uptake can be achieved by a 3D nanostructure with an optimal configuration and doping ratio obtained through design optimization techniques. For hydrogen desorption, a mechanical-deformation-driven hydrogen-release approach is proposed. Compared with temperature/pressure change-induced hydrogen desorption methods, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression but without structural failure observed. The approach is also reversible, since the mechanical deformation in Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.
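
    A minimal sketch of grand canonical Monte Carlo in its simplest setting, a non-interacting lattice-gas adsorption model with the standard insertion/deletion acceptance rules; the site energy and chemical potential are invented, and studies like this one use atomistic potentials rather than a lattice.

```python
import numpy as np

rng = np.random.default_rng(13)
n_sites, eps, mu, kT = 400, -4.0, -5.0, 1.0   # adsorption energy, chem. potential
occ = np.zeros(n_sites, dtype=bool)

for _ in range(200000):
    i = rng.integers(n_sites)
    if not occ[i]:                            # attempt insertion (dE = eps)
        if rng.random() < min(1.0, np.exp((mu - eps) / kT)):
            occ[i] = True
    else:                                     # attempt deletion (dE = -eps)
        if rng.random() < min(1.0, np.exp(-(mu - eps) / kT)):
            occ[i] = False

theta = occ.mean()
langmuir = 1.0 / (1.0 + np.exp((eps - mu) / kT))  # exact isotherm for this model
print(f"simulated coverage {theta:.3f} vs Langmuir {langmuir:.3f}")
```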

  12. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Li, Ming; Huang, Xiaobo; Kang, Zhan

    2015-08-01

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of effective hydrogen storage and release methods. Carbon nanotubes (CNTs) were viewed as one of the potential hydrogen containers, but it has been shown that pure CNTs cannot attain the desired target capacity for hydrogen storage. In this paper, we present a numerical study of the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and the geometrical parameters, and that the maximum hydrogen uptake can be achieved by a 3D nanostructure with an optimal configuration and doping ratio obtained through design optimization techniques. For hydrogen desorption, a mechanical-deformation-driven hydrogen-release approach is proposed. Compared with temperature/pressure change-induced hydrogen desorption methods, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression but without structural failure observed. The approach is also reversible, since the mechanical deformation in Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.

  13. Dynamical Monte Carlo Simulations of 3-D Galactic Systems in Axisymmetric and Triaxial Potentials

    NASA Astrophysics Data System (ADS)

    Taani, Ali; Vallejo, Juan C.

    2017-06-01

    We describe the dynamical behavior of isolated old (⩾ 1 Gyr) objects, like neutron stars (NSs). These objects are evolved under smooth, time-independent gravitational potentials, axisymmetric and with a triaxial dark halo. We analysed the geometry of the dynamics and applied the Poincaré section to compare the influence of different birth velocities. Inspection of the maximal asymptotic Lyapunov exponent (λ) shows that the dynamical behaviors of the selected orbits are nearly the same as those of regular orbits with 2 DOF, both in the axisymmetric and the triaxial case when (ϕ, qz) = (0, 0). Conversely, a few chaotic trajectories are found with a rotated triaxial halo when (ϕ, qz) = (90, 1.5). The tube orbits preserve the direction of their circulation around either the long or the short axis in the triaxial potential, even when every initial condition leads to a different orientation. The Poincaré section shows that there are 2-D invariant tori and invariant curves (islands) around stable periodic orbits that are bound to the surface of 3-D tori. The regularity of several prototypical orbits offers the means to identify the phase-space regions with localized motions and to determine their environment in different models, because they can occupy significant parts of phase space depending on the potential. This is of particular importance in Galactic dynamics.

  14. General purpose dynamic Monte Carlo with continuous energy for transient analysis

    SciTech Connect

    Sjenitzer, B. L.; Hoogenboom, J. E.

    2012-07-01

    For safety assessments transient analysis is an important tool. It can predict maximum temperatures during regular reactor operation or during an accident scenario. Despite the fact that this kind of analysis is very important, the state of the art still uses rather crude methods, like diffusion theory and point-kinetics. For reference calculations it is preferable to use the Monte Carlo method. In this paper the dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli4. Also, the method is extended for use with continuous energy. The first results of Dynamic Tripoli demonstrate that this kind of calculation is indeed accurate and the results are achieved in a reasonable amount of time. With the method implemented in Tripoli it is now possible to do an exact transient calculation in arbitrary geometry. (authors)

  15. A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.

    2013-12-01

    The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e. cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI), in the project of the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can treat a broadband solar flux calculation, including a thermal infrared emission calculation, using the k-distribution parameters of Sekiguchi and Nakajima (2008). In order to construct the 3D cloud field, we tried the following three methods: 1) stochastic clouds generated from randomized optical thickness distributions in each layer and regularly-distributed tilted clouds, 2) numerical simulations by a non-hydrostatic model with a bin cloud microphysics model, and 3) the Minimum cloud Information Deviation Profiling Method (MIDPM), as explained later. As for method 2 (the numerical modeling method), we employed numerical simulation results of Californian summer stratus clouds simulated by a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012), with horizontal (vertical) grid spacings of 100 m (20 m) and 300 m (20 m) in a domain of 30 km (x), 30 km (y), 1.5 km (z) and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of a horizontal resolution of 100 m, the regionally averaged cloud optical thickness (COT) and the standard deviation of COT were 3.0 and 4.3 for the pristine case and 8.5 and 7.4 for the polluted case, respectively. In the MIDPM method, we first construct a library of pairs of observed vertical profiles from the active sensors and collocated imager products at the nadir footprint, i.e. spectral imager radiances, cloud optical thickness (COT), effective particle radius (RE) and cloud top temperature (Tc). We then select a

  16. Advanced Mesh-Enabled Monte Carlo capability for Multi-Physics Reactor Analysis

    SciTech Connect

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout the reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations to their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in the high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate the adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  17. Discrete ordinates-Monte Carlo coupling: A comparison of techniques in NERVA radiation analysis

    NASA Technical Reports Server (NTRS)

    Lindstrom, D. G.; Normand, E.; Wilcox, A. D.

    1972-01-01

    In the radiation analysis of the NERVA nuclear rocket system, two-dimensional discrete ordinates calculations are sufficient to provide detail in the pressure vessel and reactor assembly. Other parts of the system, however, require three-dimensional Monte Carlo analyses. To use these two methods in a single analysis, a means of coupling was developed whereby the results of a discrete ordinates calculation can be used to produce source data for a Monte Carlo calculation. Several techniques for producing source detail were investigated. Results of calculations on the NERVA system are compared and limitations and advantages of the coupling techniques discussed.

  18. Adaptive Sequential Monte Carlo for Multiple Changepoint Analysis

    SciTech Connect

    Heard, Nicholas A.; Turcotte, Melissa J. M.

    2016-05-21

    Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.

  19. Adaptive Sequential Monte Carlo for Multiple Changepoint Analysis

    DOE PAGES

    Heard, Nicholas A.; Turcotte, Melissa J. M.

    2016-05-21

    Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.

  1. Cluster Analysis as a Method of Recovering Types of Intraindividual Growth Trajectories: A Monte Carlo Study.

    ERIC Educational Resources Information Center

    Dumenci, Levent; Windle, Michael

    2001-01-01

    Used Monte Carlo methods to evaluate the adequacy of cluster analysis to recover group membership based on simulated latent growth curve (LGC) models. Cluster analysis failed to recover growth subtypes adequately when the difference between growth curves was shape only. Discusses circumstances under which it was more successful. (SLD)

  2. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    ERIC Educational Resources Information Center

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  3. Development and Monte Carlo analysis of antiscatter grids for mammography.

    PubMed

    Boone, John M; Makarova, Olga V; Zyryanov, Vladislav N; Tang, Cha-Mei; Mancini, Derrick C; Moldovan, Nikolaie; Divan, Ralu

    2002-12-01

    Mammography arguably demands the highest fidelity of all x-ray imaging applications, with simultaneous requirements of exceedingly high spatial and contrast resolution. Continuing technical improvements of screen-film and digital mammography systems have led to substantial improvements in image quality, and therefore improvements in the performance of anti-scatter grids are required to keep pace with the improvements in other components of the imaging chain. The development of an air-core honeycomb (cellular) grid using x-ray lithography and electroforming techniques is described, and the production of a 60 mm x 60 mm section of grid is reported. A crossed grid was constructed with 25 µm copper septa and a period of 550 µm. Monte Carlo and numerical simulation methods were used to analyze the theoretical performance of the fabricated grid, and comparisons with other grid systems (Lorad HTC and carbon fiber interspaced grids) were made over a range of grid ratios. The results demonstrate essentially equivalent performance in terms of contrast improvement factor (CIF) and Bucky factor (BF) between Cu and Au honeycomb grids and the Lorad HTC (itself a copper honeycomb grid). Gold septa improved both CIF and BF performance in higher kVp, higher scatter geometries. The selectivity of honeycomb grids was far better than for linear grids, with a factor of approximately 3.9 improvement at a grid ratio of 5.0. It is concluded that, using the fabrication methods described, practical honeycomb grid structures could be produced for use in mammographic imaging, and that a substantial improvement in scatter rejection would be achieved using these devices.
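
    The figures of merit quoted above follow from the grid's primary transmission Tp, scatter transmission Ts, and the scatter-to-primary ratio (SPR) of the beam incident on the grid. A small sketch using the standard textbook definitions; the transmission values and SPR below are illustrative, not the paper's measurements.

      def grid_figures_of_merit(Tp, Ts, SPR):
          """Standard anti-scatter grid metrics from transmission fractions.

          Tp, Ts: primary and scatter transmission fractions of the grid.
          SPR: scatter-to-primary ratio of the beam incident on the grid.
          """
          selectivity = Tp / Ts
          cif = (1 + SPR) / (1 + SPR * Ts / Tp)    # contrast improvement factor
          bucky = (1 + SPR) / (Tp + SPR * Ts)      # dose penalty (Bucky factor)
          return selectivity, cif, bucky

      # Illustrative (not measured) numbers for a crossed grid at SPR = 1.0:
      print(grid_figures_of_merit(Tp=0.70, Ts=0.10, SPR=1.0))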

  4. A 3D Monte Carlo model of radiation affecting cells, and its application to neuronal cells and GCR irradiation

    NASA Astrophysics Data System (ADS)

    Ponomarev, Artem; Sundaresan, Alamelu; Kim, Angela; Vazquez, Marcelo E.; Guida, Peter; Kim, Myung-Hee; Cucinotta, Francis A.

    A 3D Monte Carlo model of radiation transport in matter is applied to study the effect of heavy ion radiation on human neuronal cells. Central nervous system effects, including cognitive impairment, are suspected from the heavy ion component of galactic cosmic radiation (GCR) during space missions. The model can count, for instance, the number of direct hits from ions, which have the greatest effect on the cells. For comparison, the remote hits, which are received through δ-rays from the projectile traversing space outside the volume of the cell, are also simulated and their contribution is estimated. To simulate tissue effects from irradiation, cellular matrices of neuronal cells, which were derived from confocal microscopy, were simulated in our model. To produce this realistic model of the brain tissue, image segmentation was used to identify cells in images of cell cultures. The segmented cells were inserted pixel by pixel into the modeled physical space, which represents a volume of interacting cells with periodic boundary conditions (PBCs). PBCs were used to extrapolate the model results to the macroscopic tissue structures. Specific spatial patterns of cell apoptosis are expected from GCR, as heavy ions produce concentrated damage along their trajectories. The apoptotic cell patterns were modeled based on the action cross sections for apoptosis, which were estimated from the available experimental data. The cell patterns were characterized with an autocorrelation function, whose values are higher for non-random cell patterns, and the values of the autocorrelation function were compared for X-ray and Fe ion irradiations. The autocorrelation function indicates the directionality effects present in apoptotic neuronal cells from GCR.
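
    The pattern statistic used above can be sketched in a few lines: for a binary 3D mask of apoptotic cells under periodic boundary conditions, the autocorrelation function follows from the Wiener-Khinchin theorem as the inverse FFT of the power spectrum. The random mask below is a placeholder; a GCR track would instead produce hits clustered along a line, raising the ACF along the track axis.

      import numpy as np

      rng = np.random.default_rng(1)

      # Binary mask of apoptotic cells on a 3D grid (placeholder random pattern).
      cells = (rng.random((32, 32, 32)) < 0.05).astype(float)

      # Circular (periodic-boundary) autocorrelation via Wiener-Khinchin:
      # ACF = IFFT of the power spectrum, normalized so ACF(0) = 1.
      f = np.fft.fftn(cells)
      acf = np.fft.ifftn(f * np.conj(f)).real
      acf /= acf.flat[0]

      # Directional damage shows up as slower ACF decay along the track axis.
      print(acf[1, 0, 0], acf[0, 1, 0], acf[0, 0, 1])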

  5. Commissioning Monte Carlo algorithm for robotic radiosurgery using cylindrical 3D-array with variable density inserts.

    PubMed

    Dechambre, D; Baart, V; Cucchiaro, S; Ernst, C; Jansen, N; Berkovic, P; Mievis, C; Coucke, P; Gulyban, A

    2017-01-01

    To commission the Monte Carlo (MC) algorithm-based model of the CyberKnife robotic stereotactic system (CK) and evaluate the feasibility of patient-specific QA using the ArcCHECK cylindrical 3D-array (AC) with Multiplug inserts (MP). Four configurations were used for simple beam setup and two for patient QA, replacing water-equivalent inserts by lung. For twelve collimators (5-60 mm) in simple setup, the mean (SD) differences between MC and the RayTracing algorithm (RT) in the number of points failing the 3%/1 mm gamma criteria were 1(1), 1(3), 1(2) and 1(2) for the four MP configurations. Tracking fiducials were placed within AC for patient QA. The single lung insert setup resulted in a mean 2%/2 mm gamma passing rate of 90.5% (range [74.3-95.9]) and 82.3% ([66.8-94.5]) for MC and RT respectively, and 93.5% ([86.8-98.2]) and 86.2% ([68.7-95.4]) in the presence of the largest inhomogeneities, showing significant differences (p<0.05). After evaluating the potential effects, 1.12 g/cc PMMA and 0.09 g/cc lung material assignment showed the best results. Overall, the MC-based model showed superior results compared to RT for simple and patient-specific testing, using a 2%/2 mm criterion. Results are comparable with other reported commissionings for flattening filter free (FFF) delivery. Further improvement of the MC calculation might be challenging, as Multiplan has a limited material library. The AC with Multiplug allowed for comprehensive commissioning of the CyberKnife MC algorithm and is useful for patient-specific QA for stereotactic body radiation therapy. MC calculation accuracy might be limited due to Multiplan's insufficient material library; still, results are comparable with other reported commissioning measurements using FFF beams.

  6. 3D VMAT Verification Based on Monte Carlo Log File Simulation with Experimental Feedback from Film Dosimetry.

    PubMed

    Barbeiro, A R; Ureba, A; Baeza, J A; Linares, R; Perucha, M; Jiménez-Ortega, E; Velázquez, S; Mateos, J C; Leal, A

    2016-01-01

    A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems of complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances, able to take into account the entrance fluence and the 3D dose distribution. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head and neck clinical cases, previously planned with Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test the operational feasibility of the proposed model. The proper operation of the feedback procedure was proved through the high agreement achieved between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatch between calculated control points and the detection grid in the verification process were discussed. Beyond the effect of the dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, the importance of the detector density and its location in the VMAT-specific phantom for obtaining a more reliable DVH on the patient CT was also discussed. The proposed model also showed enough robustness and efficiency to be considered as a pre-treatment verification tool.

  7. 3D VMAT Verification Based on Monte Carlo Log File Simulation with Experimental Feedback from Film Dosimetry

    PubMed Central

    Barbeiro, A. R.; Ureba, A.; Baeza, J. A.; Linares, R.; Perucha, M.; Jiménez-Ortega, E.; Velázquez, S.; Mateos, J. C.

    2016-01-01

    A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems of complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances, able to take into account the entrance fluence and the 3D dose distribution. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head and neck clinical cases, previously planned with Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test the operational feasibility of the proposed model. The proper operation of the feedback procedure was proved through the high agreement achieved between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatch between calculated control points and the detection grid in the verification process were discussed. Beyond the effect of the dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, the importance of the detector density and its location in the VMAT-specific phantom for obtaining a more reliable DVH on the patient CT was also discussed. The proposed model also showed enough robustness and efficiency to be considered as a pre-treatment verification tool.

  8. Taxometrics, Polytomous Constructs, and the Comparison Curve Fit Index: A Monte Carlo Analysis

    ERIC Educational Resources Information Center

    Walters, Glenn D.; McGrath, Robert E.; Knight, Raymond A.

    2010-01-01

    The taxometric method effectively distinguishes between dimensional (1-class) and taxonic (2-class) latent structure, but there is virtually no information on how it responds to polytomous (3-class) latent structure. A Monte Carlo analysis showed that the mean comparison curve fit index (CCFI; Ruscio, Haslam, & Ruscio, 2006) obtained with 3…

  9. Kernel Density Estimation Techniques for Monte Carlo Reactor Analysis

    NASA Astrophysics Data System (ADS)

    Burke, Timothy P.

    Kernel density estimators (KDEs) are developed to estimate neutron scalar flux and reaction rate densities in Monte Carlo neutron transport simulations of pressurized water reactor benchmark problems in continuous energy. Previous work introduced the collision and track-length KDE for estimating scalar flux in radiation transport problems as an alternative to traditional histogram tallies. However, these estimators were not developed to estimate reaction rates, and they were not tested in continuous-energy reactor physics problems. This dissertation expands upon previous work by developing KDEs that are capable of accurately estimating reaction rates in reactor physics problems. The current state of the art in KDEs is applied to estimate reaction rates in reactor physics problems, with significant bias observed at material interfaces. The Mean Free Path (MFP) KDE is introduced in order to reduce this bias, with results showing no significant bias in 1-D problems. The multivariate MFP KDE is derived and applied to 2-D benchmark problems. Results show that the multivariate MFP KDE produces results with significant variance resulting from particle events at resonance energies. The fractional MFP KDE is developed to reduce this variance. An approximation to the MFP KDE is introduced to improve computational performance of the algorithm at the cost of introducing additional bias into the estimates. A volume-average KDE is derived in order to directly compare KDE and histogram results and is used to determine the bias introduced by the approximation to the MFP KDE. A KDE is derived for cylindrical coordinates, and the cylindrical MFP KDE is derived to capture distributions in reactor pincell problems. The cylindrical MFP KDE is applied to estimate distributions on an IFBA pincell, a quarter assembly of pincells, a depleted pincell, and on an unstructured mesh representation of a pincell. The results indicate that the cylindrical MFP KDE and fractional MFP KDE are
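
    The baseline collision KDE that this work builds on can be sketched compactly: each collision contributes a kernel centered at the collision site, weighted by the particle weight over the local total cross section. The sketch below is a 1-D toy with placeholder collision data, not the dissertation's estimator; the interface bias it suffers from is exactly what the MFP KDE variants address.

      import numpy as np

      rng = np.random.default_rng(2)

      # Placeholder collision data from a 1-D slab transport run:
      # collision sites x_c, particle weights w_c, total cross section there.
      x_c = rng.uniform(0.0, 10.0, 5000)
      w_c = np.ones_like(x_c)
      sigma_t = np.full_like(x_c, 0.5)

      def collision_kde_flux(x, x_c, w_c, sigma_t, h=0.3):
          """Collision-based KDE scalar-flux estimate with a Gaussian kernel."""
          u = (x[:, None] - x_c[None, :]) / h
          kern = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
          return (w_c / sigma_t * kern).sum(axis=1) / (len(x_c) * h)

      x = np.linspace(0, 10, 50)
      phi = collision_kde_flux(x, x_c, w_c, sigma_t)
      print(phi[:5])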

  10. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the feasibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  11. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
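
    One of the topics listed above, how many runs are necessary for verification of requirements with consumer risk included, admits a compact one-sided binomial sketch. The function below is a generic illustration of that argument, not the TP's derivation (the TP's appendices give the full tables); all numbers are illustrative.

      from scipy.stats import binom

      def runs_required(p_req, confidence, failures_allowed=0):
          """Smallest n such that observing <= failures_allowed failures in n
          Monte Carlo runs demonstrates success probability >= p_req at the
          given confidence (one-sided binomial argument)."""
          n = failures_allowed + 1
          while binom.cdf(failures_allowed, n, 1 - p_req) > 1 - confidence:
              n += 1
          return n

      # Classic zero-failure case: 99.73% success at 90% confidence.
      print(runs_required(0.9973, 0.90))   # ln(0.1)/ln(0.9973) ~ 852 runs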

  12. Monte Carlo calculation of conversion coefficients for dose estimation in mammography based on a 3D detailed breast model.

    PubMed

    Wang, Wenjing; Qiu, Rui; Ren, Li; Liu, Huan; Wu, Zhen; Li, Chunyan; Niu, Yantao; Li, Junli

    2017-06-01

    At present, the Chinese specification for testing of quality control in x-ray mammography is based on a simple breast model and does not consider the glandular tissue distribution in the breast. In order to more precisely estimate the mean glandular dose (MGD) in mammography for Chinese women, a three-dimensional (3D) detailed breast model based on realistic structures in the breast and Chinese female breast parameters was built and applied in this study. To characterize the Chinese female breast, Chinese female breast parameters including breast size, compressed breast thickness (CBT), and glandular content were investigated in this study. A mathematical model with the detailed breast structures was constructed based on the Chinese female breast parameters. The mathematical model was then converted to a voxel model. The voxel model was compressed in craniocaudal (CC) view to obtain a deformation model. The compressed breast model was combined with the Chinese reference adult female whole-body voxel phantom (CRAF) to study the effects of backscatter from the female body. Monte Carlo simulations of the glandular dose in mammography were performed with Geant4. The glandular tissue dose conversion coefficients for breasts with different glandular contents (5%, 25%, 50%, 75%, and 100% glandularity) and CBTs (3 cm, 4 cm, 5 cm, and 6 cm) were calculated at various x-ray tube voltages (25 kV, 28 kV, 30 kV, 32 kV, and 35 kV) for various target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh, and W/Rh). A series of glandular tissue dose conversion coefficients for dose estimation in mammography were calculated. The conversion coefficients calculated in this study were compared with those estimated with the simple breast model. A discrepancy of 5.4-38.0% was observed. This was consistent with the results obtained from the realistic breast models in the literature. A 3D detailed breast model with realistic structures in the breast was constructed

  13. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    NASA Astrophysics Data System (ADS)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    The factors influencing the investment capacity of a power grid were analyzed, and an investment-capacity analysis model was built with depreciation cost, sales price, sales quantity, net profit, financing, and secondary-industry GDP as the model variables. Kolmogorov-Smirnov tests were carried out to obtain the probability distribution of each influence factor. Finally, the uncertainty in grid investment capacity was quantified by Monte Carlo simulation.
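
    The workflow described above, fit a distribution to each influence factor, check the fit with a Kolmogorov-Smirnov test, then propagate samples through the capacity model, can be sketched as follows. The data, the triangular price distribution, and the linear model form are all invented placeholders, not the paper's inputs.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)

      # Placeholder historical data for one influence factor (e.g. sales quantity).
      hist = rng.normal(100.0, 12.0, 200)

      # Kolmogorov-Smirnov test of a fitted normal distribution.
      mu, sd = stats.norm.fit(hist)
      print(stats.kstest(hist, "norm", args=(mu, sd)))

      # Monte Carlo propagation through an assumed (hypothetical) capacity model.
      n = 100_000
      sales = rng.normal(mu, sd, n)
      price = rng.triangular(0.40, 0.45, 0.52, n)   # assumed distribution
      deprec = rng.normal(8.0, 1.5, n)
      capacity = 0.6 * sales * price - deprec       # hypothetical model form
      print(np.percentile(capacity, [5, 50, 95]))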

  14. Performance and accuracy of criticality calculations performed using WARP – A framework for continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs

    DOE PAGES

    Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola; ...

    2017-05-01

    In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newest hardware considered, GPU platforms running WARP are between 0.8 and 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% and 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.

  15. Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Pohlmann, K.

    2016-12-01

    Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.

  16. Getting the Good Bounce: Techniques for Efficient Monte Carlo Analysis of Complex Reacting Flows

    DTIC Science & Technology

    1983-01-01

    [OCR residue from the report cover and table of contents; recoverable details: report AD-A273 287, "Getting the Good Bounce: Techniques for Efficient Monte Carlo Analysis of Complex Reacting Flows," prepared by James B. Elgin. A surviving fragment on sampling reads: "...expectation value is proper. That is, sometimes the next lower integer is selected and sometimes the next higher one, with a probability that reflects how..." (truncated).]

  17. Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission

    NASA Technical Reports Server (NTRS)

    Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  18. Monte Carlo Analysis as a Trajectory Design Driver for the Transiting Exoplanet Survey Satellite (TESS) Mission

    NASA Technical Reports Server (NTRS)

    Nickel, Craig; Parker, Joel; Dichmann, Don; Lebois, Ryan; Lutz, Stephen

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  19. A numerical analysis method for evaluating rod lenses using the Monte Carlo method.

    PubMed

    Yoshida, Shuhei; Horiuchi, Shuma; Ushiyama, Zenta; Yamamoto, Manabu

    2010-12-20

    We propose a numerical analysis method for evaluating GRIN rod lenses using the Monte Carlo method. Modulation transfer function (MTF) values obtained for a GRIN lens with this method closely match actual measurements made by conventional means. Experimentally, the MTF is measured using a square-wave chart and is then calculated from the distribution of output intensity on the chart. In contrast, the usual computational method evaluates the MTF from a spot diagram produced by an incident point light source; however, those results differ greatly from the experimental ones. We therefore developed an evaluation method that mirrors the experimental system, based on the Monte Carlo method, and verified that it matches the experimental results more closely than the conventional method.

  20. Monte Carlo analysis of uncertainty propagation in a stratospheric model. 2: Uncertainties due to reaction rates

    NASA Technical Reports Server (NTRS)

    Stolarski, R. S.; Butler, D. M.; Rundel, R. D.

    1977-01-01

    A concise stratospheric model was used in a Monte Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1 sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2 sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
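
    The sampling scheme behind this kind of study is simple to sketch: each reaction rate is perturbed by a lognormal factor, the model is rerun, and the output distribution is summarized by multiplicative high-side and low-side factors. The sketch below uses a toy log-linear stand-in for the stratospheric chemistry model; the uncertainty factors and sensitivities are assumptions, not the paper's values.

      import numpy as np

      rng = np.random.default_rng(4)

      n_cases, n_rates = 2000, 55
      f = np.full(n_rates, 1.3)    # assumed 1-sigma uncertainty factor per rate

      # Lognormal sampling: each rate is scaled by f**z with z ~ N(0, 1), so a
      # "factor of 1.3" uncertainty multiplies or divides the nominal rate.
      z = rng.standard_normal((n_cases, n_rates))
      scale = f[None, :] ** z

      def ozone_perturbation(scale):
          # Toy stand-in for the stratospheric model: a log-linear response in
          # which only the first ~10 reactions matter, echoing the abstract.
          sens = np.zeros(n_rates)
          sens[:10] = 0.2
          return 5.0 * np.exp(np.log(scale) @ sens)

      d_o3 = ozone_perturbation(scale)
      med = np.median(d_o3)
      lo, hi = np.percentile(d_o3, [15.87, 84.13])   # 1-sigma band
      print("high-side factor:", hi / med, "low-side factor:", med / lo)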

  1. SEAP: a computer program for the error analysis of a stream model, Monte Carlo methodology and program documentation

    SciTech Connect

    Carney, J.H.; Gardner, R.H.; Mankin, J.B.; O'Neill, R.V.

    1981-03-01

    The effect of uncertainties in ecological models can be systematically studied by Monte Carlo techniques to obtain the uncertainty of model predictions. The Monte Carlo procedure requires a program which generates random parameter values and obtains numerical solutions. This report documents the general procedures used for the Monte Carlo error analysis of a stream model, along with the computer programs and subroutines that have been developed to simplify this task. An example of the results is provided in the appendices with sufficient information given to adapt the methods to other models.

  2. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.

    2004-11-15

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binary Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically.
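
    A minimal sketch of the fitting step, under the assumption (as in the framework above) that the dominance ratio can be read off the largest root of the fitted AR polynomial. The synthetic AR(1) series below merely stands in for the cycle-by-cycle binned fission-source data; it is not the paper's test problem.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(5)

      # Placeholder cycle-by-cycle series: in practice this would be the binned
      # fission-source fraction in one region, recorded each MC generation.
      n = 4000
      x = np.empty(n); x[0] = 0.0
      for i in range(1, n):                 # synthetic AR(1) with rho = 0.8
          x[i] = 0.8 * x[i - 1] + rng.standard_normal()

      # Fit ARMA(2,1), following the linear-Markov / ARMA(p, p-1) result, then
      # take the dominant root of the AR characteristic polynomial.
      res = ARIMA(x, order=(2, 0, 1)).fit()
      ar_roots = np.roots(np.r_[1.0, -res.arparams])   # z^2 - a1 z - a2 = 0
      print("estimated dominance ratio ~", max(abs(ar_roots)))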

  3. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    SciTech Connect

    O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry, prior to running the transport calculation, can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the simulation is running.

  4. Monte Carlo investigation and optimization of coincidence prompt gamma-ray neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Jiaxin; Calderon, Adan; Peeples, Cody R.; Ai, Xianyun; Gardner, Robin P.

    2011-10-01

    Normal Prompt Gamma-Ray Neutron Activation Analysis (PGNAA) suffers from a large inherent noise or background. The coincidence PGNAA approach is being investigated for eliminating almost all of the interfering backgrounds and thereby significantly improving the signal-to-noise ratio (SNR). This is possible because almost all of the prompt gamma rays from elements of interest are emitted in coincidence, the main exception being hydrogen. However, it has been found previously that while the use of two normal NaI detectors greatly reduces the background, the signal is also greatly reduced, so that very little improvement in standard deviation is obtained. With the help of MCNP5, the general-purpose Monte Carlo N-Particle code, and CEARCPG, the specific-purpose Monte Carlo code for coincidence PGNAA, further optimization of the proposed coincidence system is being accomplished. The idea pursued here is the use of a large-area plastic scintillation detector as the trigger for coincidence events together with a normal large NaI detector. In this approach the detection solid angle is increased greatly, which directly increases the probability of coincidence detection. The 2D coincidence spectrum obtained can then be projected onto the axis representing the NaI detector, to overcome the drawback of the low energy resolution and photopeak intensity of the plastic scintillation detector and to utilize the overall higher coincidence counting rate. To achieve the best coincidence detection, the placement of the detectors, the sample, and the neutron-source moderator has been optimized through Monte Carlo simulation.
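
    The projection step mentioned above is just a sum of the 2D coincidence histogram over the plastic-detector axis, optionally restricted to a gate that selects true coincidences. The channel numbers and count rates below are invented for illustration only.

      import numpy as np

      rng = np.random.default_rng(6)

      # Toy 2-D coincidence spectrum: counts[nai_channel, plastic_channel].
      # A true coincidence cascade populates a localized region; accidental
      # background is spread thinly over the whole plane.
      counts = rng.poisson(0.2, size=(1024, 64))
      counts[480:520, 20:40] += rng.poisson(5.0, size=(40, 20))

      # Projection onto the NaI axis keeps the coincidence-gated background
      # suppression while recovering the NaI detector's energy resolution.
      projection = counts.sum(axis=1)

      # A coarse gate on the plastic channel further selects true coincidences.
      gated = counts[:, 20:40].sum(axis=1)
      print(projection[480:520].sum(), gated[480:520].sum())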

  5. MC21 analysis of the nuclear energy agency Monte Carlo performance benchmark problem

    SciTech Connect

    Kelly, D. J.; Sutton, T. M.; Wilson, S. C.

    2012-07-01

    Due to the steadily decreasing cost and wider availability of large scale computing platforms, there is growing interest in the prospects for the use of Monte Carlo for reactor design calculations that are currently performed using few-group diffusion theory or other low-order methods. To facilitate the monitoring of the progress being made toward the goal of practical full-core reactor design calculations using Monte Carlo, a performance benchmark has been developed and made available through the Nuclear Energy Agency. A first analysis of this benchmark using the MC21 Monte Carlo code was reported on in 2010, and several practical difficulties were highlighted. In this paper, a newer version of MC21 that addresses some of these difficulties has been applied to the benchmark. In particular, the confidence-interval-determination method has been improved to eliminate source correlation bias, and a fission-source-weighting method has been implemented to provide a more uniform distribution of statistical uncertainties. In addition, the Forward-Weighted, Consistent-Adjoint-Driven Importance Sampling methodology has been applied to the benchmark problem. Results of several analyses using these methods are presented, as well as results from a very large calculation with statistical uncertainties that approach what is needed for design applications. (authors)

  6. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry.

  7. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images

    NASA Astrophysics Data System (ADS)

    Botta, F.; Mairani, A.; Hobbs, R. F.; Vergara Gil, A.; Pacilio, M.; Parodi, K.; Cremonesi, M.; Coca Pérez, M. A.; Di Dia, A.; Ferrari, M.; Guerriero, F.; Battistoni, G.; Pedroli, G.; Paganelli, G.; Torres Aroche, L. A.; Sgouros, G.

    2013-11-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry.

  8. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images.

    PubMed

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2013-11-21

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry.

  9. Mission Command Analysis Using Monte Carlo Tree Search

    DTIC Science & Technology

    2013-06-14

    [OCR residue from the report front matter; recoverable details: a Naval Postgraduate School (NPS) study sponsored in fall 2012 by the Training and Doctrine Command Analysis Center (TRAC) Methods and Research Office (TRAC-MRO); sponsor Mr. Paul Works, TRAC Research Director; project lead MAJ Chris Marks (TRAC-MTRY); supporting analyst LTC John Alt (TRAC-MTRY).]

  10. Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories

    NASA Technical Reports Server (NTRS)

    Olds, John; Way, David

    2001-01-01

    Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approximately every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem: it begins with the input uncertainties and proceeds to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results. If the results are unacceptable, the current practice is to use an iterative, trial-and-error approach.

  11. Uncertainty optimization applied to the Monte Carlo analysis of planetary entry trajectories

    NASA Astrophysics Data System (ADS)

    Way, David Wesley

    2001-10-01

    Future robotic missions to Mars, as well as any human missions, will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Planning for these missions will depend heavily on Monte Carlo analyses to evaluate active guidance algorithms, assess the impact of off-nominal conditions, and account for uncertainty. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast output statistics. An improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively. This thesis proposes a methodology to optimize the uncertainties in the Monte Carlo analysis of spacecraft landing footprints. A metamodel is used to first write polynomial expressions for the size of the landing footprint as functions of the independent uncertainty extrema. The coefficients of the metamodel are determined by performing experiments. The metamodel is then used in a constrained optimization procedure to minimize a cost-tolerance function. First, a two-dimensional proof-of-concept problem was used to evaluate the feasibility of this optimization method. Next, the optimization method was further demonstrated on the Mars Surveyor Program 2001 Lander. The purpose of this example was to demonstrate that the methodology developed during the proof-of-concept could be scaled to solve larger, more complicated, "real world" problems. This research has shown that it is possible to control the size of the landing footprint and establish tolerances for mission uncertainties. A simplified metamodel was developed, which is enabling for realistic problems with more than just a few uncertainties. A confidence interval on
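
    A minimal sketch of the metamodel-plus-constrained-optimization idea, in two uncertainties: footprint size is represented by a quadratic polynomial in the uncertainty extrema, and the largest (cheapest) tolerances that still meet a footprint requirement are found by minimizing a cost-tolerance function. The polynomial coefficients, cost function, and requirement value are invented placeholders, not the thesis's fitted values.

      import numpy as np
      from scipy.optimize import minimize

      # Assumed metamodel coefficients; in practice these come from designed
      # Monte Carlo experiments over the uncertainty extrema u = (u1, u2).
      c = np.array([1.0, 4.0, 6.0, 3.0, 2.0, 5.0])

      def footprint(u):
          u1, u2 = u
          return c[0] + c[1]*u1 + c[2]*u2 + c[3]*u1*u2 + c[4]*u1**2 + c[5]*u2**2

      def cost(u):
          # Cost-tolerance function: tighter tolerances (smaller u) cost more.
          return 1.0 / u[0] + 2.0 / u[1]

      # Largest tolerances whose predicted footprint still meets the requirement.
      res = minimize(cost, x0=[0.5, 0.5],
                     constraints=[{"type": "ineq",
                                   "fun": lambda u: 10.0 - footprint(u)}],
                     bounds=[(1e-3, 2.0), (1e-3, 2.0)])
      print(res.x, footprint(res.x))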

  12. Microdosimetry of alpha particles for simple and 3D voxelised geometries using MCNPX and Geant4 Monte Carlo codes.

    PubMed

    Elbast, M; Saudo, A; Franck, D; Petitot, F; Desbrée, A

    2012-07-01

    Microdosimetry using Monte Carlo simulation is a suitable technique to describe the stochastic nature of energy deposition by alpha particles at the cellular level. Because of its short range, the energy imparted by this particle to the targets is highly non-uniform. Thus, to achieve accurate dosimetric results, the modelling of the geometry should be as realistic as possible. The objectives of the present study were to validate the use of the MCNPX and Geant4 Monte Carlo codes for microdosimetric studies using simple and three-dimensional voxelised geometries, and to study their limit of validity in the latter case. To that aim, the specific energy (z) deposited in the cell nucleus, the single-hit density of specific energy f1(z), and the mean specific energy were calculated. Results show good agreement with the literature for the simple geometry; the maximum percentage difference found is <6%. For the voxelised phantom, the study of the voxel size highlighted that the shape of the curve f1(z) obtained with MCNPX for voxel sizes <1 µm differs significantly from that of the non-voxelised geometry. When using Geant4, only small differences are observed whatever the voxel size. Below 1 µm, the use of Geant4 is required. However, the calculation time is 10 times higher with Geant4 than with MCNPX under the same conditions.
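
    For intuition about the quantities above, here is a back-of-envelope sketch (not an MCNPX or Geant4 calculation) of the single-hit specific energy for alpha particles crossing a spherical nucleus: chords are sampled from the mu-randomness distribution pdf(l) = l / (2R^2) by inversion, and each hit deposits LET x chord length over the target mass. The radius and the constant-LET, straight-track assumption (no straggling or delta-ray escape) are simplifications chosen for illustration.

      import numpy as np

      rng = np.random.default_rng(7)

      R = 5e-6                                  # assumed nucleus radius (m)
      mass = 1000.0 * 4/3 * np.pi * R**3        # water-density target mass (kg)
      let = 100e3 * 1.602e-19 / 1e-6            # assumed 100 keV/um LET, in J/m

      # Inverse-CDF sampling of mu-randomness chords: l = 2 R sqrt(u).
      l = 2 * R * np.sqrt(rng.random(100_000))
      z = let * l / mass                        # specific energy per hit (Gy)

      f1, edges = np.histogram(z, bins=50, density=True)   # f_1(z)
      print("mean single-hit specific energy ~", z.mean(), "Gy")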

  13. SU-C-201-06: Utility of Quantitative 3D SPECT/CT Imaging in Patient Specific Internal Dosimetry of 153-Samarium with GATE Monte Carlo Package

    SciTech Connect

    Fallahpoor, M; Abbasi, M; Sen, A; Parach, A; Kalantari, F

    2015-06-15

    Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: 1) generating quantitative 3D images of radionuclide distribution and attenuation coefficients and 2) using a reliable method for dose calculation based on the activity and attenuation maps. In this research, internal dosimetry for 153-Samarium (153-Sm) was performed using SPECT-CT images coupled with the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (gamma: 103 keV; beta: 0.81 MeV). A SPECT/CT scan was performed with the Siemens Symbia T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image-degrading factors, including body attenuation, Compton scattering, and the collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to an attenuation map. Organ borders were defined by itk-SNAP toolkit segmentation on the CT image. GATE was then used for internal dose calculation. The Specific Absorbed Fractions (SAFs) and S values were reported per the MIRD schema. Results: The results showed that the largest SAFs and S values are in osseous organs, as expected. The S value for the lung is the highest after the spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT-CT images and Monte Carlo simulation for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based methods or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning

  14. A Monte Carlo based spent fuel analysis safeguards strategy assessment

    SciTech Connect

    Fensin, Michael L; Tobin, Stephen J; Swinhoe, Martyn T; Menlove, Howard O; Sandoval, Nathan P

    2009-01-01

    …assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling-time-dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the results of the assessment will yield adequate knowledge of spent fuel analysis strategies to help the down-select process for other reactor types.

  15. Monte Carlo Neutronics and Thermal Hydraulics Analysis of Reactor Cores with Multilevel Grids

    NASA Astrophysics Data System (ADS)

    Bernnat, W.; Mattes, M.; Guilliard, N.; Lapins, J.; Zwermann, W.; Pasichnyk, I.; Velkov, K.

    2014-06-01

    Power reactors are composed of assemblies with fuel pin lattices or other repeated structures with several grid levels, which can be modeled in detail by Monte Carlo neutronics codes such as MCNP6 using the corresponding lattice options, even for large cores. Except for fresh cores at the beginning of life, there is a varying material distribution due to burnup in the different fuel pins. Additionally, for power states, the fuel and moderator temperatures and moderator densities vary according to the power distribution and cooling conditions. Therefore, a coupling of the neutronics code with a thermal hydraulics code is necessary. Depending on the level of detail of the analysis, a very large number of cells with different materials and temperatures must be considered. The assignment of different material properties to all elements of a multilevel grid is very laborious and may exceed program limits if the standard input procedure is used. Therefore, an internal assignment is used which overrides uniform input parameters. The temperature dependence of continuous-energy cross sections, probability tables for the unresolved resonance region, and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. The method is applied with MCNP6 and proven for several full-core reactor models. For the coupling of MCNP6 with thermal hydraulics, appropriate interfaces were developed for the GRS system code ATHLET for liquid coolant and the IKE thermal hydraulics code ATTICA-3D for gaseous coolant. Examples are shown for different applications: PWRs with square and hexagonal lattices, fast reactors (SFR) with hexagonal lattices, and HTRs with pebble-bed and prismatic lattices.

  16. The influence of the IMRT QA set-up error on the 2D and 3D gamma evaluation method as obtained by using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kim, Kyeong-Hyeon; Kim, Dong-Su; Kim, Tae-Ho; Kang, Seong-Hee; Cho, Min-Seok; Suh, Tae Suk

    2015-11-01

    The phantom-alignment error is one of the factors affecting delivery quality assurance (QA) accuracy in intensity-modulated radiation therapy (IMRT). Accordingly, spatial information may be used inadequately in the gamma evaluation for patient-specific IMRT QA. The influence of the phantom-alignment error on the gamma evaluation can be demonstrated experimentally by using the gamma passing rate and the gamma value. However, such experimental methods have a limitation regarding intrinsic verification of the influence of the phantom set-up error, because measuring the phantom-alignment error accurately by experiment is impossible. To overcome this limitation, we aimed to verify the effect of the phantom set-up error within the gamma evaluation formula by using a Monte Carlo simulation. Artificial phantom set-up errors were simulated, and the concept of the true point (TP) was used to represent the actual coordinates of the measurement point for the mathematical modeling of these effects on the gamma. Using dose distributions acquired from the Monte Carlo simulation, we performed gamma evaluations in 2D and 3D. The results of the gamma evaluations and the dose difference at the TP were classified to verify the degree to which the dose at the TP was reflected. The 2D and 3D gamma errors were defined by comparing gamma values between the case of the imposed phantom set-up error and the TP, in order to investigate the effect of the set-up error on the gamma value. According to the results for the gamma errors, the 3D gamma evaluation reflected the dose at the TP better than the 2D one. Moreover, the gamma passing rates were higher for 3D than for 2D, as is widely known. Thus, the 3D gamma evaluation can increase the precision of patient-specific IMRT QA by applying stringent acceptance criteria and setting a reasonable action level for the 3D gamma passing rate.
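
    The gamma evaluation itself is standard: for each reference point, take the minimum over the evaluated distribution of the combined distance-to-agreement and dose-difference metric. Below is a brute-force sketch of the standard global gamma on a regular grid (any dimensionality), with a toy 2-D example in which a 1 mm shift plays the role of the phantom set-up error; the criteria, grid, and dose shapes are illustrative, not the paper's data.

      import numpy as np

      def gamma_index(dose_eval, dose_ref, spacing, dta=2.0, dd=0.02):
          """Brute-force global gamma on a regular grid.

          dose_eval, dose_ref: dose arrays on the same grid; spacing in mm;
          dta in mm; dd as a fraction of the reference maximum.
          """
          coords = np.indices(dose_ref.shape).reshape(dose_ref.ndim, -1).T * spacing
          d_eval = dose_eval.ravel()
          d_ref = dose_ref.ravel()
          dd_abs = dd * d_ref.max()
          gam = np.empty(d_ref.size)
          for k in range(d_ref.size):
              dist2 = ((coords - coords[k])**2).sum(axis=1) / dta**2
              dose2 = (d_eval - d_ref[k])**2 / dd_abs**2
              gam[k] = np.sqrt((dist2 + dose2).min())
          return gam.reshape(dose_ref.shape)

      # Toy 2-D example: a 1 mm lateral shift mimicking a set-up error.
      x = np.linspace(-30, 30, 61)
      ref = np.exp(-(x[None, :]**2 + x[:, None]**2) / 400)
      ev = np.exp(-((x[None, :] - 1)**2 + x[:, None]**2) / 400)
      g = gamma_index(ev, ref, spacing=1.0)
      print("passing rate:", (g <= 1).mean())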

  17. Verification and validation of a parallel 3D direct simulation Monte Carlo solver for atmospheric entry applications

    NASA Astrophysics Data System (ADS)

    Nizenkov, Paul; Noeding, Peter; Konopka, Martin; Fasoulas, Stefanos

    2016-07-01

    The in-house direct simulation Monte Carlo solver PICLas, which enables parallel, three-dimensional simulations of rarefied gas flows, is verified and validated. Theoretical aspects of the method and the employed schemes are briefly discussed. Considered cases include simple reservoir simulations and complex re-entry geometries, which were selected from literature and simulated with PICLas. First, the chemistry module is verified using simple numerical and analytical solutions. Second, simulation results of the rarefied gas flow around a 70° blunted-cone, the REX Free-Flyer as well as multiple points of the re-entry trajectory of the Orion capsule are presented in terms of drag and heat flux. A comparison to experimental measurements as well as other numerical results shows an excellent agreement across the different simulation cases. An outlook on future code development and applications is given.

  18. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for (99m)Tc-hynic-Tyr(3)-octreotide Imaging.

    PubMed

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of (99m)Tc-hydrazinonicotinamide (hynic)-Tyr(3)-octreotide as a SPECT radiotracer. (99m)Tc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of (99m)Tc-hynic-Tyr(3)-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed by 4.3% on average for self-irradiation and by 69.6% on average for cross-irradiation. However, the agreement between the total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal-organ and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used, since it provides more reliable dosimetric results.
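
    The organ-dose bookkeeping behind both the GATE-based and the MIRDOSE-based results reduces to the MIRD relation D(target) = sum over sources of A~(source) x S(target <- source). A minimal sketch with invented numbers (the cumulated activities and S matrix below are purely illustrative, not patient data):

        import numpy as np

        # Hypothetical cumulated activities (MBq*h) in three source organs,
        # as would be integrated from the serial planar/SPECT scans.
        a_tilde = np.array([120.0, 45.0, 300.0])      # liver, spleen, kidneys

        # Hypothetical S matrix (mGy per MBq*h): rows = targets, columns =
        # sources; patient-specific values would come from a code like GATE.
        s_values = np.array([
            [4.0e-3, 2.0e-4, 3.0e-4],   # dose to liver
            [2.0e-4, 9.0e-3, 4.0e-4],   # dose to spleen
            [3.0e-4, 4.0e-4, 6.0e-3],   # dose to kidneys
        ])

        organ_dose = s_values @ a_tilde               # mGy per target organ
        print(dict(zip(["liver", "spleen", "kidneys"], organ_dose.round(3))))

    The self-irradiation terms sit on the diagonal; the off-diagonal (cross-irradiation) terms are exactly where the 69.6% average differences reported above enter the totals.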

  19. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for 99mTc-hynic-Tyr3-octreotide Imaging

    PubMed Central

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of 99mTc-hydrazinonicotinamide (hynic)-Tyr3-octreotide as a SPECT radiotracer. 99mTc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of 99mTc-hynic-Tyr3-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed by 4.3% on average for self-irradiation and by 69.6% on average for cross-irradiation. However, the agreement between the total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal-organ and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used, since it provides more reliable dosimetric results. PMID:27134562

  1. Monte-Carlo Analysis of the Flavour Changing Neutral Current b → sγ at BaBar

    SciTech Connect

    Smith, D.

    2001-09-01

    The main theme of this thesis is a Monte-Carlo analysis of the rare Flavour Changing Neutral Current (FCNC) decay b→sγ. The analysis develops techniques that could be applied to real data, to discriminate between signal and background events in order to make a measurement of the branching ratio of this rare decay using the BaBar detector. Also included in this thesis is a description of the BaBar detector and the work I have undertaken in the development of the electronic data acquisition system for the Electromagnetic calorimeter (EMC), a subsystem of the BaBar detector.

  2. A new approach for radiosynoviorthesis: A dose-optimized planning method based on Monte Carlo simulation and synovial measurement using 3D slicer and MRI.

    PubMed

    Torres Berdeguez, Mirta Bárbara; Thomas, Sylvia; Rafful, Patricia; Arruda Sanchez, Tiago; Medeiros Oliveira Ramos, Susie; Souza Albernaz, Marta; Vasconcellos de Sá, Lidia; Lopes de Souza, Sergio Augusto; Mas Milian, Felix; Silva, Ademir Xavier da

    2017-07-01

    Recently, there has been growing interest in a dose-planning methodology for radiosynoviorthesis (RSO) to replace the use of fixed activities. Clinical practice based on fixed activity frequently does not allow radiopharmaceutical dose optimization in patients. The aim of this paper is to propose and discuss a dose-planning methodology that combines the radiological findings of interest obtained by three-dimensional magnetic resonance imaging (3D MRI) with Monte Carlo simulation for radiosynoviorthesis treatment of hemophilic arthropathy. The parameters analyzed were the surface area of the synovial membrane (synovial size), the synovial thickness, and the joint effusion, obtained by 3D MRI of nine knees from nine patients on a SIEMENS AVANTO 1.5 T scanner using a knee coil. The 3D Slicer software performed both the semiautomatic segmentation and the quantitation of these radiological findings. A Lucite phantom imaged by 3D MRI validated the quantitation methodology. The study used the Monte Carlo N-Particle eXtended code, version 2.6, to calculate the S values required to set the injected activity that delivers a 100 Gy absorbed dose at a given synovial thickness. The radionuclides assessed were 90Y, 32P, 188Re, 186Re, 153Sm, and 177Lu, and the present study gives their effective treatment ranges. The quantitation methodology was successfully tested, with an error below 5% for different materials. The calculated S values can provide the activity to be injected into the joint, assuming no extra-articular leakage from the joint cavity. Calculation of the effective treatment range could assist the therapeutic decision, with an optimized protocol for dose prescription in RSO. Using the 3D Slicer software, this study focused on the segmentation and quantitation of radiological features such as joint effusion, synovial size, and thickness, all obtained by 3D MRI in the knees of patients with hemophilic arthropathy. The combination of synovial size and thickness with the parameters obtained by Monte Carlo
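
    Under the stated assumption of no extra-articular leakage, the injected activity follows directly from the prescribed dose and the S value: a deposit that decays completely in the joint has cumulated activity A0 * T_half / ln 2. A sketch with illustrative numbers (the S value below is hypothetical, not one of the paper's results):

        import math

        def injected_activity(target_dose_gy, s_value_gy_per_mbq_s, half_life_h):
            """Activity (MBq) delivering target_dose_gy to the synovium,
            assuming the radiopharmaceutical stays in the joint and decays
            completely: cumulated activity = A0 * T_half / ln 2."""
            tau_s = (half_life_h * 3600.0) / math.log(2.0)
            return target_dose_gy / (s_value_gy_per_mbq_s * tau_s)

        # Hypothetical: 100 Gy prescription, 90Y (T_half ~ 64.1 h), and an
        # illustrative S value of 1e-6 Gy/(MBq*s) for the chosen thickness.
        print(injected_activity(100.0, 1.0e-6, 64.1))   # ~300 MBq scale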

  3. A 3D kinetic Monte Carlo simulation study of resistive switching processes in Ni/HfO2/Si-n+-based RRAMs

    NASA Astrophysics Data System (ADS)

    Aldana, S.; García-Fernández, P.; Rodríguez-Fernández, Alberto; Romero-Zaliz, R.; González, M. B.; Jiménez-Molinos, F.; Campabadal, F.; Gómez-Campos, F.; Roldán, J. B.

    2017-08-01

    A new RRAM simulation tool based on a 3D kinetic Monte Carlo algorithm has been implemented. The redox reactions and the migration of cations are modeled taking into consideration the 3D temperature and electric potential distributions within the device dielectric at each simulation time step. The filamentary conduction is described by obtaining the percolation paths formed by metallic atoms. Ni/HfO2/Si-n+ unipolar devices have been fabricated and measured. The different experimental characteristics of the devices under study have been reproduced accurately by the simulations. The main physical variables can be extracted at any simulation time to clarify the physics behind resistive switching; in particular, the final conductive filament shape can be studied in detail.
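
    The core loop of a rejection-free kinetic Monte Carlo algorithm of this kind selects an event with probability proportional to its rate and advances the clock by an exponentially distributed increment. The sketch below is generic (the event list and rates are made up), not the simulator described in the paper:

        import math
        import random

        def kmc_step(events):
            """One rejection-free KMC step. events: list of (rate, apply_fn).
            Picks an event proportionally to its rate, applies it, and
            returns the exponentially distributed time increment."""
            total = sum(rate for rate, _ in events)
            r = random.random() * total
            acc = 0.0
            for rate, apply_fn in events:
                acc += rate
                if r < acc:
                    apply_fn()
                    break
            return -math.log(1.0 - random.random()) / total

        # Toy example: two competing processes (e.g. ion hop vs. reduction)
        state = {"hops": 0, "reductions": 0}
        events = [(1e3, lambda: state.update(hops=state["hops"] + 1)),
                  (1e1, lambda: state.update(reductions=state["reductions"] + 1))]
        t = sum(kmc_step(events) for _ in range(1000))
        print(state, t)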

  4. 3D Direct Simulation Monte Carlo Modelling of the Inner Gas Coma of Comet 67P/Churyumov-Gerasimenko: A Parameter Study

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Su, C. C.; Marschall, R.; Wu, J. S.; Rubin, M.; Lai, I. L.; Ip, W. H.; Keller, H. U.; Knollenberg, J.; Kührt, E.; Skorov, Y. V.; Thomas, N.

    2016-03-01

    Direct Simulation Monte Carlo (DSMC) is a powerful numerical method for studying rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, investigating the parameter space in simulations can be time consuming since 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified the extent to which modifications of several parameters influence the 3D flow and gas temperature fields and have attempted to establish the reliability of inferences about the initial conditions from in situ and remote sensing measurements. A large number of DSMC runs have been completed with varying input parameters. In this work, we present the simulation results and assess the sensitivity of the solutions to certain inputs. It is found that among the cases of water outgassing, the surface production rate distribution is the most influential variable for the flow field.

  5. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    NASA Technical Reports Server (NTRS)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the calculated 46% 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring, the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs, which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation
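
    Latin Hypercube Sampling, used above to build the 419 input parameter sets, stratifies each parameter into N equal-probability bins and draws exactly one value per bin, in an independent random bin order per parameter. A minimal sketch (mapping the uniform samples through each parameter's inverse CDF is left out):

        import numpy as np

        def latin_hypercube(n_samples, n_params, rng=None):
            """(n_samples, n_params) array of stratified uniform samples:
            each column visits every one of n_samples equal bins once."""
            rng = rng or np.random.default_rng()
            samples = np.empty((n_samples, n_params))
            for j in range(n_params):
                bins = rng.permutation(n_samples) + rng.random(n_samples)
                samples[:, j] = bins / n_samples
            return samples

        # e.g. 419 sets for 10 uncertain reaction-rate parameters
        sets = latin_hypercube(419, 10)
        print(sets.shape, sets.min().round(4), sets.max().round(4))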

  6. Coupled reactors analysis: New needs and advances using Monte Carlo methodology

    SciTech Connect

    Aufiero, M.; Palmiotti, G.; Salvatores, M.; Sen, S.

    2016-12-01

    Coupled reactors and the coupling features of large or heterogeneous core reactors can be investigated with the Avery theory, which allows a physics understanding of the main features of these systems. However, the complex geometries that are often encountered in association with coupled reactors require a detailed geometry description that can be easily provided by modern Monte Carlo (MC) codes. This implies a MC calculation of the coupling parameters defined by Avery and of the sensitivity coefficients that allow further detailed physics analysis. The results presented in this paper show that the MC code SERPENT has been successfully modified to meet the required capabilities.

  7. Full-Band Monte Carlo Analysis of Hot-Carrier Light Emission in GaAs

    NASA Astrophysics Data System (ADS)

    Ferretti, I.; Abramo, A.; Brunetti, R.; Jacobini, C.

    1997-11-01

    A computational analysis of light emission from hot carriers in GaAs due to direct intraband conduction-conduction (c-c) transitions is presented. The emission rates have been evaluated by means of a Full-Band Monte-Carlo simulator (FBMC). Results have been obtained for the emission rate as a function of the photon energy, for the emitted and absorbed light polarization along and perpendicular to the electric field direction. Comparison has been made with available experimental data in MESFETs.

  8. Markov chain Monte Carlo linkage analysis of a complex qualitative phenotype.

    PubMed

    Hinrichs, A; Lin, J H; Reich, T; Bierut, L; Suarez, B K

    1999-01-01

    We tested a new computer program, LOKI, that implements a reversible jump Markov chain Monte Carlo (MCMC) technique for segregation and linkage analysis. Our objective was to determine whether this software, designed for use with continuously distributed phenotypes, has any efficacy when applied to the discrete disease states of the simulated Mordor data from GAW Problem 1. Although we were able to identify the genomic location for two of the three quantitative trait loci by repeated application of the software, the MCMC sampler experienced significant mixing problems, indicating that the method, as currently formulated in LOKI, was not suitable for the discrete phenotypes in this data set.

  9. Coupled reactors analysis: New needs and advances using Monte Carlo methodology

    DOE PAGES

    Aufiero, M.; Palmiotti, G.; Salvatores, M.; ...

    2016-12-01

    Coupled reactors and the coupling features of large or heterogeneous core reactors can be investigated with the Avery theory, which allows a physics understanding of the main features of these systems. However, the complex geometries that are often encountered in association with coupled reactors require a detailed geometry description that can be easily provided by modern Monte Carlo (MC) codes. This implies a MC calculation of the coupling parameters defined by Avery and of the sensitivity coefficients that allow further detailed physics analysis. The results presented in this paper show that the MC code SERPENT has been successfully modified to meet the required capabilities.

  10. Nuclear spectroscopy for in situ soil elemental analysis: Monte Carlo simulations

    SciTech Connect

    Wielopolski L.; Doron, O.

    2012-07-01

    We developed a model to simulate a novel inelastic neutron scattering (INS) system for in situ non-destructive analysis of soil using the standard Monte Carlo N-Particle (MCNP5) transport code. The volumes from which 90%, 95%, and 99% of the total signal are detected were estimated to be 0.23 m³, 0.37 m³, and 0.79 m³, respectively. Similarly, we assessed the instrument's sampling footprint and depths. In addition, we discuss the impact of the carbon depth distribution on the sampled depth.

  11. Analysis of light propagation in highly scattering media by path-length-assigned Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ishii, Katsuhiro; Nishidate, Izumi; Iwai, Toshiaki

    2014-05-01

    Numerical analysis of optical propagation in highly scattering media is investigated when light is normally incident to the surface and re-emerges backward from the same point. This situation corresponds to practical light scattering setups, such as in optical coherence tomography. The simulation uses the path-length-assigned Monte Carlo method based on an ellipsoidal algorithm. The spatial distribution of the scattered light is determined and the dependence of its width and penetration depth on the path-length is found. The backscattered light is classified into three types, in which ballistic, snake, and diffuse photons are dominant.

  12. Frequency analysis of GaN MESFETs using full-band cellular Monte Carlo

    NASA Astrophysics Data System (ADS)

    Yamakawa, S.; Goodnick, S. M.; Branlard, J.; Saraniti, M.

    2005-05-01

    A full-band electron transport calculation in wurtzite phase GaN based on a detailed model of the electron-phonon interactions using a Cellular Monte Carlo (CMC) approach is applied to the frequency analysis of MESFETs. Realistic polar-optical phonon, impurity, piezoelectric and dislocation scattering is included in the full-band CMC simulator, which shows good agreement with measured velocity-field data. The effect of the dislocation scattering on the MESFET RF characteristics is examined as well, indicating that the computed cut-off frequency is affected by the crystal dislocation density and bias conditions.

  13. Monte Carlo Benchmark

    SciTech Connect

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  14. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  15. Comparison between Monte Carlo simulation and measurement with a 3D polymer gel dosimeter for dose distributions in biological samples

    NASA Astrophysics Data System (ADS)

    Furuta, T.; Maeyama, T.; Ishikawa, K. L.; Fukunishi, N.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Hayashi, S.

    2015-08-01

    In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning.

  16. Monte Carlo analysis of uncertainties in the Netherlands greenhouse gas emission inventory for 1990-2004

    NASA Astrophysics Data System (ADS)

    Ramírez, Andrea; de Keizer, Corry; Van der Sluijs, Jeroen P.; Olivier, Jos; Brandes, Laurens

    This paper presents an assessment of the value added by a Monte Carlo analysis of the uncertainties in the Netherlands inventory of greenhouse gases over a Tier 1 analysis. It also examines which parameters contributed the most to the total emission uncertainty and identifies areas of high priority for the further improvement of the accuracy and quality of the inventory. The Monte Carlo analysis resulted in an uncertainty in total GHG emissions of 5.4% in 1990 and 4.1% in 2004 with LUCF, and 5.3% in 1990 and 3.9% in 2004 without LUCF. Uncertainty in the trend was estimated at 4.5%. These values are of the same order of magnitude as those estimated in the Tier 1 analysis. The results show that accounting for correlation among parameters is important, and for the Netherlands inventory it has a larger impact on the uncertainty in the trend than on the uncertainty in the total GHG emissions. The main contributors to the overall uncertainty are found to be N2O emissions from agricultural soils, the N2O implied emission factor of nitric acid production, CH4 from managed solid waste disposal on land, and the implied emission factor of CH4 from manure management for cattle.
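
    The Monte Carlo (Tier 2) step amounts to sampling every source category, letting categories share sampled factors where they are correlated, and reading the spread off the total. A minimal sketch with invented categories and uncertainties (not the Netherlands inventory values):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # Hypothetical emissions (Tg CO2-eq) and relative 1-sigma
        # uncertainties for three source categories.
        means = np.array([60.0, 25.0, 15.0])
        rel_sd = np.array([0.02, 0.30, 0.50])

        # Categories 2 and 3 share an implied emission factor, so they are
        # sampled from the same underlying normal deviate (correlation).
        shared = rng.normal(0.0, 1.0, n)
        z = np.column_stack([rng.normal(0.0, 1.0, n), shared, shared])
        draws = means * (1.0 + rel_sd * z)   # lognormal distributions would
                                             # avoid negative draws in practice
        total = draws.sum(axis=1)
        print(f"+/- {100 * 1.96 * total.std() / total.mean():.1f} % (95 %)")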

  17. Wavelet-Monte Carlo Hybrid System for HLW Nuclide Migration Modeling and Sensitivity and Uncertainty Analysis

    SciTech Connect

    Nasif, Hesham; Neyama, Atsushi

    2003-02-26

    This paper presents results of an uncertainty and sensitivity analysis for the performance of the different barriers of high-level radioactive waste repositories. SUA is a tool to perform uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System model (WIRS), which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through the repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the values of the maximum release rate output in the form of time series and the values of the input variables for a set of different simulations (runs), which are realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin Hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influences on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), and the diffusion depth and water flow rate in the excavation-disturbed zone (EDZ).

  18. Monte Carlo analysis of energy dependent anisotropy of bremsstrahlung x-ray spectra

    SciTech Connect

    Kakonyi, Robert; Erdelyi, Miklos; Szabo, Gabor

    2009-09-15

    The energy-resolved emission-angle dependence of x-ray spectra was analyzed with the MCNPX (Monte Carlo N-Particle eXtended) simulator. It was shown that the spectral photon flux has a maximum at a well-defined emission angle due to the anisotropy of the bremsstrahlung process. The higher the relative photon energy, the smaller the emission angle at which this maximum occurs. The trends predicted by the Monte Carlo simulations were experimentally verified. The Monte Carlo results were compared to both the Institute of Physics and Engineering in Medicine spectra tables and the SPEKCALC V1.0 code.

  19. Modeling intermittent generation (IG) in a Monte-Carlo regional system analysis model

    SciTech Connect

    Yamayee, Z.A.

    1984-01-01

    A simulation model capable of simulating the operation of a given load/resource scenario was developed under the umbrella of PNUCC's System Analysis Committee. This model, called the System Analysis Model (SAM), employs the Monte-Carlo technique to incorporate quantifiable uncertainties. Explicit uncertainties in SAM include hydro conditions, load forecast errors, construction duration, availability of thermal units, renewable resources (wind, solar, geothermal, and biomass), cogeneration, and conservation. This paper presents an approach to modeling renewable resources, especially wind energy availability. Because wind velocity is random both at a given site and from one site to another, it is important to have a model of uncertain wind energy availability. The model starts with historical hourly wind data at each site in the area covered by the Pacific Northwest Power Act (7). Using the wind data and the machine and site characteristics, along with the Justus et al. time-series model for simulating hourly wind power, the hourly energy for each site is calculated. Assuming independence between different sites, a probability density function for each month is computed. These density functions, along with a uniformly distributed random number generator, are used to draw observed seasonal and/or monthly energy for each of the Monte-Carlo games. The monthly observed energy, along with a typical hourly shape for the month, is used to calculate the hourly observed wind energy for the hourly portion of SAM. A sample case study is presented to illustrate the approach.
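
    The monthly-energy draw described above is a standard inverse-CDF sample from an empirical histogram. A sketch with a hypothetical site distribution (bin edges and probabilities are invented):

        import numpy as np

        def sample_monthly_energy(bin_edges, bin_probs, n_games, rng=None):
            """Draw monthly wind energy for n_games Monte-Carlo games from
            an empirical histogram, inverting the cumulative distribution
            with linear interpolation inside each bin."""
            rng = rng or np.random.default_rng()
            cdf = np.concatenate([[0.0], np.cumsum(bin_probs)])
            cdf /= cdf[-1]                        # guard against rounding
            u = rng.random(n_games)
            idx = np.searchsorted(cdf, u, side="right") - 1
            frac = (u - cdf[idx]) / (cdf[idx + 1] - cdf[idx])
            return bin_edges[idx] + frac * np.diff(bin_edges)[idx]

        # Hypothetical January distribution for one site (GWh per month)
        edges = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
        probs = np.array([0.10, 0.40, 0.35, 0.15])
        print(sample_monthly_energy(edges, probs, 5))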

  20. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    SciTech Connect

    Pratama, Cecep; Meilano, Irwan; Nugraha, Andri Dian

    2015-04-24

    Slip rate is used to estimate the earthquake recurrence relationship, which has the strongest influence on the hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, i.e., a 500-year return period). The hazard curve of PGA has been investigated for Sukabumi using a PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest contribution to the hazard estimate comes from the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation (COV) of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.

  1. Enhancing backbone sampling in Monte Carlo simulations using internal coordinates normal mode analysis.

    PubMed

    Gil, Victor A; Lecina, Daniel; Grebner, Christoph; Guallar, Victor

    2016-10-15

    Normal mode methods are becoming a popular alternative for sampling the conformational landscape of proteins. In this study, we describe the implementation of an internal coordinate normal mode analysis method and its application in exploring protein flexibility by using the Monte Carlo method PELE. This new method alternates two different stages: a perturbation of the backbone through the application of torsional normal modes, and a resampling of the side chains. We have evaluated the new approach using two test systems, ubiquitin and c-Src kinase, and the differences from the original ANM method are assessed by comparing both results to reference molecular dynamics simulations. The results suggest that the sampled phase space in the internal coordinate approach is closer to the molecular dynamics phase space than the one coming from a Cartesian coordinate anisotropic network model. In addition, the new method shows a great speedup (∼5-7×), making it a good candidate for future normal mode implementations in Monte Carlo methods.

  2. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    ERIC Educational Resources Information Center

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  3. Timing resolution of scintillation-detector systems: a Monte Carlo analysis

    PubMed Central

    Choong, Woon-Seng

    2010-01-01

    Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use a Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response, and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a 2-threshold or 3-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time, and decreasing

  4. CAD-Based Monte Carlo Neutron Transport Analysis for KSTAR

    NASA Astrophysics Data System (ADS)

    Seo, Geon Ho; Choi, Sung Hoon; Shim, Hyung Jin

    2017-09-01

    The Monte Carlo (MC) neutron transport analysis for a complex nuclear system such as a fusion facility may require accurate modeling of its complicated geometry. In order to take advantage of the modeling capability of computer-aided design (CAD) systems for the MC neutronics analysis, the Seoul National University MC code, McCARD, has been augmented with a CAD-based geometry processing module by embedding the OpenCASCADE CAD kernel. In the developed module, the CAD geometry data are internally converted to a constructive solid geometry model with the help of the CAD kernel. An efficient cell-searching algorithm is devised for the void space treatment. The performance of the CAD-based McCARD calculations is tested for the Korea Superconducting Tokamak Advanced Research (KSTAR) device by comparing with results of conventional MC calculations using a text-based geometry input.

  5. A bottom collider vertex detector design, Monte-Carlo simulation and analysis package

    SciTech Connect

    Lebrun, P.

    1990-10-01

    A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design based on double-sided strip detectors is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the "golden" CP-violating mode B_d → π⁺π⁻ is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques and related software rather than physics potentials. 20 refs., 46 figs.

  6. Monte-carlo simulation of the prompt gamma neutron activation analysis system with a femtosecond laser

    NASA Astrophysics Data System (ADS)

    Shim, Hyunha; Hong, Byungsik; Lee, Kyong-Sei; Lee, Sungman; Cha, Hyungki

    2012-09-01

    The prompt gamma neutron activation analysis (PGNAA) system is a useful tool for detecting the concentrations of the various composite elements of a sample by measuring the prompt gammas that are activated by neutrons. The composition in terms of the constituent elements is essential information for the identification of the material species of any unknown object. A PGNAA system initiated by a high-power laser has been designed and optimized by using a Monte-Carlo simulation. In order to improve the signal-to-background ratio, we designed an improved neutron-shielding structure and imposed a proper time window in the analysis. In particular, the yield ratio of nitrogen to carbon in a TNT sample was investigated in detail. These simulation results demonstrate that the gamma rays from an explosive sample under a high level of background can indeed be identified.

  7. Monte Carlo simulation of prompt gamma neutron activation analysis using MCNP code.

    PubMed

    Evans, C J; Ryde, S J; Hancock, D A; al-Agel, F

    1998-01-01

    Prompt gamma neutron activation analysis (PGNAA) is the most direct method of measuring total-body nitrogen. In combination with internal hydrogen standardisation, it is possible to reduce the dependence on body habitus. The uniformity of activation and detection, however, cannot be optimised sufficiently to eliminate the dependence entirely, and so further corrections are essential. The availability of the powerful Monte Carlo code MCNP(4A) has allowed a more accurate analysis of the activation facility, and yields corrections for body habitus and superficial fat layers. The accuracy of the correction is retained as the source-to-skin distance is reduced, although the activation uniformity is thereby degraded. This allows the use of a 252Cf source with lower activity and hence reduces the running cost of the facility.

  8. Is anoxic depolarisation associated with an ADC threshold? A Markov chain Monte Carlo analysis.

    PubMed

    King, Martin D; Crowder, Martin J; Hand, David J; Harris, Neil G; Williams, Stephen R; Obrenovitch, Tihomir P; Gadian, David G

    2005-12-01

    A Bayesian nonlinear hierarchical random coefficients model was used in a reanalysis of a previously published longitudinal study of the extracellular direct current (DC)-potential and apparent diffusion coefficient (ADC) responses to focal ischaemia. The main purpose was to examine the data for evidence of an ADC threshold for anoxic depolarisation. A Markov chain Monte Carlo simulation approach was adopted. The Metropolis algorithm was used to generate three parallel Markov chains and thus obtain a sampled posterior probability distribution for each of the DC-potential and ADC model parameters, together with a number of derived parameters. The latter were used in a subsequent threshold analysis. The analysis provided no evidence indicating a consistent and reproducible ADC threshold for anoxic depolarisation.
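
    The random-walk Metropolis algorithm used to generate the parallel chains can be sketched generically; the toy one-parameter log-posterior below stands in for the hierarchical model of the study:

        import math
        import random

        def metropolis(log_post, x0, step, n_samples):
            """Random-walk Metropolis: propose a Gaussian step and accept
            with probability min(1, posterior ratio)."""
            x, lp = x0, log_post(x0)
            chain = []
            for _ in range(n_samples):
                x_new = x + random.gauss(0.0, step)
                lp_new = log_post(x_new)
                if math.log(random.random() + 1e-300) < lp_new - lp:
                    x, lp = x_new, lp_new
                chain.append(x)
            return chain

        # Three parallel chains on a toy standard-normal posterior
        chains = [metropolis(lambda t: -0.5 * t * t, x0, 1.0, 5000)
                  for x0 in (-3.0, 0.0, 3.0)]
        print([round(sum(c) / len(c), 3) for c in chains])  # means near 0

    Overlap of the parallel chains (e.g. via the Gelman-Rubin statistic) is the usual convergence check before the sampled posterior is used for a threshold analysis.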

  9. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.

  10. Monte Carlo models and analysis of galactic disk gamma-ray burst distributions

    NASA Technical Reports Server (NTRS)

    Hakkila, Jon

    1989-01-01

    Gamma-ray bursts are transient astronomical phenomena which have no quiescent counterparts in any region of the electromagnetic spectrum. Although temporal and spectral properties indicate that these events are likely energetic, their unknown spatial distribution complicates astrophysical interpretation. Monte Carlo samples of gamma-ray burst sources which belong to Galactic disk populations are created. Spatial analysis techniques are used to compare these samples to the observed distribution. From this, both quantitative and qualitative conclusions are drawn concerning the allowed luminosity and spatial distributions of the actual sample. Although the Burst and Transient Source Experiment (BATSE) on the Gamma Ray Observatory (GRO) will significantly improve knowledge of the gamma-ray burst source spatial characteristics within only a few months of launch, the analysis techniques described herein will not be superseded. Rather, they may be used with BATSE results to obtain detailed information about both the luminosity and spatial distributions of the sources.

  11. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB

  12. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    SciTech Connect

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-“coupled”- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary

  13. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    SciTech Connect

    Slattery, S. R.; Wilson, P. P. H.; Evans, T. M.

    2013-07-01

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)

  14. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    SciTech Connect

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  15. STS-1 operational flight profile. Volume 5: Descent, cycle 3. Appendix C: Monte Carlo dispersion analysis

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of three nonlinear Monte Carlo dispersion analyses for the Space Transportation System 1 Flight (STS-1) Orbiter Descent Operational Flight Profile, Cycle 3, are presented. Fifty randomly selected simulations for the end-of-mission (EOM) descent, the abort-once-around (AOA) descent targeted to the steep target line, and the AOA descent targeted to the shallow target line are analyzed. These analyses compare the flight environment with system and operational constraints on the flight environment and in some cases use simplified system models as an aid in assessing the STS-1 descent flight profile. In addition, descent flight envelopes are provided as a database for use by system specialists to determine flight readiness for STS-1. The results of these dispersion analyses supersede the results of the dispersion analysis previously documented.

  16. Converting Boundary Representation Solid Models to Half-Space Representation Models for Monte Carlo Analysis

    SciTech Connect

    Davis JE, Eddy MJ, Sutton TM, Altomari TJ

    2007-03-01

    Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state of the art in solid modeling uses a boundary representation format, in which geometry and topology are used to form the three-dimensional boundaries of the solid. The geometry representation used in these systems consists of cubic B-spline curves and surfaces, i.e., a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.

  17. Techno-economic and Monte Carlo probabilistic analysis of microalgae biofuel production system.

    PubMed

    Batan, Liaw Y; Graff, Gregory D; Bradley, Thomas H

    2016-11-01

    This study focuses on the characterization of the technical and economic feasibility of an enclosed photobioreactor microalgae system with an annual production of 37.85 million liters (10 million gallons) of biofuel. The analysis characterizes and breaks down the capital investment and operating costs and the production cost per unit of algal diesel. The economic modelling shows a total production cost of algal raw oil and diesel of $3.46 and $3.69 per liter, respectively. Additionally, the effects of co-product credits and their impact on the economic performance of the algal-to-biofuel system are discussed. The Monte Carlo methodology is used to address price and cost projections and to simulate scenarios with probabilities of financial performance and profits of the analyzed model. Different markets for the allocation of co-products produce significant shifts in the economic viability of the algal biofuel system.
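
    The probabilistic part of such a techno-economic analysis amounts to sampling the uncertain cost components and reading percentiles off the resulting unit-cost distribution. A sketch with invented triangular ranges (not the paper's inputs):

        import numpy as np

        rng = np.random.default_rng(7)
        n = 50_000

        # Hypothetical annualized figures for a 37.85 ML/yr facility.
        capital = rng.triangular(50e6, 60e6, 75e6, n)   # $/yr capital charge
        opex = rng.triangular(55e6, 70e6, 90e6, n)      # $/yr operating cost
        credit = rng.triangular(5e6, 10e6, 20e6, n)     # $/yr co-products

        cost_per_l = (capital + opex - credit) / 37.85e6
        p10, p50, p90 = np.percentile(cost_per_l, [10, 50, 90])
        print(f"P10 {p10:.2f}  P50 {p50:.2f}  P90 {p90:.2f} $/L")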

  18. The Monte Carlo code CEARCPG for coincidence prompt gamma-ray neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Han, Xiaogang; Gardner, Robin P.

    2007-10-01

    Prompt gamma-ray neutron activation analysis (PGNAA) is widely used to determine the elemental composition of bulk samples. The detection sensitivities of PGNAA are often restricted by the inherent poor signal-to-noise ratio (SNR). There are many sources of noise (background) including the natural background, neutron activation of the detector, gamma-rays associated with the neutron source and prompt gamma-rays from the structural materials of the analyzer. Results of the prompt gamma-ray coincidence technique show that it could greatly improve the SNR by removing almost all of the background interferences. The first specific Monte Carlo code (CEARCPG) for coincidence PGNAA has been developed at the Center for Engineering Application of Radioisotopes (CEAR) to explore the capabilities of this technique. Benchmark bulk sample experiments have been performed with coal, sulfur, and mercury samples and indicate that the code is accurate and will be very useful in the design of coincidence PGNAA devices.

  19. Contrast to Noise Ratio and Contrast Detail Analysis in Mammography: A Monte Carlo Study

    NASA Astrophysics Data System (ADS)

    Metaxas, V.; Delis, H.; Kalogeropoulou, C.; Zampakis, P.; Panayiotakis, G.

    2015-09-01

    The mammographic spectrum is one of the major factors affecting image quality in mammography. In this study, a Monte Carlo (MC) simulation model was used to evaluate the image quality characteristics of various mammographic spectra. The anode/filter combinations evaluated were those traditionally used in mammography, for tube voltages between 26 and 30 kVp. The imaging performance was investigated in terms of Contrast to Noise Ratio (CNR) and Contrast Detail (CD) analysis, involving human observers and a mathematical CD phantom. Soft spectra provided the best characteristics in terms of both CNR and CD scores, while the tube voltage had a limited effect. W-anode spectra filtered with k-edge filters demonstrated improved performance that was sometimes better than that of the softer x-ray spectra produced by Mo or Rh anodes. Regarding the filter material, k-edge filters showed superior performance compared to Al filters.

  20. Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount

    SciTech Connect

    Supriyadi; Srigutomo, Wahyu; Munandar, Arif

    2014-03-24

    Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic formula to take into account the uncertainties in the input parameters. The method yields a probability distribution of the potential power stored beneath the Mount Lawu geothermal area. For 10,000 iterations, the capacity of the geothermal resources is in the range of 139.30-218.24 MWe, with a most likely value of 177.77 MWe. The risk of the resource capacity exceeding 196.19 MWe is less than 10%. The power density of the prospect area covering 17 km² is 9.41 MWe/km², with a probability of 80%.
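
    The stochastic volumetric method converts distributions of reservoir properties into a distribution of electrical capacity, from which the most likely value and exceedance risks are read off. A sketch with illustrative triangular ranges (not the SNI parameter values used in the paper):

        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000

        area_km2 = rng.triangular(14.0, 17.0, 20.0, n)     # prospect area
        thick_km = rng.triangular(1.0, 1.5, 2.0, n)        # reservoir thickness
        heat_mj = rng.triangular(150.0, 250.0, 350.0, n)   # heat in place, MJ/m3
        recovery = 0.10                                    # recovery factor
        convert = 0.10                                     # heat-to-power factor
        life_s = 30 * 365.25 * 24 * 3600                   # 30-year project life

        volume_m3 = area_km2 * 1e6 * thick_km * 1e3
        power_mwe = volume_m3 * heat_mj * recovery * convert / life_s  # MJ/s = MW
        p10, p50, p90 = np.percentile(power_mwe, [10, 50, 90])
        print(f"P10 {p10:.1f}  P50 {p50:.1f}  P90 {p90:.1f} MWe")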

  1. Monte Carlo Analysis of the Commissioning Phase Maneuvers of the Soil Moisture Active Passive (SMAP) Mission

    NASA Technical Reports Server (NTRS)

    Williams, Jessica L.; Bhat, Ramachandra S.; You, Tung-Han

    2012-01-01

    The Soil Moisture Active Passive (SMAP) mission will perform soil moisture content and freeze/thaw state observations from a low-Earth orbit. The observatory is scheduled to launch in October 2014 and will perform observations from a near-polar, frozen, and sun-synchronous Science Orbit for a 3-year data collection mission. At launch, the observatory is delivered to an Injection Orbit that is biased below the Science Orbit; the spacecraft will maneuver to the Science Orbit during the mission Commissioning Phase. The delta V needed to maneuver from the Injection Orbit to the Science Orbit is computed statistically via a Monte Carlo simulation; the 99th percentile delta V (delta V99) is carried as a line item in the mission delta V budget. This paper details the simulation and analysis performed to compute this figure and the delta V99 computed per current mission parameters.

  2. A MONTE CARLO ANALYSIS OF THE VELOCITY DISPERSION OF THE GLOBULAR CLUSTER PALOMAR 14

    SciTech Connect

    Sollima, A.; Nipoti, C.; Mastrobuono Battisti, A.; Montuori, M.; Capuzzo-Dolcetta, R.

    2012-01-10

    We present the results of a detailed analysis of the projected velocity dispersion of the globular cluster Palomar 14 performed using recent high-resolution spectroscopic data and extensive Monte Carlo simulations. The comparison between the data and a set of dynamical models (differing in fraction of binaries, degree of anisotropy, mass-to-light ratio M/L, cluster orbit, and theory of gravity) shows that the observed velocity dispersion of this stellar system is well reproduced by Newtonian models with a fraction of binaries f_b < 30% and an M/L compatible with the predictions of stellar evolution models. Instead, models computed with a large fraction of binaries systematically overestimate the cluster velocity dispersion. We also show that, across the parameter space sampled by our simulations, models based on the modified Newtonian dynamics theory can be reconciled with observations only by assuming values of M/L lower than those predicted by stellar evolution models under standard assumptions.

  3. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms.

    PubMed

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound obtained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time.

  5. Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI

    SciTech Connect

    Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A.; Cordero, Raul R.

    2008-04-15

    A method for simultaneously measuring whole-field in-plane displacements, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI) and implemented with optical fibers, is presented in this paper. A set of single-mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers is utilized to illuminate the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. The fibers in each pair differ in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (the Takeda method), we can obtain quantitative data of whole-field displacements. We found the uncertainty associated with the phases by means of a Monte Carlo-based technique.
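
    The Monte Carlo uncertainty evaluation mentioned in the last sentence follows the usual GUM Supplement 1 pattern: perturb the measured intensities with their assumed noise, re-evaluate the phase estimator, and read the uncertainty off the resulting distribution. The sketch below illustrates this with a simple four-step phase-shifting estimator rather than the authors' Fourier-transform (Takeda) method; the fringe parameters and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def phase_estimate(frames):
    """Four-step phase-shifting estimator: phi = atan2(I4 - I2, I1 - I3)."""
    i1, i2, i3, i4 = frames
    return np.arctan2(i4 - i2, i1 - i3)

# hypothetical nominal fringe parameters
a, b, phi_true = 1.0, 0.6, 0.8      # background, modulation, true phase (rad)
sigma_i = 0.01                      # assumed std. dev. of intensity noise

# Monte Carlo propagation: perturb each frame, re-evaluate the phase
n_trials = 100_000
shifts = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
nominal = a + b * np.cos(phi_true + shifts)
noisy = nominal[:, None] + rng.normal(0.0, sigma_i, (4, n_trials))
phi = phase_estimate(noisy)

print(f"mean phase   : {phi.mean():.5f} rad (true {phi_true})")
print(f"std. uncert. : {phi.std(ddof=1):.5f} rad")
print("95% interval :", np.percentile(phi, [2.5, 97.5]))
```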

  6. Monte Carlo analysis of electron-positron pair creation by powerful laser-ion impact

    SciTech Connect

    Kaminski, J. Z.; Krajewska, K.; Ehlotzky, F.

    2006-09-15

    We consider electron-positron pair creation by the impact of very powerful laser pulses with highly charged ions. In contrast to our foregoing work with rather limited angular configurations of pair creation, we extend these calculations to even higher laser intensities, and we use the Monte Carlo method to numerically analyze the rates of pair creation for arbitrary angular distributions. We also evaluate the intensity dependence of the total rates of pair creation. Thus we demonstrate that our laser-induced process shows stabilization, because beyond a specific laser power the total rate of pair creation decreases. Our analysis of the angular distributions of the created electron-positron pairs leads to the conclusion that pairs are predominantly emitted in the direction of laser pulse propagation.

  7. Monte Carlo Example Programs

    SciTech Connect

    Kalos, M.

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground-state energy of the hydrogen atom.
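
    For readers who want a feel for what a program like VARHATOM computes: in variational Monte Carlo the ground-state energy is estimated as the average of the local energy E_L = (H psi)/psi over configurations drawn from |psi|^2. For hydrogen with the trial wavefunction psi = exp(-alpha r) (atomic units), E_L(r) = -alpha^2/2 + (alpha - 1)/r, which equals exactly -0.5 Ha at alpha = 1. A minimal Python sketch (not the original FORTRAN):

```python
import numpy as np

rng = np.random.default_rng(0)

def vmc_hydrogen(alpha, n_steps=50_000, step=0.5):
    """Variational Monte Carlo for hydrogen (atomic units).

    Metropolis-samples |psi|^2 for psi = exp(-alpha r) and averages the
    local energy E_L(r) = -alpha^2/2 + (alpha - 1)/r. The error bar is
    naive (autocorrelation ignored); this is a teaching sketch.
    """
    pos = np.array([1.0, 0.0, 0.0])
    r = np.linalg.norm(pos)
    energies = []
    for n in range(n_steps):
        trial = pos + rng.uniform(-step, step, 3)
        r_trial = np.linalg.norm(trial)
        # acceptance ratio |psi(trial)/psi(pos)|^2 = exp(-2 alpha (r'-r))
        if rng.random() < np.exp(-2.0 * alpha * (r_trial - r)):
            pos, r = trial, r_trial
        if n > 1000:                      # crude burn-in
            energies.append(-0.5 * alpha**2 + (alpha - 1.0) / r)
    e = np.array(energies)
    return e.mean(), e.std() / np.sqrt(len(e))

for alpha in (0.8, 0.9, 1.0):
    mean, err = vmc_hydrogen(alpha)
    print(f"alpha={alpha:.1f}: E = {mean:.4f} +/- {err:.4f} Ha (exact: -0.5)")
```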

  8. Near-field performance analysis of locally-conformal perfectly matched absorbers via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ozgun, Ozlem; Kuzuoglu, Mustafa

    2007-12-01

    In the numerical solution of some boundary value problems by the finite element method (FEM), the unbounded domain must be truncated by an artificial absorbing boundary or layer to obtain a bounded computational domain. The perfectly matched layer (PML) approach is based on the truncation of the computational domain by a reflectionless artificial layer which absorbs outgoing waves regardless of their frequency and angle of incidence. In this paper, we present the near-field numerical performance analysis of our new PML approach, which we call the locally-conformal PML, using Monte Carlo simulations. The locally-conformal PML method is an easily implementable conformal PML approach to the problem of mesh truncation in the FEM. The most distinguished feature of the method is its simplicity and flexibility in designing conformal PMLs over challenging geometries, especially those with curvature discontinuities, in a straightforward way without using artificial absorbers. The method is based on a special complex coordinate transformation which is 'locally-defined' for each point inside the PML region. The method can be implemented in an existing FEM software package by simply replacing the nodal coordinates inside the PML region by their complex counterparts obtained via the complex coordinate transformation. We first introduce the analytical derivation of the locally-conformal PML method for the FEM solution of the two-dimensional scalar Helmholtz equation arising in the mathematical modeling of various steady-state (or time-harmonic) wave phenomena. Then, we carry out its numerical performance analysis by means of some Monte Carlo simulations which consider both the problem of constructing the two-dimensional Green's function and some specific cases of electromagnetic scattering.

  9. Advantage of 3D volumetric dosemeter in delivery quality assurance of dynamic arc therapy: comparison of pencil beam and Monte Carlo calculations

    PubMed Central

    Shin, H-J; Song, J H; Jung, J-Y; Kwak, Y-K; Kay, C S; Kang, Y-N; Choi, B O; Jang, H S

    2013-01-01

    Objective: To evaluate the accuracy of pencil beam calculation (PBC) and Monte Carlo calculation (MCC) for dynamic arc therapy (DAT) in a cylindrically shaped homogenous phantom, by comparing the two plans with an ion chamber, a film and a three-dimensional (3D) volumetric dosemeter. Methods: For this study, an in-house phantom was constructed, and the PBC and MCC plans for DAT were performed using iPlan® RT (BrainLAB®, Heimstetten, Germany). The A16 micro ion chamber (Standard Imaging, Middleton, WI), Gafchromic® EBT2 film (International Specialty Products, Wayne, NJ) and ArcCHECK™ (Sun Nuclear, Melbourne, FL) were used for measurements. For comparison with each plan, two-dimensional (2D) and 3D gamma analyses were performed using 3%/3 mm and 2%/2 mm criteria. Results: The difference between the PBC and MCC plans using 2D and 3D gamma analyses was found to be 7.85% and 28.8%, respectively. The ion chamber and 2D dose distribution measurements did not reveal this difference between the PBC and MCC plans. However, the 3D assessment showed a significant difference between the PBC and MCC (62.7% for PBC vs 93.4% for MCC, p = 0.034). Conclusion: Evaluation using a 3D volumetric dosemeter can be clinically useful for delivery quality assurance (QA), and the MCC should be used to achieve the most reliable dose calculation for DAT. Advances in knowledge: (1) The DAT plan calculated using the PBC has a limitation in the calculation methods, and a 3D volumetric dosemeter was found to be an adequate tool for delivery QA of DAT. (2) The MCC was superior to the PBC in terms of accuracy in dose calculation for DAT, even under homogenous conditions. PMID:24234583

  10. 3D numerical modelling of the steady-state thermal regime constrained by surface heat flow data: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Mather, B.; Moresi, L. N.; Cruden, A. R.

    2014-12-01

    Uncertainty of the lithospheric thermal regime greatly increases with depth. Measurements of temperature gradient and crustal rheology are concentrated in the upper crust, whereas the majority of the lithospheric measurements are approximated using empirical depth-dependent functions. We have applied a Monte Carlo approach to test the variation of crustal heat flow with temperature-dependent conductivity and the redistribution of heat-producing elements. The dense population of precision heat flow data in Victoria, Southeast Australia offers the ideal environment to test the variation of heat flow. A stochastically consistent anomalous zone of impossibly high Moho temperatures in the 3D model (> 900°C) correlates well with a zone of low teleseismic velocity and high electrical conductivity. This indicates that transient heat transfer has perturbed the thermal gradient and therefore a steady-state approach to 3D modelling is inappropriate in this zone. A spatial correlation between recent intraplate volcanic eruption points (< 5 Ma) and elevated Moho temperatures is a potential origin for additional latent heat in the crust.

  11. Monte Carlo - Metropolis Investigations of Shape and Matrix Effects in 2D and 3D Spin-Crossover Nanoparticles

    NASA Astrophysics Data System (ADS)

    Guerroudj, Salim; Caballero, Rafael; De Zela, Francisco; Jureschi, Catalin; Linares, Jorge; Boukheddaden, Kamel

    2016-08-01

    The Ising-like model, taking into account short- and long-range interactions as well as surface effects, is used to investigate size and shape effects on the thermal behaviour of 2D and 3D spin crossover (SCO) nanoparticles embedded in a matrix. We analyze the role of the parameter t, representing the ratio between the number of surface and volume molecules, on the unusual thermal hysteresis behaviour (appearance of the hysteresis and a re-entrance phase transition) at small scales.

  12. A Dasymetric-Based Monte Carlo Simulation Approach to the Probabilistic Analysis of Spatial Variables

    SciTech Connect

    Morton, April M; Piburn, Jesse O; McManamay, Ryan A; Nagle, Nicholas N; Stewart, Robert N

    2017-01-01

    Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.

  13. Generation of SFR few-group constants using the Monte Carlo code Serpent

    SciTech Connect

    Fridman, E.; Rachamin, R.; Shwageraus, E.

    2013-07-01

    In this study, the Serpent Monte Carlo code was used as a tool for the preparation of homogenized few-group cross sections for the nodal diffusion analysis of Sodium cooled Fast Reactor (SFR) cores. Few-group constants for two reference SFR cores were generated by Serpent and then employed by the nodal diffusion code DYN3D in 2D full core calculations. The DYN3D results were verified against the reference full-core Serpent Monte Carlo solutions. A good agreement between the reference Monte Carlo and nodal diffusion results was observed, demonstrating the feasibility of using Serpent for the generation of few-group constants for deterministic SFR analysis. (authors)

  14. The analog linear interpolation approach for Monte Carlo simulation of prompt gamma-ray neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Wenchao

    The Monte Carlo code CEARPGA I was developed to generate the elemental library spectra required for implementing the Monte Carlo Library Least-Squares algorithm for prompt gamma-ray neutron activation analysis (PGNAA). The existing big-weight problem, in which a few histories yield very large weights with very large variance, has been investigated thoroughly. It has been found that the expected-value splitting technique, a powerful variance reduction technique used in the code, is the primary cause of this problem. Two Monte Carlo simulation approaches have been investigated to eliminate the big-weight problem while still maintaining high efficiency: (1) a score importance map with batch tracking and (2) analog linear interpolation. Both approaches were demonstrated to be feasible for solving the big-weight problem. The analog linear interpolation approach was finally selected and implemented in the new CEARPGA Monte Carlo code (CEARPGA II). A comparison of the results simulated by CEARPGA I, CEARPGA II, and MCNP with the experimentally measured data shows that the big-weight problem has been successfully eliminated, the accuracy of the simulation has improved greatly, and the simulated results agree very well with the measured data. In addition, some other important improvements to this code to enhance its accuracy and efficiency have also been introduced, including: (1) adding the tracking of annihilation gamma rays outside of the detector, (2) using improved detector response functions, (3) generating individual natural background libraries, (4) adding the neutron activation backgrounds, and (5) adopting a general geometry package.

  15. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied on the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator's mistakes, physical damage, and design problems. The analytical methods used are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
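
    The quantitative core of such a study, propagating basic-event probabilities through the gate logic by Monte Carlo and checking against the minimal-cut-set calculation, is easy to sketch. The gate structure and probabilities below are illustrative placeholders, not the Tehran West Town values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical basic-event probabilities (placeholders, not plant data)
p = {"operator_error": 0.05, "mechanical_failure": 0.02,
     "design_problem": 0.01, "physical_damage": 0.005}

n = 1_000_000
draws = {k: rng.random(n) < v for k, v in p.items()}

# Illustrative gate logic: effluent BOD violation (top event) occurs when an
# operator error coincides with any hardware/design basic event (AND over OR)
hardware = draws["mechanical_failure"] | draws["design_problem"] | draws["physical_damage"]
top_event = draws["operator_error"] & hardware

# Monte Carlo estimate vs. the exact minimal-cut-set style calculation
p_mc = top_event.mean()
p_exact = p["operator_error"] * (1 - (1 - p["mechanical_failure"])
                                 * (1 - p["design_problem"])
                                 * (1 - p["physical_damage"]))
print(f"Monte Carlo: {p_mc:.2e}, analytic: {p_exact:.2e}")
```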

  16. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
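
    The Monte Carlo building block underlying such acceleration schemes is a random-walk estimate of the solution of x = Hx + b, where H comes from the splitting of the coefficient matrix. The following minimal forward Neumann-Ulam sketch (a simplified relative of such schemes, not the authors' implementation) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(7)

def neumann_ulam_solve(H, b, n_walks=10_000):
    """Forward Neumann-Ulam estimator for x = H x + b.

    With unconditional transition probability |H_ij| from i to j and
    termination probability 1 - sum_j |H_ij| in each row, the walk weight
    only accumulates the signs of the traversed entries, and
    x_i = E[ sum_m W_m * b_{X_m} ] over walks started at state i.
    """
    n = len(b)
    s = np.abs(H).sum(axis=1)
    assert s.max() < 1.0, "termination requires row sums of |H| below 1"
    x = np.zeros(n)
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            state, w = i, 1.0
            total += b[state]
            while rng.random() < s[state]:        # continue the walk
                j = rng.choice(n, p=np.abs(H[state]) / s[state])
                w *= np.sign(H[state, j])
                state = j
                total += w * b[state]
        x[i] = total / n_walks
    return x

# diagonally dominant test system: A = I - H with small off-diagonal H
n = 8
H = rng.uniform(-0.08, 0.08, (n, n))
np.fill_diagonal(H, 0.0)
b = rng.uniform(-1.0, 1.0, n)
x_mc = neumann_ulam_solve(H, b)
x_ref = np.linalg.solve(np.eye(n) - H, b)
print("max abs error vs. direct solve:", np.abs(x_mc - x_ref).max())
```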

  17. Patient-Specific 3D Pretreatment and Potential 3D Online Dose Verification of Monte Carlo-Calculated IMRT Prostate Treatment Plans

    SciTech Connect

    Boggula, Ramesh; Jahnke, Lennart; Wertz, Hansjoerg; Lohr, Frank; Wenz, Frederik

    2011-11-15

    Purpose: Fast and reliable comprehensive quality assurance tools are required to validate the safety and accuracy of complex intensity-modulated radiotherapy (IMRT) plans for prostate treatment. In this study, we evaluated the performance of the COMPASS system for both off-line and potential online procedures for the verification of IMRT treatment plans. Methods and Materials: COMPASS has a dedicated beam model and dose engine; it can reconstruct three-dimensional dose distributions on the patient anatomy based on measured fluences using either the MatriXX two-dimensional (2D) array (offline) or a 2D transmission detector (T2D) (online). For benchmarking the COMPASS dose calculation, various dose-volume indices were compared against Monte Carlo-calculated dose distributions for five prostate patient treatment plans. Gamma index evaluation and absolute point dose measurements were also performed in an inhomogeneous pelvis phantom using extended dose range films and an ion chamber for five additional treatment plans. Results: MatriXX-based dose reconstruction showed excellent agreement with the ion chamber (<0.5%, except for one treatment plan, which showed 1.5%), film (≈100% of pixels passing the 3%/3 mm gamma criterion) and mean dose-volume indices (<2%). The T2D-based dose reconstruction showed good agreement as well with the ion chamber (<2%), film (≈99% of pixels passing the 3%/3 mm gamma criterion), and mean dose-volume indices (<5.5%). Conclusion: The COMPASS system qualifies for routine prostate IMRT pretreatment verification with the MatriXX detector and has the potential for online verification of treatment delivery using the T2D.

  18. Evaluation of a 3D point spread function (PSF) model derived from Monte Carlo simulation for a small animal PET scanner

    NASA Astrophysics Data System (ADS)

    Yao, Rutao; Ramachandra, Ranjith M.; Panse, Ashish; Balla, Deepika; Yan, Jianhua; Carson, Richard E.

    2010-04-01

    We previously designed a component-based 3-D PSF model to obtain a compact yet accurate system matrix for a dedicated human brain PET scanner. In this work, we adapted the model to a small animal PET scanner. Based on the model, we derived the system matrix for a back-to-back gamma source in air and for fluorine-18 and iodine-124 sources in water by Monte Carlo simulation. The characteristics of the PSF model were evaluated, and the performance of the newly derived system matrix was assessed by comparing its reconstructed images with those of the established reconstruction program provided with the animal PET scanner. The new system matrix showed strong PSF dependency on the line-of-response (LOR) incident angle and LOR depth. This confirmed the validity of the two components selected for the model. The effect of positron range on the system matrix was observed by comparing the PSFs of different isotopes. A simulated and an experimental hot-rod phantom study showed that reconstruction with the proposed system matrix achieved better resolution recovery than the algorithm provided by the manufacturer. Quantitative evaluation also showed better convergence to the expected contrast value at a similar noise level. In conclusion, it has been shown that the system matrix derivation method is applicable to the animal PET system studied, suggesting that the method may be used for other PET systems and different isotope applications.

  19. 3D visualisation of the stochastic patterns of the radial dose in nano-volumes by a Monte Carlo simulation of HZE ion track structure.

    PubMed

    Plante, Ianik; Ponomarev, Artem; Cucinotta, Francis A

    2011-02-01

    The description of energy deposition by high charge and energy (HZE) nuclei is of importance for space radiation risk assessment and due to their use in hadrontherapy. Such ions deposit a large fraction of their energy within the so-called core of the track and a smaller proportion in the penumbra (or track periphery). We study the stochastic patterns of the radial dependence of energy deposition using the Monte Carlo track structure codes RITRACKS and RETRACKS, which were used to simulate HZE tracks and calculate energy deposition in voxels of 40 nm. The simulation of a ⁵⁶Fe²⁶⁺ ion of 1 GeV u⁻¹ revealed zones of high-energy deposition which may be found as far as a few millimetres away from the track core in some simulations. The calculation also showed that ∼43% of the energy was deposited in the penumbra. These 3D stochastic simulations combined with a visualisation interface are a powerful tool for biophysicists which may be used to study radiation-induced biological effects such as double strand breaks and oxidative damage and the subsequent cellular and tissue damage processing and signalling.

  20. Analysis of polytype stability in PVT grown silicon carbide single crystal using competitive lattice model Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Guo, Hui-Jun; Huang, Wei; Liu, Xi; Gao, Pan; Zhuo, Shi-Yi; Xin, Jun; Yan, Cheng-Feng; Zheng, Yan-Qing; Yang, Jian-Hua; Shi, Er-Wei

    2014-09-01

    Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. The kinetic aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool to study surface kinetics in crystal growth. However, the present lattice models for kinetic Monte Carlo simulations cannot solve the problem of the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competitive growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment. Surface steps on seeds and large diffusion/deposition ratios have positive effects on 4H polytype stability. The 3D polytype distribution in a SiC ingot grown by the physical vapor transport method showed that the facet preserved the 4H polytype even when the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step growth mode is an important factor in maintaining a stable single 4H polytype during SiC growth.

  1. Derivation of landslide-triggering thresholds by Monte Carlo simulation and ROC analysis

    NASA Astrophysics Data System (ADS)

    Peres, David Johnny; Cancelliere, Antonino

    2015-04-01

    Rainfall thresholds for landslide triggering are useful in early warning systems to be implemented in prone areas. Direct statistical analysis of historical records of rainfall and landslide data presents different shortcomings, typically due to incompleteness of landslide historical archives, imprecise knowledge of the triggering instants, unavailability of a rain gauge located near the landslides, etc. In this work, a Monte Carlo approach to derive and evaluate landslide-triggering thresholds is presented. Such an approach helps overcome some of the above-mentioned shortcomings of direct empirical analysis of observed data. The proposed Monte Carlo framework consists of the combination of a stochastic rainfall model with a hydrological and slope-stability model. Specifically, 1000-year-long hourly synthetic rainfall and related slope stability factor of safety data are generated by coupling the Neyman-Scott rectangular pulses model with the TRIGRS unsaturated model (Baum et al., 2008) and a linear-reservoir water table recession model. Triggering and non-triggering rainfall events are then distinguished and analyzed to derive stochastic-input physically based thresholds that optimize the trade-off between correct and wrong predictions. For this purpose, receiver operating characteristic (ROC) indices are used. An application of the method to the highly landslide-prone area of the Peloritani mountains in north-eastern Sicily (Italy) is carried out. A threshold for the area is derived and successfully validated by comparison with thresholds proposed by other researchers. Moreover, the uncertainty in threshold derivation due to variability of rainfall intensity within events and to antecedent rainfall is investigated. Results indicate that the variability of intensity during rainfall events significantly influences the rainfall intensity and duration associated with landslide triggering. A representation of rainfall as constant-intensity hyetographs globally leads to
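
    The threshold-derivation step can be illustrated compactly: given labelled triggering and non-triggering events, scan a family of power-law intensity-duration thresholds and keep the one that maximizes an ROC index such as the true skill statistic. The event data below are synthetic placeholders, not output of the Neyman-Scott/TRIGRS chain:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for simulated events: duration D (h), mean intensity I (mm/h)
n = 5_000
D = rng.uniform(1.0, 72.0, n)
I = rng.lognormal(np.log(20.0 / D**0.6), 0.4)
# noisy hidden triggering rule standing in for the slope-stability model
triggered = I > 25.0 * D**-0.65 * rng.lognormal(0.0, 0.3, n)

def roc_point(a, b):
    """True/false positive rates of the power-law threshold I = a * D**b."""
    predicted = I > a * D**b
    tpr = (predicted & triggered).sum() / triggered.sum()
    fpr = (predicted & ~triggered).sum() / (~triggered).sum()
    return tpr, fpr

# grid search maximizing the true skill statistic TSS = TPR - FPR
grid = [(a, b) for a in np.linspace(5.0, 60.0, 56)
               for b in np.linspace(-1.2, -0.2, 21)]
a_best, b_best = max(grid, key=lambda ab: np.subtract(*roc_point(*ab)))
tpr, fpr = roc_point(a_best, b_best)
print(f"threshold I = {a_best:.1f} * D^({b_best:.2f}): TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```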

  2. Empirical Markov Chain Monte Carlo Bayesian analysis of fMRI data.

    PubMed

    de Pasquale, F; Del Gratta, C; Romani, G L

    2008-08-01

    In this work an Empirical Markov Chain Monte Carlo Bayesian approach to analyse fMRI data is proposed. The Bayesian framework is appealing since complex models can be adopted in the analysis both for the image and noise model. Here, the noise autocorrelation is taken into account by adopting an AutoRegressive model of order one and a versatile non-linear model is assumed for the task-related activation. Model parameters include the noise variance and autocorrelation, activation amplitudes and the hemodynamic response function parameters. These are estimated at each voxel from samples of the Posterior Distribution. Prior information is included by means of a 4D spatio-temporal model for the interaction between neighbouring voxels in space and time. The results show that this model can provide smooth estimates from low SNR data while important spatial structures in the data can be preserved. A simulation study is presented in which the accuracy and bias of the estimates are addressed. Furthermore, some results on convergence diagnostic of the adopted algorithm are presented. To validate the proposed approach a comparison of the results with those from a standard GLM analysis, spatial filtering techniques and a Variational Bayes approach is provided. This comparison shows that our approach outperforms the classical analysis and is consistent with other Bayesian techniques. This is investigated further by means of the Bayes Factors and the analysis of the residuals. The proposed approach applied to Blocked Design and Event Related datasets produced reliable maps of activation.

  3. Dynamic mineral clouds on HD 189733b. II. Monte Carlo radiative transfer for 3D cloudy exoplanet atmospheres: combining scattering and emission spectra

    NASA Astrophysics Data System (ADS)

    Lee, G. K. H.; Wood, K.; Dobbs-Dixon, I.; Rice, A.; Helling, Ch.

    2017-05-01

    Context. As the 3D spatial properties of exoplanet atmospheres are being observed in increasing detail by current and new generations of telescopes, the modelling of the 3D scattering effects of cloud forming atmospheres with inhomogeneous opacity structures becomes increasingly important to interpret observational data. Aims: We model the scattering and emission properties of a simulated cloud forming, inhomogeneous opacity, hot Jupiter atmosphere of HD 189733b. We compare our results to available Hubble Space Telescope (HST) and Spitzer data and quantify the effects of 3D multiple scattering on observable properties of the atmosphere. We discuss potential observational properties of HD 189733b for the upcoming Transiting Exoplanet Survey Satellite (TESS) and CHaracterising ExOPlanet Satellite (CHEOPS) missions. Methods: We developed a Monte Carlo radiative transfer code and applied it to post-process output of our 3D radiative-hydrodynamic, cloud formation simulation of HD 189733b. We employed three variance reduction techniques, i.e. next event estimation, survival biasing, and composite emission biasing, to improve signal to noise of the output. For cloud particle scattering events, we constructed a log-normal area distribution from the 3D cloud formation radiative-hydrodynamic results, which is stochastically sampled in order to model the Rayleigh and Mie scattering behaviour of a mixture of grain sizes. Results: Stellar photon packets incident on the eastern dayside hemisphere show predominantly Rayleigh, single-scattering behaviour, while multiple scattering occurs on the western hemisphere. Combined scattered and thermal emitted light predictions are consistent with published HST and Spitzer secondary transit observations. Our model predictions are also consistent with geometric albedo constraints from optical wavelength ground-based polarimetry and HST B band measurements. We predict an apparent geometric albedo for HD 189733b of 0.205 and 0.229, in the

  4. Development of Monte Carlo code for coincidence prompt gamma-ray neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Han, Xiaogang

    Prompt Gamma-Ray Neutron Activation Analysis (PGNAA) offers a non-destructive, relatively rapid on-line method for determination of the elemental composition of bulk and other samples. However, PGNAA has an inherently large background, primarily due to the presence of the neutron excitation source; it also includes neutron activation of the detector and the prompt gamma rays from the structural materials of PGNAA devices. These large backgrounds limit the sensitivity and accuracy of PGNAA. Since most of the prompt gamma rays from the same element are emitted in coincidence, a possible approach for further improvement is to change the traditional PGNAA measurement technique and introduce the gamma-gamma coincidence technique. It is well known that coincidence techniques can eliminate most of the interference backgrounds and improve the signal-to-noise ratio. A new Monte Carlo code, CEARCPG, has been developed at CEAR to simulate gamma-gamma coincidence spectra in PGNAA experiments. Compared to the existing Monte Carlo codes CEARPGA I and CEARPGA II, a new algorithm for sampling the prompt gamma rays produced from neutron capture and neutron inelastic scattering reactions is developed in this work. All the prompt gamma rays are taken into account by this new algorithm. Before this work, the commonly used method was to interpolate the prompt gamma rays from a pre-calculated gamma-ray table. This technique works fine for the single spectrum; however, it limits the capability to simulate the coincidence spectrum. The new algorithm samples the prompt gamma rays from the nucleus excitation scheme. The primary nuclear data library used to sample the prompt gamma rays comes from the ENSDF library. Three cases are simulated and the simulated results are benchmarked with experiments. The first case is the prototype for the ETI PGNAA application. This case is designed to check the capability of CEARCPG for single-spectrum simulation. The second

  5. Statistical modification analysis of helical planetary gears based on response surface method and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Guo, Fan

    2015-11-01

    Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effects of tooth modification amount variations on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
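
    The response-surface-plus-Monte-Carlo methodology can be sketched generically: fit a quadratic surrogate to a handful of runs of the expensive dynamic model, then push normally distributed modification errors through the surrogate. The model function and scatter below are invented for illustration; the positive skew of the output echoes the finding that normally distributed inputs need not produce normally distributed responses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def dte_fluctuation(x1, x2):
    """Stand-in for the expensive dynamic model: DTE fluctuation (um)
    as a function of two tooth-modification amounts (um)."""
    return 2.0 + 0.03 * (x1 - 12.0) ** 2 + 0.02 * (x2 - 8.0) ** 2 + 0.01 * x1 * x2

# 1) fit a quadratic response surface on a small design grid
g1, g2 = np.meshgrid(np.linspace(5, 20, 8), np.linspace(3, 15, 8))
x1, x2 = g1.ravel(), g2.ravel()
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, dte_fluctuation(x1, x2), rcond=None)

# 2) Monte Carlo: normally distributed modification amounts through the surface
n = 200_000
s1 = rng.normal(12.0, 1.5, n)     # assumed manufacturing scatter
s2 = rng.normal(8.0, 1.0, n)
S = np.column_stack([np.ones(n), s1, s2, s1 * s2, s1**2, s2**2])
y = S @ beta

# skew != 0 shows the output is non-normal despite normal inputs
print(f"mean={y.mean():.3f}, std={y.std():.3f}, skew={stats.skew(y):.3f}")
```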

  6. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    PubMed

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio, described jointly by probability and possibility, is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of a gravity dam. The stability of a gravity dam is viewed as a hybrid event, considering both the fuzziness and randomness of the failure criterion, design parameters, and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of the influence factors of gravity dam stability. Combining this with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to the risk calculation of both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for the analysis of stability failure risk for gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty and is suitable as an index value.

  7. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    PubMed

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group S_N particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group S_N particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between the Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.

  8. Comparison of 2D temperature maps recorded during laser-induced thermal tissue treatment with corresponding temperature distributions calculated from 3D Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Busse, Harald; Bublat, Martin; Ratering, Ralf; Rassek, Margarethe; Schwarzmaier, Hans-Joachim; Kahn, Thomas

    2000-05-01

    Minimally invasive techniques often require special biomedical monitoring schemes. In the case of laser coagulation of tumors, accurate temperature mapping is desirable for therapy control. While magnetic resonance (MR)-based thermometry can easily yield qualitative results, it is still difficult to calibrate this technique with independent temperature probes for the entire 2D field of view. Calculated temperature maps derived from Monte-Carlo simulations (MCS), on the other hand, are suitable for therapy planning and dosimetry but typically cannot account for the exact individual tissue parameters and physiological changes upon heating. In this work, online thermometry was combined with MCS techniques to explore the feasibility and potential of such a bimodal approach for surgical assist systems. For the first time, the results of a 3D simulation were evaluated with MR techniques. An MR thermometry system was used to monitor the temperature evolution during laser-induced thermal treatment of bovine liver using a commercially available water-cooled applicator. A systematic comparison between MR-derived 2D temperature maps in different orientations and corresponding snapshots of a 3D MCS of the laser-induced processes is presented. The MCS is capable of resolving the complex temperature patterns observed in the MR-derived images and yields good agreement with respect to absolute temperatures and damage volume dimensions. The observed quantitative agreement is within about 10 degrees C and on the order of 10 percent, respectively. The integrated simulation-and-monitoring approach has the potential to improve surgical assistance during thermal interventions.

  9. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    SciTech Connect

    Hall, Howard L

    2012-01-01

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.

  10. Propensity score applied to survival data analysis through proportional hazards models: a Monte Carlo study.

    PubMed

    Gayat, Etienne; Resche-Rigon, Matthieu; Mary, Jean-Yves; Porcher, Raphaël

    2012-01-01

    Propensity score methods are increasingly used in medical literature to estimate treatment effect using data from observational studies. Despite many papers on propensity score analysis, few have focused on the analysis of survival data. Even within the framework of the popular proportional hazard model, the choice among marginal, stratified or adjusted models remains unclear. A Monte Carlo simulation study was used to compare the performance of several survival models to estimate both marginal and conditional treatment effects. The impact of accounting or not for pairing when analysing propensity-score-matched survival data was assessed. In addition, the influence of unmeasured confounders was investigated. After matching on the propensity score, both marginal and conditional treatment effects could be reliably estimated. Ignoring the paired structure of the data led to an increased test size due to an overestimated variance of the treatment effect. Among the various survival models considered, stratified models systematically showed poorer performance. Omitting a covariate in the propensity score model led to a biased estimation of treatment effect, but replacement of the unmeasured confounder by a correlated one allowed a marked decrease in this bias. Our study showed that propensity scores applied to survival data can lead to unbiased estimation of both marginal and conditional treatment effect, when marginal and adjusted Cox models are used. In all cases, it is necessary to account for pairing when analysing propensity-score-matched data, using a robust estimator of the variance.

  11. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods using samples with a very small number of failures and extensive censoring are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, as are the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
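
    The core simulation loop of such a study, Weibull lifetimes with random uniform censoring followed by maximum-likelihood fitting of the censored sample, can be sketched as follows (all parameter values are illustrative, not SSME data):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def censored_weibull_mle(t, observed):
    """MLE of Weibull (shape k, scale lam) with right-censored data.

    log L = sum_failures [log k + (k-1) log t - k log lam - (t/lam)^k]
            - sum_censored (t/lam)^k
    """
    def nll(theta):
        k, lam = np.exp(theta)          # optimize in log space for positivity
        z = (t / lam) ** k
        ll = np.where(observed,
                      np.log(k) + (k - 1) * np.log(t) - k * np.log(lam) - z,
                      -z)
        return -ll.sum()
    res = minimize(nll, x0=np.log([1.0, np.median(t)]), method="Nelder-Mead")
    return np.exp(res.x)

# one Monte Carlo replication: few failures, heavy random censoring
k_true, lam_true, n = 1.5, 1000.0, 25
life = lam_true * rng.weibull(k_true, n)
cens = rng.uniform(0.0, 1500.0, n)          # random uniform censoring times
t = np.minimum(life, cens)
observed = life <= cens
k_hat, lam_hat = censored_weibull_mle(t, observed)
print(f"failures: {observed.sum()}/{n}, k_hat={k_hat:.2f}, lam_hat={lam_hat:.0f}")
```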

  12. Ligand-receptor binding kinetics in surface plasmon resonance cells: a Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Carroll, Jacob; Raum, Matthew; Forsten-Williams, Kimberly; Täuber, Uwe C.

    2016-12-01

    Surface plasmon resonance (SPR) chips are widely used to measure association and dissociation rates for the binding kinetics between two species of chemicals, e.g., cell receptors and ligands. It is commonly assumed that ligands are spatially well mixed in the SPR region, and hence a mean-field rate equation description is appropriate. This approximation however ignores the spatial fluctuations as well as temporal correlations induced by multiple local rebinding events, which become prominent for slow diffusion rates and high binding affinities. We report detailed Monte Carlo simulations of ligand binding kinetics in an SPR cell subject to laminar flow. We extract the binding and dissociation rates by means of the techniques frequently employed in experimental analysis that are motivated by the mean-field approximation. We find major discrepancies in a wide parameter regime between the thus extracted rates and the known input simulation values. These results underscore the crucial quantitative importance of spatio-temporal correlations in binary reaction kinetics in SPR cell geometries, and demonstrate the failure of a mean-field analysis of SPR cells in the regime of high Damköhler number Da > 0.1, where the spatio-temporal correlations due to diffusive transport and ligand-receptor rebinding events dominate the dynamics of SPR systems.

  13. A Monte Carlo error analysis program for near-Mars, finite-burn, orbital transfer maneuvers

    NASA Technical Reports Server (NTRS)

    Green, R. N.; Hoffman, L. H.; Young, G. R.

    1972-01-01

    A computer program was developed which performs an error analysis of a minimum-fuel, finite-thrust, transfer maneuver between two Keplerian orbits in the vicinity of Mars. The method of analysis is the Monte Carlo approach where each off-nominal initial orbit is targeted to the desired final orbit. The errors in the initial orbit are described by two covariance matrices of state deviations and tracking errors. The function of the program is to relate these errors to the resulting errors in the final orbit. The equations of motion for the transfer trajectory are those of a spacecraft maneuvering with constant thrust and mass-flow rate in the neighborhood of a single body. The thrust vector is allowed to rotate in a plane with a constant pitch rate. The transfer trajectory is characterized by six control parameters and the final orbit is defined, or partially defined, by the desired target parameters. The program is applicable to the deboost maneuver (hyperbola to ellipse), orbital trim maneuver (ellipse to ellipse), fly-by maneuver (hyperbola to hyperbola), escape maneuvers (ellipse to hyperbola), and deorbit maneuver.

  14. Model Reduction Using Principal Component Analysis and Markov Chain Monte Carlo for Hydrogeological Inverse Problems

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Rathore, S.; Chen, J.; Hoversten, G. M.; Luo, J.

    2016-12-01

    Inverse problems in hydrogeological applications often require estimation of a large number of unknown parameters, ranging from hundreds to millions, and such problems are computationally prohibitive. To deal efficiently with such high-dimensional problems, model reduction techniques are usually introduced to improve the computational performance of traditional inversion methods. In this study, we explored the feasibility and effectiveness of Principal Component Analysis (PCA) and Markov Chain Monte Carlo (MCMC) for model reduction using error-contaminated synthetic data. A 1-D groundwater pumping test is implemented on a randomly generated hydraulic conductivity field; the computed head distribution, with random errors added, is then treated as the available data for inverting the original hydraulic conductivity field. We run the full-dimensional inverse method a few times to generate a training set for constructing an empirical covariance matrix. Principal Component Analysis is implemented on the empirical covariance matrix to reduce the dimensionality of the inverse problem. MCMC is implemented to draw samples from the reduced variable space, providing a best estimate and quantifying uncertainty. The synthetic data study demonstrates that the PCA-MCMC method can successfully provide a reasonable estimate of hydraulic conductivity using biased data and effectively reduce computational time and storage usage. It is also noted that a tradeoff exists between model simplicity and uncertainty quantification - a highly reduced model yields narrower confidence intervals, sometimes implying insufficient uncertainty quantification. Thus the extent of model reduction should be wisely manipulated in light of specific problem requirements.
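
    A stripped-down version of the PCA-MCMC workflow, with a linear forward model standing in for the pumping test and invented dimensions throughout, might look like this:

```python
import numpy as np

rng = np.random.default_rng(9)

# --- training ensemble of parameter fields (stand-in for prior model runs) ---
n_cells, n_train = 50, 500
idx = np.arange(n_cells)
prior_cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 10.0)
L = np.linalg.cholesky(prior_cov + 1e-10 * np.eye(n_cells))
train = (L @ rng.standard_normal((n_cells, n_train))).T

# --- PCA reduction: keep the leading modes of the empirical covariance ---
mean = train.mean(axis=0)
_, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
m = 8                                    # retained principal components
pc_var = s[:m] ** 2 / (n_train - 1)      # prior variance of each coefficient

# --- synthetic data: linear forward model standing in for the pumping test ---
G = rng.uniform(0.0, 1.0, (10, n_cells)) / n_cells
truth = mean + Vt[:m].T @ (rng.standard_normal(m) * np.sqrt(pc_var))
sigma = 0.01
data = G @ truth + rng.normal(0.0, sigma, 10)

def log_post(c):
    field = mean + Vt[:m].T @ c
    misfit = data - G @ field
    return -0.5 * (misfit @ misfit / sigma**2 + (c**2 / pc_var).sum())

# --- random-walk Metropolis in the reduced coefficient space ---
c, lp, chain = np.zeros(m), None, []
lp = log_post(c)
for _ in range(20_000):
    prop = c + 0.05 * np.sqrt(pc_var) * rng.standard_normal(m)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        c, lp = prop, lp_prop
    chain.append(mean + Vt[:m].T @ c)
post = np.array(chain[5_000:])           # discard burn-in
print("posterior mean field error:", np.abs(post.mean(axis=0) - truth).max())
```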

  15. Analysis of nanoparticle agglomeration in aqueous suspensions via constant-number Monte Carlo simulation.

    PubMed

    Liu, Haoyang Haven; Surawanvijit, Sirikarn; Rallo, Robert; Orkoulas, Gerassimos; Cohen, Yoram

    2011-11-01

    A constant-number direct simulation Monte Carlo (DSMC) model was developed for the analysis of nanoparticle (NP) agglomeration in aqueous suspensions. The modeling approach, based on the "particles in a box" simulation method, considered both particle agglomeration and gravitational settling. Particle-particle agglomeration probability was determined based on the classical Derjaguin-Landau-Verwey-Overbeek (DLVO) theory and considerations of the collision frequency as impacted by Brownian motion. Model predictions were in reasonable agreement with respect to the particle size distribution and average agglomerate size when compared with dynamic light scattering (DLS) measurements for aqueous TiO(2), CeO(2), and C(60) nanoparticle suspensions over a wide range of pH (3-10) and ionic strength (0.01-156 mM). Simulations also demonstrated, in quantitative agreement with DLS measurements, that nanoparticle agglomerate size increased both with ionic strength and as the solution pH approached the isoelectric point (IEP). The present work suggests that the DSMC modeling approach, along with future use of an extended DLVO theory, has the potential for becoming a practical environmental analysis tool for predicting the agglomeration behavior of aqueous nanoparticle suspensions.

  16. Cluster Monte Carlo and numerical mean field analysis for the water liquid-liquid phase transition

    NASA Astrophysics Data System (ADS)

    Mazza, Marco G.; Stokely, Kevin; Strekalova, Elena G.; Stanley, H. Eugene; Franzese, Giancarlo

    2009-04-01

    Using Wolff's cluster Monte Carlo simulations and numerical minimization within a mean field approach, we study the low temperature phase diagram of water, adopting a cell model that reproduces the known properties of water in its fluid phases. Both methods allow us to study the thermodynamic behavior of water at temperatures where other numerical approaches - both Monte Carlo and molecular dynamics - are seriously hampered by the large increase of the correlation times. The cluster algorithm also allows us to emphasize that the liquid-liquid phase transition corresponds to the percolation transition of tetrahedrally ordered water molecules.
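
    For reference, the Wolff cluster move itself is compact. The sketch below implements it for the standard 2D Ising model rather than the authors' water cell model; near criticality it flips large clusters where single-spin updates would decorrelate slowly:

```python
import numpy as np

rng = np.random.default_rng(2)

def wolff_step(spins, beta):
    """One Wolff cluster update for the 2D Ising model (J = 1, periodic)."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)          # bond activation probability
    seed = tuple(rng.integers(0, L, 2))
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i, j = stack.pop()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            nb = (ni % L, nj % L)
            if nb not in cluster and spins[nb] == s0 and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for site in cluster:                       # flip the whole cluster at once
        spins[site] *= -1
    return len(cluster)

L, beta = 32, 0.4407                           # near the critical temperature
spins = rng.choice([-1, 1], (L, L))
for _ in range(2_000):
    wolff_step(spins, beta)
print("magnetization per spin:", abs(spins.sum()) / L**2)
```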

  17. Markov chain Monte Carlo analysis to constrain dark matter properties with directional detection

    SciTech Connect

    Billard, J.; Mayet, F.; Santos, D.

    2011-04-01

    Directional detection is a promising dark matter search strategy. Indeed, weakly interacting massive particle (WIMP)-induced recoils would present a direction dependence toward the Cygnus constellation, while background-induced recoils exhibit an isotropic distribution in the Galactic rest frame. Taking advantage of these characteristic features, and even in the presence of a sizeable background, it has recently been shown that data from forthcoming directional detectors could lead either to a competitive exclusion or to a conclusive discovery, depending on the value of the WIMP-nucleon cross section. However, it is possible to further exploit these upcoming data by using the strong dependence of the WIMP signal on the WIMP mass and the local WIMP velocity distribution. Using a Markov chain Monte Carlo analysis of recoil events, we show for the first time the possibility of constraining the unknown WIMP parameters, both from particle physics (mass and cross section) and the Galactic halo (velocity dispersion along the three axes), leading to an identification of non-baryonic dark matter.

  18. Monte Carlo analysis of dissociation and recombination behind strong shock waves in nitrogen

    NASA Technical Reports Server (NTRS)

    Boyd, I. D.

    1991-01-01

    Computations are presented for the relaxation zone behind strong, 1D shock waves in nitrogen. The analysis is performed with the direct simulation Monte Carlo (DSMC) method. The DSMC code is vectorized for efficient use on a supercomputer. The code simulates translational, rotational, and vibrational energy exchange and dissociative and recombinative chemical reactions. A model is proposed for the treatment of three-body recombination collisions in the DSMC technique, which usually simulates binary collision events. The model improves on previous models because it can be employed with a large range of chemical-rate data, does not introduce into the flow field troublesome pairs of atoms which may recombine upon further collision (pseudoparticles), and is compatible with the vectorized code. The computational results are compared with existing experimental data. It is shown that the derivation of chemical-rate coefficients must account for the degree of vibrational nonequilibrium in the flow. A nonequilibrium-chemistry model is employed together with equilibrium-rate data to compute the flow in several different nitrogen shock waves.

  19. A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data

    SciTech Connect

    Garner, James R; Whitaker, J Michael

    2013-01-01

    As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations. Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.

  20. Personalized Analysis by Validation of Monte Carlo for Application of Pathways in Cardioembolic Stroke.

    PubMed

    Xing, Zhangmin; Luan, Bin; Zhao, Ruiying; Li, Zhanbiao; Sun, Guojian

    2017-02-24

    BACKGROUND Cardioembolic stroke (CES), which causes 20% of all ischemic strokes, is associated with high mortality. Previous studies suggest that pathways play a critical role in the identification and pathogenesis of diseases. We aimed to develop an integrated approach that is able to construct individual networks of pathway cross-talk to quantify differences between patients with CES and controls. MATERIAL AND METHODS One biological data set, E-GEOD-58294, was used, including 23 normal controls and 59 CES samples. We used the individualized pathway aberrance score (iPAS) to assess pathway statistics of 589 Ingenuity Pathway Analysis (IPA) pathways. Random Forest (RF) classification was implemented to calculate the AUC of every network. These procedures were tested by Monte Carlo cross-validation for 50 bootstraps. RESULTS A total of 28 networks with AUC >0.9 were found between CES and controls. Among them, 3 networks with AUC=1.0 had the best classification performance in 50 bootstraps. The 3 pathway networks were able to significantly distinguish CES from controls and may serve as biomarkers in the regulation and development of CES. CONCLUSIONS This novel approach identified 3 networks able to accurately classify CES and normal samples in individuals. This integrated application needs to be validated in other diseases.

  1. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    SciTech Connect

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.

  2. Parametric analysis of intercellular ice propagation during cryosurgery, simulated using monte carlo techniques.

    PubMed

    Stott, Shannon L; Irimia, Daniel; Karlsson, Jens O M

    2004-04-01

    A microscale theoretical model of intracellular ice formation (IIF) in a heterogeneous tissue volume comprising a tumor mass and surrounding normal tissue is presented. Intracellular ice was assumed to form either by intercellular ice propagation or by processes that are not affected by the presence of ice in neighboring cells (e.g., nucleation or mechanical rupture). The effects of cryosurgery on a 2D tissue consisting of 10⁴ cells were simulated using a lattice Monte Carlo technique. A parametric analysis was performed to assess the specificity of IIF-related cell damage and to identify criteria for minimization of collateral damage to the healthy tissue peripheral to the tumor. Among the parameters investigated were the rates of interaction-independent IIF and intercellular ice propagation in the tumor and in the normal tissue, as well as the characteristic length scale of thermal gradients in the vicinity of the cryosurgical probe. Model predictions suggest gap junctional intercellular communication as a potential new target for adjuvant therapies complementing the cryosurgical procedure.

  3. Markov chain Monte Carlo linkage analysis: effect of bin width on the probability of linkage.

    PubMed

    Slager, S L; Juo, S H; Durner, M; Hodge, S E

    2001-01-01

    We analyzed part of the Genetic Analysis Workshop (GAW) 12 simulated data using Monte Carlo Markov chain (MCMC) methods that are implemented in the computer program Loki. The MCMC method reports the "probability of linkage" (PL) across the chromosomal regions of interest. The point of maximum PL can then be taken as a "location estimate" for the location of the quantitative trait locus (QTL). However, Loki does not provide a formal statistical test of linkage. In this paper, we explore how the bin width used in the calculations affects the max PL and the location estimate. We analyzed age at onset (AO) and quantitative trait number 5, Q5, from 26 replicates of the general simulated data in one region where we knew a major gene, MG5, is located. For each trait, we found the max PL and the corresponding location estimate, using four different bin widths. We found that bin width, as expected, does affect the max PL and the location estimate, and we recommend that users of Loki explore how their results vary with different bin widths.

  4. Error analysis and tolerance allocation for confocal scanning microscopy using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Yoo, Hongki; Kang, Dong-Kyun; Lee, SeungWoo; Lee, Junhee; Gweon, Dae-Gab

    2004-07-01

    Alignment errors can cause serious loss of performance in a precision machine system. In this paper, we propose a method of allocating the alignment tolerances of the components and apply it to confocal scanning microscopy (CSM) to obtain the optimal tolerances. CSM uses a confocal aperture, which blocks out-of-focus information. Thus, it provides images with superior resolution and has the unique property of optical sectioning. Owing to these properties, it has recently been widely used for measurement in the biological and medical sciences, materials science, and the semiconductor industry. In general, tight tolerances are required to maintain the performance of a system, but preserving tight tolerances incurs high manufacturing and assembly costs. The purpose of allocating optimal tolerances is to minimize cost while maintaining system performance. In the optimization problem, we set the performance requirements as constraints and maximized the tolerances. The Monte Carlo method, a statistical simulation method, is used in the tolerance analysis. Alignment tolerances of the optical components of the confocal scanning microscope are optimized to minimize cost and to maintain the observation performance of the microscope. This method can also be applied to other precision machine systems.
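
    The Monte Carlo tolerance analysis loop itself is simple: draw each alignment error from its tolerance band, evaluate the system performance, and count the fraction of virtual builds that meet the requirement. The sketch below uses an invented quadratic performance-penalty model and placeholder tolerance values, purely to show the structure of such an analysis.

```python
# Sketch of Monte Carlo tolerance analysis with an invented performance
# model: sample alignment errors, evaluate a penalty, estimate yield.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
decenter = rng.uniform(-0.05, 0.05, n)   # lens decenter (mm), hypothetical
tilt = rng.uniform(-1.0, 1.0, n)         # mirror tilt (mrad), hypothetical
offset = rng.uniform(-5.0, 5.0, n)       # pinhole offset (um), hypothetical

# stand-in performance model: resolution penalty grows quadratically
penalty = 4.0 * decenter**2 + 0.3 * tilt**2 + 0.01 * offset**2
yield_frac = np.mean(penalty < 0.5)      # requirement: penalty below 0.5
print(f"estimated fraction of builds meeting spec: {yield_frac:.3f}")
```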

  5. The use of Monte Carlo analysis for exposure assessment of an estuarine food web

    SciTech Connect

    Iannuzzi, T.J.; Shear, N.M.; Harrington, N.W.; Henning, M.H.

    1995-12-31

    Despite apparent agreement within the scientific community that probabilistic methods of analysis offer substantially more informative exposure predictions than those offered by the traditional point estimate approach, few risk assessments conducted or approved by state and federal regulatory agencies have used probabilistic methods. Among the likely deterrents to application of probabilistic methods to ecological risk assessment is the absence of "standard" data distributions that are considered applicable to most conditions for a given ecological receptor. Indeed, point estimates of ecological exposure factor values for a limited number of wildlife receptors have only recently been published. The Monte Carlo method of probabilistic modeling has received increasing support as a promising technique for characterizing uncertainty and variation in estimates of exposure to environmental contaminants. An evaluation of the literature on the behavior, physiology, and ecology of estuarine organisms was conducted in order to identify those variables that most strongly influence uptake of xenobiotic chemicals from sediments, water, and food sources. The ranges, central tendencies, and distributions of several key parameter values for polychaetes (Nereis sp.), mummichog (Fundulus heteroclitus), blue crab (Callinectes sapidus), and striped bass (Morone saxatilis) in east coast estuaries were identified. Understanding the variation in such factors, which include feeding rate, growth rate, feeding range, excretion rate, respiration rate, body weight, lipid content, food assimilation efficiency, and chemical assimilation efficiency, is critical to understanding the mechanisms that control the uptake of xenobiotic chemicals in aquatic organisms, and to the ability to estimate bioaccumulation from chemical exposures in the aquatic environment.
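
    A probabilistic exposure estimate of this kind replaces each point estimate with a distribution and propagates them jointly. The sketch below shows the pattern with a simple dietary-uptake expression; the distributions and numbers are placeholders, not the parameter values identified in the study.

```python
# Sketch: propagate exposure-factor distributions through a simple
# dietary-uptake expression. All distributions are placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
feeding_rate = rng.lognormal(np.log(0.02), 0.4, n)   # kg food/day
conc_food = rng.lognormal(np.log(0.5), 0.6, n)       # mg chemical/kg food
assim_eff = rng.beta(8, 2, n)                        # assimilation fraction
body_weight = rng.normal(0.25, 0.05, n).clip(0.05)   # kg

dose = feeding_rate * conc_food * assim_eff / body_weight  # mg/kg/day
print("median dose:", np.median(dose),
      "| 95th percentile:", np.percentile(dose, 95))
```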

  6. Monte Carlo analysis of the enhanced transcranial penetration using distributed near-infrared emitter array.

    PubMed

    Yue, Lan; Humayun, Mark S

    2015-08-01

    Transcranial near-infrared (NIR) treatment of neurological diseases has gained recent momentum. However, the low NIR dose available to the brain, which shows severe scattering and absorption of the photons by human tissues, largely limits its effectiveness in clinical use. Hereby, we propose to take advantage of the strong scattering effect of the cranial tissues by applying an evenly distributed multiunit emitter array on the scalp to enhance the cerebral photon density while maintaining each single emitter operating under the safe thermal limit. By employing the Monte Carlo method, we simulated the transcranial propagation of the array emitted light and demonstrated markedly enhanced intracranial photon flux as well as improved uniformity of the photon distribution. These enhancements are correlated with the source location, density, and wavelength of light. To the best of our knowledge, we present the first systematic analysis of the intracranial light field established by the scalp-applied multisource array and reveal a strategy for the optimization of the therapeutic effects of the NIR radiation.

  7. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.

  8. Improving Bayesian analysis for LISA Pathfinder using an efficient Markov Chain Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ferraioli, Luigi; Porter, Edward K.; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Gibert, Ferran; Hewitson, Martin; Hueller, Mauro; Karnesis, Nikolaos; Korsakova, Natalia; Nofrarias, Miquel; Plagnol, Eric; Vitale, Stefano

    2014-02-01

    We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of the LISA Pathfinder satellite. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher information matrix. The algorithm proposing jumps in the eigen-space of the Fisher information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we return parameter values that are all within ~1σ of the injected values. When we analyse the accuracy of our parameter estimation in terms of the effect it has on the force per unit mass noise, we find that the induced errors are three orders of magnitude less than the expected experimental uncertainty in the power spectral density.
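
    The contrast between coordinate-wise proposals and proposals along Fisher-matrix eigen-directions can be demonstrated on a toy target. In the sketch below the "Fisher" matrix is simply the inverse covariance of a correlated Gaussian, so proposals shaped by its eigen-decomposition take long steps along soft directions and short steps along stiff ones; the annealing stage of the paper's algorithm is omitted.

```python
# Toy comparison: coordinate-wise vs. Fisher-eigenbasis MH proposals on
# a strongly correlated 2D Gaussian target (annealing omitted).
import numpy as np

rng = np.random.default_rng(4)
cov = np.array([[1.0, 0.95], [0.95, 1.0]])
prec = np.linalg.inv(cov)
logp = lambda x: -0.5 * x @ prec @ x

evals, evecs = np.linalg.eigh(prec)        # Fisher-like eigen-directions
scales = 1.0 / np.sqrt(evals)              # wide steps along soft modes

def mh(jump, n=20_000):
    x, acc = np.zeros(2), 0
    for _ in range(n):
        prop = x + jump()
        if np.log(rng.random()) < logp(prop) - logp(x):
            x, acc = prop, acc + 1
    return acc / n

acc_coord = mh(lambda: 0.5 * rng.normal(size=2))
acc_eigen = mh(lambda: 0.5 * evecs @ (scales * rng.normal(size=2)))
print(f"acceptance: coordinate {acc_coord:.2f}, eigen-space {acc_eigen:.2f}")
```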

  9. Personalized Analysis by Validation of Monte Carlo for Application of Pathways in Cardioembolic Stroke

    PubMed Central

    Xing, Zhangmin; Luan, Bin; Zhao, Ruiying; Li, Zhanbiao; Sun, Guojian

    2017-01-01

    Background Cardioembolic stroke (CES), which causes 20% of all ischemic strokes, is associated with high mortality. Previous studies suggest that pathways play a critical role in the identification and pathogenesis of diseases. We aimed to develop an integrated approach that constructs individual networks of pathway cross-talk to quantify differences between patients with CES and controls. Material/Methods One biological data set, E-GEOD-58294, was used, including 23 normal controls and 59 CES samples. We used the individualized pathway aberrance score (iPAS) to assess pathway statistics for 589 Ingenuity Pathways Analysis (IPA) pathways. Random Forest (RF) classification was implemented to calculate the AUC of every network. These procedures were tested by Monte Carlo cross-validation over 50 bootstrap iterations. Results A total of 28 networks with AUC >0.9 were found between CES and controls. Among them, 3 networks with AUC=1.0 had the best classification performance across the 50 bootstraps. These 3 pathway networks significantly distinguished CES from controls and may serve as biomarkers of the regulation and development of CES. Conclusions This novel approach identified 3 networks able to accurately classify CES and normal samples in individuals. This integrated application needs to be validated in other diseases. PMID:28232661

  10. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    NASA Technical Reports Server (NTRS)

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D. Jr. (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
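
    A compact illustration of the two-stage strategy: screen parameters with central-difference linear sensitivity coefficients, then fit a quadratic response surface to Monte Carlo runs in the retained parameters. The growth model below is an invented stand-in, not the soybean model.

```python
# Two-stage sketch: central-difference sensitivity screening, then a
# quadratic response surface fitted to Monte Carlo runs. The "growth"
# model and all numbers are invented stand-ins.
import numpy as np

rng = np.random.default_rng(10)

def growth(p):                          # stand-in model: final biomass
    a, b, c, d = p
    return a * np.exp(b) / (1.0 + c) + 0.01 * d

p0 = np.array([1.0, 0.5, 0.2, 1.0])
h = 1e-4
sens = np.array([(growth(p0 + h * e) - growth(p0 - h * e)) / (2 * h)
                 for e in np.eye(4)])   # linear sensitivity coefficients
top2 = np.argsort(-np.abs(sens))[:2]    # screening: keep the two largest

X, y = [], []
for _ in range(500):                    # Monte Carlo sampling stage
    p = p0.copy()
    p[top2] += rng.normal(0.0, 0.1, 2)
    X.append(p[top2].copy())
    y.append(growth(p))
X, y = np.array(X), np.array(y)

# quadratic response surface in the two retained parameters
A = np.column_stack([np.ones(len(y)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("linear sensitivities:", sens.round(3))
print("response-surface coefficients:", coef.round(3))
```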

  11. Melanin and blood concentration in a human skin model studied by multiple regression analysis: assessment by Monte Carlo simulation.

    PubMed

    Shimada, M; Yamada, Y; Itoh, M; Yatagai, T

    2001-09-01

    Measurement of melanin and blood concentration in human skin is needed in the medical and the cosmetic fields because human skin colour is mainly determined by the colours of melanin and blood. It is difficult to measure these concentrations in human skin because skin has a multi-layered structure and scatters light strongly throughout the visible spectrum. The Monte Carlo simulation currently used for the analysis of skin colour requires long calculation times and knowledge of the specific optical properties of each skin layer. A regression analysis based on the modified Beer-Lambert law is presented as a method of measuring melanin and blood concentration in human skin in a shorter period of time and with fewer calculations. The accuracy of this method is assessed using Monte Carlo simulations.
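
    The regression idea can be shown in a few lines: under the modified Beer-Lambert law, measured absorbance is approximately a linear combination of chromophore extinction spectra weighted by concentration times pathlength, so the weights can be recovered by least squares. The spectra and noise level below are synthetic placeholders.

```python
# Sketch of modified Beer-Lambert recovery: absorbance ~ E @ (conc * path),
# solved by least squares. Spectra below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
wl = np.linspace(450, 700, 26)                 # wavelengths (nm)
eps_melanin = np.exp(-(wl - 450) / 120.0)      # made-up extinction spectrum
eps_blood = np.exp(-((wl - 560) / 40.0) ** 2)  # made-up extinction spectrum
E = np.column_stack([eps_melanin, eps_blood])

true_cl = np.array([0.8, 0.3])                 # concentration x pathlength
absorbance = E @ true_cl + 0.01 * rng.normal(size=wl.size)

est, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
print("recovered melanin and blood terms:", est.round(3))
```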

  12. Melanin and blood concentration in a human skin model studied by multiple regression analysis: assessment by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Shimada, M.; Yamada, Y.; Itoh, M.; Yatagai, T.

    2001-09-01

    Measurement of melanin and blood concentration in human skin is needed in the medical and the cosmetic fields because human skin colour is mainly determined by the colours of melanin and blood. It is difficult to measure these concentrations in human skin because skin has a multi-layered structure and scatters light strongly throughout the visible spectrum. The Monte Carlo simulation currently used for the analysis of skin colour requires long calculation times and knowledge of the specific optical properties of each skin layer. A regression analysis based on the modified Beer-Lambert law is presented as a method of measuring melanin and blood concentration in human skin in a shorter period of time and with fewer calculations. The accuracy of this method is assessed using Monte Carlo simulations.

  13. A Refinement of Risk Analysis Procedures for Trichloroethylene Through the Use of Monte Carlo Method in Conjunction with Physiologically Based Pharmacokinetic Modeling

    DTIC Science & Technology

    1993-09-01

    This study refines risk analysis procedures for trichloroethylene (TCE) using a physiologically based pharmacokinetic (PBPK) model in conjunction... promulgate, and better present, more realistic standards.... Keywords: risk analysis, physiologically based pharmacokinetics (PBPK), trichloroethylene, Monte Carlo method.

  14. Applying Monte-Carlo simulations to optimize an inelastic neutron scattering system for soil carbon analysis

    USDA-ARS?s Scientific Manuscript database

    Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...

  15. Computer program uses Monte Carlo techniques for statistical system performance analysis

    NASA Technical Reports Server (NTRS)

    Wohl, D. P.

    1967-01-01

    Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.

  16. TH-C-12A-08: New Compact 10 MV S-Band Linear Accelerator: 3D Finite-Element Design and Monte Carlo Dose Simulations

    SciTech Connect

    Baillie, D; St Aubin, J; Fallone, B; Steciw, S

    2014-06-15

    Purpose: To design a new compact S-band linac waveguide capable of producing a 10 MV x-ray beam, while maintaining the length (27.5 cm) of current 6 MV waveguides. This will allow higher x-ray energies to be used in our linac-MRI systems with the same footprint. Methods: The finite element software COMSOL Multiphysics was used to design an accelerator cavity matching one published in an experimental breakdown study, to ensure that our modeled cavities do not exceed the threshold electric fields published. This cavity was used as the basis for designing an accelerator waveguide, where each cavity of the full waveguide was tuned to resonate at 2.997 GHz by adjusting the cavity diameter. The RF field solution within the waveguide was calculated and, together with an electron-gun phase space generated using Opera3D/SCALA, was input into the electron-tracking software PARMELA to compute the electron phase space striking the x-ray target. This target phase space was then used in BEAM Monte Carlo simulations to generate percent depth dose curves for the new linac, which were then used to re-optimize the waveguide geometry. Results: The shunt impedance, Q-factor, and peak-to-mean electric field ratio were matched to those published for the breakdown study to within 0.1% error. After tuning the full waveguide, the peak surface fields are calculated to be 207 MV/m, 13% below the breakdown threshold. The simulated beam has a d-max depth of 2.42 cm and a D10/20 value of 1.59, compared to 2.45 cm and 1.59, respectively, for a simulated Varian 10 MV linac, with a bremsstrahlung production efficiency 20% lower than that of the Varian linac. Conclusion: This work demonstrates the design of a functional 27.5 cm waveguide producing 10 MV photons with characteristics similar to a Varian 10 MV linac.

  17. Monte Carlo estimation of scatter effects on quantitative myocardial blood flow and perfusable tissue fraction using 3D-PET and 15O-water

    NASA Astrophysics Data System (ADS)

    Hirano, Yoshiyuki; Koshino, Kazuhiro; Watabe, Hiroshi; Fukushima, Kazuhito; Iida, Hidehiro

    2012-11-01

    In clinical cardiac positron emission tomography using 15O-water, significant tracer accumulation is observed not only in the heart but also in the liver and lung, which are partially outside the field-of-view. In this work, we investigated the effects of scatter on quantitative myocardial blood flow (MBF) and perfusable tissue fraction (PTF) by a precise Monte Carlo simulation (Geant4) and a numerical human model. We assigned activities to the heart, liver, and lung of the human model with varying ratios of organ activities according to an experimental time activity curve and created dynamic sinograms. The sinogram data were reconstructed by filtered backprojection. By comparing a scatter-corrected image (SC) with a true image (TRUE), we evaluated the accuracy of the scatter correction. TRUE was reconstructed using a scatter-eliminated sinogram, which can be obtained only in simulations. A scatter-uncorrected image (W/O SC) and an attenuation-uncorrected image (W/O AC) were also constructed. Finally, we calculated MBF and PTF with a single tissue-compartment model for the four types of images. As a result, scatter was corrected accurately, and the MBFs derived from all types of images were consistent with the MBF obtained from TRUE. Meanwhile, only the PTF of the SC was in agreement with the PTF of TRUE. From the simulation results, we concluded that quantitative MBF is less affected by scatter and absorption in 3D-PET using 15O-water. However, scatter correction is essential for accurate PTF.

  18. Monte Carlo simulations of GeoPET experiments: 3D images of tracer distributions (18F, 124I and 58Co) in Opalinus clay, anhydrite and quartz

    NASA Astrophysics Data System (ADS)

    Zakhnini, Abdelhamid; Kulenkampff, Johannes; Sauerzapf, Sophie; Pietrzyk, Uwe; Lippmann-Pipke, Johanna

    2013-08-01

    Understanding conservative fluid flow and reactive tracer transport in soils and rock formations requires quantitative transport visualization methods in 3D+t. After a decade of research and development we established GeoPET as a non-destructive method with unrivalled sensitivity and selectivity, and with due spatial and temporal resolution, by applying Positron Emission Tomography (PET), a nuclear medicine imaging method, to dense rock material. Requirements for reaching the physical limit of image resolution of nearly 1 mm are (a) a high-resolution PET camera, like our ClearPET scanner (Raytest), and (b) appropriate correction methods for scatter and attenuation of 511 keV photons in the dense geological material. The latter are by far more significant in dense geological material than in human and small-animal body tissue (water). Here we present data from Monte Carlo simulations (MCS) reflecting selected GeoPET experiments. The MCS consider all involved nuclear physical processes of the measurement with the ClearPET system and allow us to quantify the sensitivity of the method and the scatter fractions in geological media as a function of material (quartz, Opalinus clay and anhydrite compared to water), PET isotope (18F, 58Co and 124I), and geometric system parameters. The synthetic data sets obtained by MCS are the basis for detailed performance assessment studies allowing for image quality improvements. A scatter correction method is applied exemplarily by subtracting projections of simulated scattered coincidences from experimental data sets prior to image reconstruction with an iterative reconstruction process.

  19. A comparison of Bayesian and Monte Carlo sensitivity analysis for unmeasured confounding.

    PubMed

    McCandless, Lawrence C; Gustafson, Paul

    2017-04-06

    Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results. Both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Modeling the impact of prostate edema on LDR brachytherapy: a Monte Carlo dosimetry study based on a 3D biphasic finite element biomechanical model

    NASA Astrophysics Data System (ADS)

    Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.

    2017-03-01

    Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken under consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model’s computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. The introduction of the edema dynamics in Day30 dosimetry shows a significant global dose overestimation identified on the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy accounting for post-implant dose alterations during the planning procedure.

  1. Shielding analysis of proton therapy accelerators: a demonstration using Monte Carlo-generated source terms and attenuation lengths.

    PubMed

    Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng

    2015-05-01

    Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead) were generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and a proper use of the data set were demonstrated through a practical case study, in which shielding analysis on a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates for the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to a quick shielding design and dose evaluation for proton therapy accelerators.
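
    The point-source line-of-sight model the data set is meant to feed has a simple closed form: the dose rate behind a shield falls off with the inverse square of the distance and exponentially with slab thickness in attenuation lengths. A sketch with illustrative numbers (not values from the paper's data set):

```python
# Point-source line-of-sight estimate (illustrative numbers only):
# dose = H0 * exp(-d / lam) / r^2.
import numpy as np

def transmitted_dose(H0, r, d, lam):
    """H0: source term normalized to 1 m, r: target distance (m),
    d: shield thickness along the line of sight (m),
    lam: attenuation length (m)."""
    return H0 * np.exp(-d / lam) / r**2

# e.g. a 2 m concrete wall at 5 m, with an assumed 0.5 m attenuation length
print(transmitted_dose(H0=1.0, r=5.0, d=2.0, lam=0.5))
```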

  2. Identification of Thyroid Receptor Ant/Agonists in Water Sources Using Mass Balance Analysis and Monte Carlo Simulation

    PubMed Central

    Shi, Wei; Wei, Si; Hu, Xin-xin; Hu, Guan-jiu; Chen, Cu-lan; Wang, Xin-ru; Giesy, John P.; Yu, Hong-xia

    2013-01-01

    Some synthetic chemicals, which have been shown to disrupt thyroid hormone (TH) function, have been detected in surface waters, and people can potentially be exposed through drinking water. Here, the presence of thyroid-active chemicals and their toxic potential in drinking water sources in the Yangtze River Delta were investigated by use of instrumental analysis combined with a cell-based reporter gene assay. A novel approach was developed that uses Monte Carlo simulation to evaluate the potential risks of measured concentrations of TH agonists and antagonists and to determine the major contributors to observed thyroid receptor (TR) antagonist potency. None of the extracts exhibited TR agonist potency, while 12 of 14 water samples exhibited TR antagonistic potency. The most probable observed antagonist equivalents ranged from 1.4 to 5.6 µg di-n-butyl phthalate (DNBP)/L, which posed a potential risk in water sources. Based on Monte Carlo simulation-related mass balance analysis, DNBP accounted for 64.4% of the entire observed antagonist toxic unit in water sources, while diisobutyl phthalate (DIBP), di-n-octyl phthalate (DNOP) and di-2-ethylhexyl phthalate (DEHP) also contributed. The most probable observed equivalent and most probable relative potency (REP) derived from Monte Carlo simulation are useful for potency comparison and screening of responsible chemicals. PMID:24204563

  3. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis.

    PubMed

    Hoffmann, Max J; Engelmann, Felix; Matera, Sebastian

    2017-01-28

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  4. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  5. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    DOE PAGES

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-31

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. Here we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  6. A Monte Carlo approach to Beryllium-7 solar neutrino analysis with KamLAND

    NASA Astrophysics Data System (ADS)

    Grant, Christopher Peter

    Terrestrial measurements of neutrinos produced by the Sun have been of great interest for over half a century because of their ability to test the accuracy of solar models. The first solar neutrinos detected with KamLAND provided a measurement of the 8B solar neutrino interaction rate above an analysis threshold of 5.5 MeV. This work describes efforts to extend KamLAND's detection sensitivity to solar neutrinos below 1 MeV, more specifically, those produced with an energy of 0.862 MeV from the 7Be electron-capture decay. Many of the difficulties in measuring solar neutrinos below 1 MeV arise from backgrounds caused abundantly by both naturally occurring, and man-made, radioactive nuclides. The primary nuclides of concern were 210Bi, 85Kr, and 39Ar. Since May of 2007, the KamLAND experiment has undergone two separate purification campaigns. During both campaigns a total of 5.4 ktons (about 6440 m^3) of scintillator was circulated through a purification system, which utilized fractional distillation and nitrogen purging. After the purification campaign, reduction factors of 1.5 x 10^3 for 210Bi and 6.5 x 10^4 for 85Kr were observed. The reduction of the backgrounds provided a unique opportunity to observe the 7Be solar neutrino rate in KamLAND. An observation required detailed knowledge of the detector response at low energies, and to accomplish this, a full detector Monte Carlo simulation, called KLG4sim, was utilized. The optical model of the simulation was tuned to match the detector response observed in data after purification, and the software was optimized for the simulation of internal backgrounds used in the 7Be solar neutrino analysis. The results of this tuning and estimates from simulations of the internal backgrounds and external backgrounds caused by radioactivity on the detector components are presented. The first KamLAND analysis based on Monte Carlo simulations in the energy region below 2 MeV is shown here. The comparison of the chi2 between the null

  7. Monte Carlo simulations for analysis and design of nuclear isomer experiments

    NASA Astrophysics Data System (ADS)

    Winick, Tristan; Goddard, Brian; Carroll, James

    2014-09-01

    The well-established GEANT4 Monte Carlo code was used to analyze the results from a test of bremsstrahlung-induced nuclear isomer switching and to guide development of an experiment to test nuclear excitation by electron capture (NEEC). Bremsstrahlung-induced experiments have historically been analyzed with the assumption that the photon flux of the bremsstrahlung spectrum at a given energy varies linearly with the spectrum's endpoint. The results obtained with GEANT4 suggest that this assumption is not justified; the revised function differs enough to warrant a re-analysis of the experimental data. This re-analysis has been applied to the switching of the unusually long-lived isomer of 180Ta (T1/2 > 10^16 yr), showing that the energies of its switching states differ by about 30 keV compared to those previously identified. GEANT4 was also employed in the design of a NEEC experiment to test the isomer switching of 93Mo via coupled atomic-nuclear processes. Initial work involved modeling a beam of 93Mo ions incident on a volume of 4He gas and observing the charge exchange process and associated emitted fluorescence. The beam and 4He volume, the ionization trails of the electrons liberated from the 4He atoms, and the subsequent fluorescence were successfully simulated; however, it was found that GEANT4 does not currently support ion charge exchange. Future work will entail either the development of the requisite code for GEANT4, or the use of a different model that can accurately simulate ion charge exchange.

  8. Ascertainment correction for Markov chain Monte Carlo segregation and linkage analysis of a quantitative trait.

    PubMed

    Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E

    2007-09-01

    Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was also found in the estimation of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.

  9. Investing in a robotic milking system: a Monte Carlo simulation analysis.

    PubMed

    Hyde, J; Engel, P

    2002-09-01

    This paper uses Monte Carlo simulation methods to estimate the breakeven value for a robotic milking system (RMS) on a dairy farm in the United States. The breakeven value indicates the maximum amount that could be paid for the robots given the costs of alternative milking equipment and other important factors (e.g., milk yields, prices, length of useful life of technologies). The analysis simulates several scenarios under three herd sizes, 60, 120, and 180 cows. The base-case results indicate that the mean breakeven values are $192,056, $374,538, and $553,671 for each of the three progressively larger herd sizes. These must be compared to the per-unit RMS cost (about $125,000 to $150,000) and the cost of any construction or installation of other equipment that accompanies the RMS. Sensitivity analysis shows that each additional dollar spent on milking labor in the parlor increases the breakeven value by $4.10 to $4.30. Each dollar increase in parlor costs increases the breakeven value by $0.45 to $0.56. Also, each additional kilogram of initial milk production (under a 2x system in the parlor) decreases the breakeven by $9.91 to $10.64. Finally, each additional year of useful life for the RMS increases the per-unit breakeven by about $16,000 while increasing the life of the parlor by 1 yr decreases the breakeven value by between $5,000 and $6,000.
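
    The breakeven logic lends itself to a compact Monte Carlo sketch: draw the uncertain annual cash-flow differences between the RMS and a conventional parlor, discount them over the system life, and read off the distribution of the implied breakeven price. All input distributions below are invented for illustration.

```python
# Toy Monte Carlo breakeven calculation; every number is invented.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
labor_saving = rng.normal(35_000, 5_000, n)   # $/yr milking labor saved
extra_upkeep = rng.normal(8_000, 2_000, n)    # $/yr added RMS costs
yield_change = rng.normal(-3_000, 4_000, n)   # $/yr milk revenue change
life, rate = 10, 0.06                         # useful life (yr), discount

annuity = (1 - (1 + rate) ** -life) / rate    # present-value factor
breakeven = (labor_saving - extra_upkeep + yield_change) * annuity
print(f"mean breakeven robot value: ${breakeven.mean():,.0f} "
      f"(5th-95th pct: ${np.percentile(breakeven, 5):,.0f}-"
      f"${np.percentile(breakeven, 95):,.0f})")
```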

  10. Monte Carlo Analysis of Reservoir Models Using Seismic Data and Geostatistical Models

    NASA Astrophysics Data System (ADS)

    Zunino, A.; Mosegaard, K.; Lange, K.; Melnikova, Y.; Hansen, T. M.

    2013-12-01

    We present a study on the analysis of petroleum reservoir models consistent with seismic data and geostatistical constraints, performed on a synthetic reservoir model. Our aim is to invert directly for the structure and rock bulk properties of the target reservoir zone. Seismology alone is not sufficient to infer the rock facies, porosity, and oil saturation; a rock physics model, which links the unknown properties to the elastic parameters, must also be taken into account. We therefore combine a rock physics model with a simple convolutional approach for seismic waves to invert the "measured" seismograms. To solve this inverse problem, we employ a Markov chain Monte Carlo (MCMC) method, because it offers the possibility to handle non-linearity and complex, multi-step forward models, and provides realistic estimates of uncertainties. However, for large data sets the MCMC method may be impractical because of its very high computational demand. One strategy to face this challenge is to feed the algorithm with realistic models, hence relying on proper prior information. To address this problem, we utilize an algorithm drawn from geostatistics to generate geologically plausible models which represent samples of the prior distribution. The geostatistical algorithm learns the multiple-point statistics from prototype models (in the form of training images), then generates thousands of different models which are accepted or rejected by a Metropolis sampler. To further reduce the computation time, we parallelize the software and run it on multi-core machines. The solution of the inverse problem is then represented by a collection of reservoir models in terms of facies, porosity, and oil saturation, which constitute samples of the posterior distribution. We are finally able to produce probability maps of the properties of interest by performing statistical analysis on the collection of solutions.

  11. Hierarchical Monte Carlo modeling with S-distributions: Concepts and illustrative analysis of mercury contamination in King Mackerel

    SciTech Connect

    Voit, E.O.; Balthis, W.L.; Holser, R.A.

    1995-12-31

    The quantitative assessment of environmental contaminants is a complex process. It involves nonlinear models and the characterization of variables, factors, and parameters that are distributed and dependent on each other. Assessments based on point estimates are easy to perform, but since they are unreliable, Monte Carlo simulations have become a standard procedure. Simulations pose two challenges: They require the numerical characterization of parameter distributions and they do not account for dependencies between parameters. This paper offers strategies for dealing with both challenges. The first part discusses the characterization of data with the S-distribution. This distribution offers several advantages, which include simplicity of numerical analysis, flexibility in shape, and easy computation of quantiles. The second part outlines how the S-distribution can be used for hierarchical Monte Carlo simulations. In these simulations the selection of parameter values occurs sequentially, and each choice depends on the parameter values selected before. The method is illustrated with preliminary simulation analyses that are concerned with mercury contamination in king mackerel (Scomberomorus cavalla). It is demonstrated that the results of such hierarchical simulations are generally different from those of traditional Monte Carlo simulations.

  12. Spray cooling simulation implementing time scale analysis and the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Kreitzer, Paul Joseph

    Spray cooling research is advancing the field of heat transfer and heat rejection in high power electronics. Smaller and more capable electronics packages are producing higher amounts of waste heat, along with smaller external surface areas, and the use of active cooling is becoming a necessity. Spray cooling has shown extremely high levels of heat rejection, of up to 1000 W/cm^2 using water. Simulations of spray cooling are becoming more realistic, but this comes at a price. A previous researcher has used CFD to successfully model a single 3D droplet impact into a liquid film using the level set method. However, the complicated multiphysics occurring during spray impingement and surface interactions increases computation time to more than 30 days. Parallel processing on a 32-processor system has reduced this time tremendously, but still requires more than a day. The present work uses experimental and computational results in addition to numerical correlations representing the physics occurring on a heated impingement surface. The current model represents the spray behavior of a Spraying Systems FullJet 1/8-g spray nozzle. Typical spray characteristics are as follows: flow rate of 1.05x10^-5 m^3/s, normal droplet velocity of 12 m/s, droplet Sauter mean diameter of 48 µm, and heat flux values ranging from approximately 50--100 W/cm^2. This produces non-dimensional numbers of: We 300--1350, Re 750--3500, Oh 0.01--0.025. Numerical and experimental correlations have been identified representing crater formation, splashing, film thickness, droplet size, and spatial flux distributions. A combination of these methods has resulted in a Monte Carlo spray impingement simulation model capable of simulating hundreds of thousands of droplet impingements or approximately one millisecond. A random sequence of droplet impingement locations and diameters is generated, with the proper radial spatial distribution and diameter distribution. Hence the impingement, lifetime

  13. PROMSAR: A backward Monte Carlo spherical RTM for the analysis of DOAS remote sensing measurements

    NASA Astrophysics Data System (ADS)

    Palazzi, E.; Petritoli, A.; Giovanelli, G.; Kostadinov, I.; Bortoli, D.; Ravegnani, F.; Sackey, S. S.

    A correct interpretation of diffuse solar radiation measurements made by Differential Optical Absorption Spectroscopy (DOAS) remote sensors requires the use of radiative transfer models of the atmosphere. The simplest models consider radiation scattering in the atmosphere as a single scattering process. More realistic atmospheric models are those which consider multiple scattering; their application is useful and essential for the analysis of zenith and off-axis measurements regarding the lowest layers of the atmosphere, such as the boundary layer, which are characterized by the highest values of air density and quantities of particles and aerosols acting as scattering nuclei. A new atmospheric model, PROcessing of Multi-Scattered Atmospheric Radiation (PROMSAR), which includes multiple Rayleigh and Mie scattering, has recently been developed at ISAC-CNR. It is based on a backward Monte Carlo technique, which is very suitable for studying the various interactions taking place in a complex and non-homogeneous system like the terrestrial atmosphere. The PROMSAR code calculates the mean path of the radiation within each layer in which the atmosphere is sub-divided, taking into account the large variety of processes that solar radiation undergoes during propagation through the atmosphere. This quantity is then employed to work out the Air Mass Factor (AMF) of several trace gases, to simulate their slant column amounts in zenith and off-axis configurations, and to calculate the weighting functions from which information about the vertical distribution of a gas is obtained using inversion methods. Results from the model, simulations, and comparisons with actual slant column measurements are presented and discussed.

  14. Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model

    NASA Astrophysics Data System (ADS)

    Prakash, Shashi; Kumar, Nitish; Kumar, Subrata

    2016-09-01

    CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly-methyl-methacrylate). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and an acceptable surface finish on the microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. When fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. A few analytical models are available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. A number of variants of transparent PMMA are available on the market with different values of thermophysical properties. Therefore, for applying such analytical models, the values of these thermophysical properties must be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the thermophysical properties of PMMA. The unavailability of exact values of these property parameters restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty at different powers and scanning speeds has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
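
    Monte Carlo uncertainty propagation of this kind follows a fixed pattern: sample the uncertain thermophysical properties, push each sample through the depth model, and summarize the spread of the output. The sketch below uses a generic energy-balance depth expression and assumed property distributions, not the specific model or data of the paper.

```python
# Monte Carlo propagation of PMMA property uncertainty through a generic
# energy-balance depth model; model form and all numbers are assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
P, v, w = 20.0, 0.05, 200e-6           # power (W), speed (m/s), width (m)

rho = rng.normal(1180, 20, n)          # density, kg/m^3
Cp = rng.normal(1470, 80, n)           # specific heat, J/(kg K)
Lv = rng.normal(1.0e6, 1.0e5, n)       # heat of vaporization, J/kg
dT = rng.normal(360, 20, n)            # rise to vaporization temp, K

depth = P / (rho * v * w * (Cp * dT + Lv))    # energy balance, m
print(f"depth = {depth.mean() * 1e6:.0f} um "
      f"+/- {depth.std() * 1e6:.0f} um (1 sigma)")
```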

  15. Methods for modeling non-equilibrium degenerate statistics and quantum-confined scattering in 3D ensemble Monte Carlo transport simulations

    NASA Astrophysics Data System (ADS)

    Crum, Dax M.; Valsaraj, Amithraj; David, John K.; Register, Leonard F.; Banerjee, Sanjay K.

    2016-12-01

    Particle-based ensemble semi-classical Monte Carlo (MC) methods employ quantum corrections (QCs) to address quantum confinement and degenerate carrier populations to model tomorrow's ultra-scaled metal-oxide-semiconductor field-effect transistors. Here, we present the most complete treatment of quantum confinement and carrier degeneracy effects in a three-dimensional (3D) MC device simulator to date, and illustrate their significance through simulation of n-channel Si and III-V FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as considering quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as a function of energy and energy valley. This process is aided by the use of fractional carriers or sub-carriers, which minimizes classical carrier-carrier scattering intrinsically incompatible with degenerate statistics. Quantum-confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as commonly done. However, we use these valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. FinFET simulations are used to illustrate the contributions of each of these QCs. Collectively, these quantum effects can substantially reduce and even eliminate otherwise expected benefits of considered In0.53Ga0.47As FinFETs over otherwise identical

  16. Analysis of the lattice kinetic Monte Carlo method in systems with external fields

    NASA Astrophysics Data System (ADS)

    Lee, Young Ki; Sinno, Talid

    2016-12-01

    The lattice kinetic Monte Carlo (LKMC) method is studied in the context of Brownian particles subjected to drift forces, here principally represented by external fluid flow. LKMC rate expressions for particle hopping are derived that satisfy detailed balance at equilibrium while also providing correct dynamical trajectories in advective-diffusive situations. Error analyses are performed for systems in which collections of particles undergo Brownian motion while also being advected by plug and parabolic flows. We demonstrate how the flow intensity, and its associated drift force, as well as its gradient, each impact the accuracy of the method in relation to reference analytical solutions and Brownian dynamics simulations. Finally, we show how a non-uniform grid that everywhere retains full microscopic detail may be employed to increase the computational efficiency of lattice kinetic Monte Carlo simulations of particles subjected to drift forces arising from the presence of external fields.
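
    A one-dimensional illustration of drift-biased LKMC rates: choosing forward and backward hop rates r± = (D/a^2) exp(±Fa/2kT) satisfies detailed balance while reproducing the expected drift velocity for small bias. The sketch below checks the measured drift of an ensemble of walkers against the small-bias theory; it is a common generic construction, not necessarily the paper's exact rate expressions.

```python
# 1D check of drift-biased hop rates r± = (D/a^2) exp(±F a / 2kT); a
# common construction satisfying detailed balance, used here illustratively.
import numpy as np

rng = np.random.default_rng(8)
D, a, FakT = 1.0, 1.0, 0.2            # diffusivity, spacing, F*a/kT
rp = D / a**2 * np.exp(+FakT / 2)     # hop right
rm = D / a**2 * np.exp(-FakT / 2)     # hop left

n_steps, n_walkers = 1000, 5000
x = np.zeros(n_walkers)
for _ in range(n_steps):
    right = rng.random(n_walkers) < rp / (rp + rm)
    x += np.where(right, a, -a)

t = n_steps / (rp + rm)               # mean elapsed time per walker
print("measured drift:", x.mean() / t, "| small-bias theory:", D * FakT / a)
```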

  17. Analysis of single Monte Carlo methods for prediction of reflectance from turbid media

    PubMed Central

    Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan

    2011-01-01

    Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
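
    The core of the sMC idea for absorption can be sketched directly: simulate photon biographies once in a purely scattering medium, record each photon's total path length, and predict reflectance for any absorption coefficient by reweighting each biography with exp(-mu_a * L). The random-walk model below is deliberately crude (isotropic 1D steps, truncated walks) and omits the mu_s rescaling; it only illustrates the reuse of a single simulation.

```python
# Crude "single Monte Carlo" sketch: one scattering-only simulation,
# reused for several absorption coefficients via per-photon reweighting.
import numpy as np

rng = np.random.default_rng(9)
mu_s, n = 10.0, 200_000                # scattering coeff (1/mm), photons

depth = np.zeros(n)                    # z position (z > 0 inside medium)
path = np.zeros(n)                     # accumulated path length
alive = np.ones(n, dtype=bool)
for _ in range(200):                   # truncated walks, crude 1D model
    step = rng.exponential(1.0 / mu_s, n)
    mu = rng.uniform(-1.0, 1.0, n)     # cosine of polar angle
    path += np.where(alive, step, 0.0)
    depth += np.where(alive, step * mu, 0.0)
    alive &= depth > 0                 # photons crossing z = 0 exit

exited = ~alive
for mu_a in (0.01, 0.1, 1.0):          # reuse the same biographies
    R = np.exp(-mu_a * path[exited]).sum() / n
    print(f"mu_a = {mu_a}: diffuse reflectance ~ {R:.3f}")
```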

  18. Monte carlo analysis of two-photon fluorescence imaging through a scattering medium.

    PubMed

    Blanca, C M; Saloma, C

    1998-12-01

    The behavior of two-photon fluorescence imaging through a scattering medium is analyzed by use of the Monte Carlo technique. The axial and transverse distributions of the excitation photons in the focused Gaussian beam are derived for both isotropic and anisotropic scatterers at different numerical apertures and at various ratios of the scattering depth with the mean free path. The two-photon fluorescence profiles of the sample are determined from the square of the normalized excitation intensity distributions. For the same lens aperture and scattering medium, two-photon fluorescence imaging offers a sharper and less aberrated axial response than that of single-photon confocal fluorescence imaging. The contrast in the corresponding transverse fluorescence profile is also significantly higher. Also presented are results comparing the effects of isotropic and anisotropic scattering media in confocal reflection imaging. The convergence properties of the Monte Carlo simulation are also discussed.

  19. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    SciTech Connect

    Ivanov, A.; Sanchez, V.; Hoogenboom, J. E.

    2012-07-01

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  20. Final Technical Report - Large Deviation Methods for the Analysis and Design of Monte Carlo Schemes in Physics and Chemistry - DE-SC0002413

    SciTech Connect

    Dupuis, Paul

    2014-03-14

    This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
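
    For the second problem class, the canonical large-deviation remedy is exponentially tilted importance sampling. The toy below (ours, not from the report) estimates the tail probability of a sample mean by drawing from the tilted density centered at the rate-function minimizer and reweighting with the likelihood ratio.

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(2)
    n, b, N = 100, 0.5, 20_000       # summands, threshold, Monte Carlo trials

    # P(mean of n i.i.d. N(0,1) > b) ≈ 2.9e-7: hopeless for plain Monte Carlo
    # at this sample size. Large deviation theory picks the tilted density
    # N(theta, 1) with theta = b, the rate-function minimizer on the target set.
    theta = b
    X = rng.normal(theta, 1.0, size=(N, n))
    mean = X.mean(axis=1)
    log_lr = -theta * X.sum(axis=1) + n * theta**2 / 2.0   # dP/dQ per sample
    estimate = np.mean((mean > b) * np.exp(log_lr))

    exact = 0.5 * math.erfc(b * math.sqrt(n / 2.0))
    print(f"importance sampling: {estimate:.3e}   exact: {exact:.3e}")
    ```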

  1. Monte Carlo investigation of the increased radiation deposition due to gold nanoparticles using kilovoltage and megavoltage photons in a 3D randomized cell model.

    PubMed

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2013-07-01

    Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Two GNP simulation scenarios were set up in Geant4: a single 400 nm diameter gold cluster randomly positioned in the cytoplasm and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell including cytoplasm, membrane, and nucleus was simulated. Two Geant4 physics lists were tested: the default Livermore and a custom-built Livermore/DNA hybrid physics list. 10^6 particles were simulated for the 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 μm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra of 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model has demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum. Based on simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p > 0.05) effect on the overall dose

  2. Monte Carlo investigation of the increased radiation deposition due to gold nanoparticles using kilovoltage and megavoltage photons in a 3D randomized cell model

    SciTech Connect

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2013-07-15

    Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4; a single 400 nm diameter gold cluster randomly positioned in the cytoplasm and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell including cytoplasm, membrane, and nucleus was simulated. Two Geant4 physics lists were tested: the default Livermore and custom built Livermore/DNA hybrid physics list. 10^6 particles were simulated at 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 μm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra of 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model has demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum. Based on simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p

  3. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    NASA Astrophysics Data System (ADS)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with added noise and inverted them under two different situations: (1) the noisy data and the covariance matrix for the PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to inclusion
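
    A compact sketch of what 'Method 2' amounts to as described (the convolution forward model, dimensions, and noise level below are our stand-ins): the unknown field is expanded in a few principal components of prior realizations, and a Metropolis sampler explores the reduced coefficient space.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Prior realizations of a 50-point property field with exponential covariance
    dist = np.abs(np.subtract.outer(np.arange(50), np.arange(50)))
    Y = rng.multivariate_normal(np.zeros(50), np.exp(-dist / 10.0), size=500)
    mean = Y.mean(axis=0)
    _, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    k = 5
    basis = Vt[:k].T * (s[:k] / np.sqrt(len(Y)))     # first k principal components

    def G(m):                                        # toy convolution forward model
        kern = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
        return np.convolve(m, kern / kern.sum(), mode="same")

    sigma = 0.01
    m_true = mean + basis @ rng.normal(size=k)
    d_obs = G(m_true) + rng.normal(0.0, sigma, 50)

    def loglike(z):                                  # misfit in the reduced space
        return -0.5 * np.sum((G(mean + basis @ z) - d_obs) ** 2) / sigma**2

    # Metropolis sampling over the k PCA coefficients instead of 50 unknowns
    z, ll, chain = np.zeros(k), loglike(np.zeros(k)), []
    for _ in range(5000):
        z_new = z + 0.1 * rng.normal(size=k)
        ll_new = loglike(z_new)
        if np.log(rng.random()) < ll_new - ll:
            z, ll = z_new, ll_new
        chain.append(mean + basis @ z)               # posterior field samples
    ```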

  4. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    USGS Publications Warehouse

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of
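
    The core of the NSMC construction can be sketched in a few lines (toy dimensions and a stand-in Jacobian; the actual method re-calibrates each field afterwards): random parameter differences are projected onto the null space of the calibration Jacobian so that, to first order, the fit to the observations is preserved.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_obs, n_par, k = 30, 100, 12

    # Stand-in Jacobian of rank k: directions beyond the first k singular
    # values are uninformed by the calibration data (the "null space").
    J = rng.normal(size=(n_obs, k)) @ rng.normal(size=(k, n_par))
    _, _, Vt = np.linalg.svd(J)
    V_null = Vt[k:].T                      # basis for the calibration null space

    p_cal = rng.normal(size=n_par)         # stand-in calibrated parameter field

    def nsmc_field(scale=1.0):
        """One calibration-constrained field: a random parameter difference is
        projected onto the null space, so to first order the fit is intact."""
        dp = scale * rng.normal(size=n_par)
        return p_cal + V_null @ (V_null.T @ dp)

    fields = [nsmc_field() for _ in range(1000)]
    print("max first-order residual change:", np.abs(J @ (fields[0] - p_cal)).max())
    ```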

  5. Behavioral Analysis of Visitors to a Medical Institution's Website Using Markov Chain Monte Carlo Methods.

    PubMed

    Suzuki, Teppei; Tani, Yuji; Ogasawara, Katsuhiko

    2016-07-25

    Consistent with the "attention, interest, desire, memory, action" (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet to select information for their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs for visitor search behavior. At this time, research applying the patient searching behavior model to medical institution website visitors is lacking. We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. We used the website data access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 30, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explaining variable, we built a binomial probit model that allows inspection of the contents of each purpose variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformative prior distribution for this model and determined the visit probability classified by keyword for each category. In the case of the keyword "clinic name," the visit probability to the website, repeated visit to the website, and contents page for medical examination was positive. In the case of the keyword "clinic name and regional name," the

  6. Reassessing benzene risks using internal doses and Monte-Carlo uncertainty analysis.

    PubMed Central

    Cox, L A

    1996-01-01

    Human cancer risks from benzene have been estimated from epidemiological data, with supporting evidence from animal bioassay data. This article reexamines the animal-based risk assessments using physiologically based pharmacokinetic (PBPK) models of benzene metabolism in animals and humans. Internal doses (total benzene metabolites) from oral gavage experiments in mice are well predicted by the PBPK model. Both the data and the PBPK model outputs are also well described by a simple nonlinear (Michaelis-Menten) regression model, as previously used by Bailer and Hoel [Metabolite-based internal doses used in risk assessment of benzene. Environ Health Perspect 82:177-184 (1989)]. Refitting the multistage model family to internal doses changes the maximum-likelihood estimate (MLE) dose-response curve for mice from linear-quadratic to purely cubic, so that low-dose risk estimates are smaller than in previous risk assessments. In contrast to Bailer and Hoel's findings using interspecies dose conversion, the use of internal dose estimates for humans from a PBPK model reduces estimated human risks at low doses. Sensitivity analyses suggest that the finding of a nonlinear MLE dose-response curve at low doses is robust to changes in internal dose definitions and more consistent with epidemiological data than earlier risk models. A Monte-Carlo uncertainty analysis based on maximum-entropy probabilities and Bayesian conditioning is used to develop an entire probability distribution for the true but unknown dose-response function. This allows the probability of a positive low-dose slope to be quantified: It is about 10%. An upper 95% confidence limit on the low-dose slope of excess risk is also obtained directly from the posterior distribution and is similar to previous q1* values. This approach suggests that the excess risk due to benzene exposure may be nonexistent (or even negative) at sufficiently low doses. Two types of biological information about benzene effects

  7. JMCT Monte Carlo Simulation Analysis of BEAVRS and SG-III Shielding

    NASA Astrophysics Data System (ADS)

    Li, Deng; Gang, Li; Baoyin, Zhang; Danhua, Shangguan; Yan, Ma; Zehua, Hu; Yuanguang, Fu; Rui, Li; Dunfu, Shi; Xiaoli, Hu; Wei, Wang

    2017-09-01

    JMCT is a general-purpose Monte Carlo neutron-photon-electron (or coupled neutron/photon/electron) transport code with both continuous-energy and multigroup modes. The code has almost all the functions of a general Monte Carlo code, including various variance reduction techniques, multi-level parallelism with MPI and OpenMP, domain decomposition, and on-the-fly Doppler broadening. In particular, JMCT supports depletion calculations with the TTA and CRAM methods. Input is built from CAD models and results are presented visually. The numbers of geometry zones, materials, tallies, and depletion zones, the available memory, and the period of the random number generator are large enough to suit a wide variety of problems. This paper describes the application of the JMCT Monte Carlo code to the simulation of the BEAVRS and SG-III shielding models. For the BEAVRS model, the JMCT results for the HZP status agree closely with MC21, OpenMC, and experiment. We also performed a coupled neutron transport and depletion calculation at full power; results for ten depletion steps were obtained, with more than 1.5 million depletion regions and 120 thousand processors used. Because the calculation was not coupled to thermal hydraulics, the result is for reference only. Finally, we performed detailed modelling of the Chinese SG-III laser facility, in which the number of irregular geometry bodies exceeds 10,000. The flux distribution in the radiation shielding was obtained from mesh tallies for the deuterium-tritium fusion reaction. These results demonstrate the high fidelity of JMCT.

  8. Image guided radiation therapy applications for head and neck, prostate, and breast cancers using 3D ultrasound imaging and Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Fraser, Danielle

    In radiation therapy an uncertainty in the delivered dose always exists because anatomic changes are unpredictable and patient specific. Image guided radiation therapy (IGRT) relies on imaging in the treatment room to monitor the tumour and surrounding tissue to ensure their prescribed position in the radiation beam. The goal of this thesis was to determine the dosimetric impact of target misalignment for three cancer sites caused by common setup errors: organ motion, tumour tissue deformation, changes in body habitus, and treatment planning errors. For this purpose, a novel 3D ultrasound system (Restitu, Resonant Medical, Inc.) was used to acquire a reference image of the target in the computed tomography simulation room at the time of treatment planning, to acquire daily images in the treatment room at the time of treatment delivery, and to compare the daily images to the reference image. The measured differences in position and volume between daily and reference geometries were incorporated into Monte Carlo (MC) dose calculations. The EGSnrc (National Research Council, Canada) family of codes was used to model Varian linear accelerators and patient specific beam parameters, as well as to estimate the dose to the target and organs at risk under several different scenarios. After validating the necessity of MC dose calculations in the pelvic region, the impact of interfraction prostate motion, and subsequent patient realignment under the treatment beams, on the delivered dose was investigated. For 32 patients it is demonstrated that using 3D conformal radiation therapy techniques and a 7 mm margin, the prescribed dose to the prostate, rectum, and bladder is recovered within 0.5% of that planned when patient setup is corrected for prostate motion, despite the beams interacting with a new external surface and internal tissue boundaries. In collaboration with the manufacturer, the ultrasound system was adapted from transabdominal imaging to neck

  9. Applying Monte-Carlo simulations to optimize an inelastic neutron scattering system for soil carbon analysis.

    PubMed

    Yakubova, Galina; Kavetskiy, Aleksandr; Prior, Stephen A; Torbert, H Allen

    2017-10-01

    Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate the INS system performance characteristics [minimal detectable level (MDL), sensitivity] for soil carbon measurement. The INS system model with the best performance characteristics was determined based on the MC simulation results. Measurements of the MDL using an experimental prototype based on this model demonstrated good agreement with the simulated data. This prototype will be used as the base engineering design for a new INS system. Copyright © 2017. Published by Elsevier Ltd.

  10. A Monte Carlo analysis of health risks from PCB-contaminated mineral oil transformer fires.

    PubMed

    Eschenroeder, A Q; Faeder, E J

    1988-06-01

    The objective of this study is the estimation of health hazards due to the inhalation of combustion products from accidental mineral oil transformer fires. Calculations of production, dispersion, and subsequent human intake of polychlorinated dibenzofurans (PCDFs) provide us with exposure estimates. PCDFs are believed to be the principal toxic products of the pyrolysis of polychlorinated biphenyls (PCBs) sometimes found as contaminants in transformer mineral oil. Cancer burdens and birth defect hazard indices are estimated from population data and exposure statistics. Monte Carlo-derived variational factors emphasize the statistics of uncertainty in the estimates of risk parameters. Community health issues are addressed and risks are found to be insignificant.

  11. Monte Carlo analysis of lobular gas-surface scattering in tubes applied to thermal transpiration

    NASA Technical Reports Server (NTRS)

    Smith, J. D.; Raquet, C. A.

    1972-01-01

    A model of rarefied gas flow in tubes was developed which combines a lobular distribution with diffuse reflection at the wall. The model with Monte Carlo techniques was used to explain previously observed deviations in the free molecular thermal transpiration ratio which suggest molecules can have a greater tube transmission probability in a hot-to-cold direction than in a cold-to-hot direction. The model yields correct magnitudes of transmission probability ratios for helium in Pyrex tubing (1.09 to 1.14), and some effects of wall-temperature distribution, tube surface roughness, tube dimensions, gas temperature, and gas molecular mass.
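
    For orientation, a bare-bones test-particle calculation of the tube transmission probability with purely diffuse walls looks like the sketch below (ours; the paper's lobular reflection model replaces the cosine law used here).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def cosine_dir(nrm, t1, t2):
        """Direction sampled from the cosine (diffuse) law about unit normal nrm."""
        ct = np.sqrt(rng.random()); st = np.sqrt(1.0 - ct * ct)
        az = 2.0 * np.pi * rng.random()
        return ct * nrm + st * (np.cos(az) * t1 + np.sin(az) * t2)

    def tube_transmission(L=4.0, n=20_000):
        """Free-molecular transmission probability of a unit-radius tube of
        length L with purely diffuse wall re-emission (the Clausing factor)."""
        transmitted = 0
        for _ in range(n):
            r, phi = np.sqrt(rng.random()), 2.0 * np.pi * rng.random()
            p = np.array([r * np.cos(phi), r * np.sin(phi), 0.0])  # inlet point
            d = cosine_dir(np.array([0.0, 0.0, 1.0]),
                           np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
            while True:
                a = d[0] ** 2 + d[1] ** 2
                b = p[0] * d[0] + p[1] * d[1]
                c = p[0] ** 2 + p[1] ** 2 - 1.0
                t_wall = (-b + np.sqrt(max(b * b - a * c, 0.0))) / a if a > 1e-14 else np.inf
                t_end = (L - p[2]) / d[2] if d[2] > 0 else -p[2] / d[2] if d[2] < 0 else np.inf
                if t_end <= t_wall:             # leaves through inlet or outlet
                    transmitted += d[2] > 0.0
                    break
                p = p + t_wall * d              # wall collision: diffuse re-emission
                nrm = np.array([-p[0], -p[1], 0.0])
                d = cosine_dir(nrm, np.array([-p[1], p[0], 0.0]),
                               np.array([0.0, 0.0, 1.0]))
        return transmitted / n

    print("K(L/R = 4) ≈", tube_transmission())  # tabulated Clausing value ≈ 0.36
    ```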

  12. Comparison of marker types and map assumptions using Markov chain Monte Carlo-based linkage analysis of COGA data.

    PubMed

    Sieh, Weiva; Basu, Saonli; Fu, Audrey Q; Rothstein, Joseph H; Scheet, Paul A; Stewart, William C L; Sung, Yun J; Thompson, Elizabeth A; Wijsman, Ellen M

    2005-12-30

    We performed multipoint linkage analysis of the electrophysiological trait ECB21 on chromosome 4 in the full pedigrees provided by the Collaborative Study on the Genetics of Alcoholism (COGA). Three Markov chain Monte Carlo (MCMC)-based approaches were applied to the provided and re-estimated genetic maps and to five different marker panels consisting of microsatellite (STRP) and/or SNP markers at various densities. We found evidence of linkage near the GABRB1 STRP using all methods, maps, and marker panels. Difficulties encountered with SNP panels included convergence problems and demanding computations.

  13. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    SciTech Connect

    McGraw, David; Hershey, Ronald L.

    2016-06-01

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent’s coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
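
    The wrapper logic described is generic enough to sketch (the constituent names, distributions, and placeholder model below are ours, not the study's):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical spec: per-constituent mean, coefficient of variation, distribution
    spec = {
        "Ca":   {"mean": 1.20, "cv": 0.05, "dist": "normal"},
        "DIC":  {"mean": 4.10, "cv": 0.10, "dist": "lognormal"},
        "d13C": {"mean": -9.0, "cv": 0.08, "dist": "normal"},
    }

    def sample_inputs(spec, rng):
        out = {}
        for name, s in spec.items():
            mu, sd = s["mean"], abs(s["mean"]) * s["cv"]
            if s["dist"] == "lognormal":
                # parameterize so the sampled mean and sd match mu and sd
                s2 = np.log(1.0 + (sd / mu) ** 2)
                out[name] = rng.lognormal(np.log(mu) - s2 / 2.0, np.sqrt(s2))
            else:
                out[name] = rng.normal(mu, sd)
        return out

    def run_model(inputs):
        """Placeholder for the call that writes a NETPATH input file, runs the
        code, and parses the travel time from its output."""
        return 1000.0 * inputs["DIC"] / 4.1 + 50.0 * inputs["d13C"] / -9.0

    times = np.array([run_model(sample_inputs(spec, rng)) for _ in range(10_000)])
    print(f"travel time: {times.mean():.0f} ± {times.std():.0f} yr")
    ```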

  14. Sensitivity analysis of an asymmetric Monte Carlo beam model of a Siemens Primus accelerator.

    PubMed

    Schreiber, Eric C; Sawkey, Daren L; Faddegon, Bruce A

    2012-03-08

    The assumption of cylindrical symmetry in radiotherapy accelerator models can pose a challenge for precise Monte Carlo modeling. This assumption makes it difficult to account for measured asymmetries in clinical dose distributions. We have performed a sensitivity study examining the effect of varying symmetric and asymmetric beam and geometric parameters of a Monte Carlo model for a Siemens PRIMUS accelerator. The accelerator and dose output were simulated using modified versions of BEAMnrc and DOSXYZnrc that allow lateral offsets of accelerator components and lateral and angular offsets for the incident electron beam. Dose distributions were studied for 40 × 40 cm² fields. The resulting dose distributions were analyzed for changes in flatness, symmetry, and off-axis ratio (OAR). The electron beam parameters having the greatest effect on the resulting dose distributions were found to be electron energy and angle of incidence, with dose changes as high as 5% for a 0.25° deflection. Electron spot size and lateral offset of the electron beam were found to have a smaller impact. Variations in photon target thickness were found to have a small effect. Small lateral offsets of the flattening filter caused significant variation to the OAR. In general, the greatest sensitivity to accelerator parameters could be observed for higher energies and off-axis ratios closer to the central axis. Lateral and angular offsets of beam and accelerator components have strong effects on dose distributions, and should be included in any high-accuracy beam model.

  15. Analysis and modeling of localized heat generation by tumor-targeted nanoparticles (Monte Carlo methods)

    NASA Astrophysics Data System (ADS)

    Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan

    2016-04-01

    We investigated tumor-targeted nanoparticles that influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (mean cosine of the scattering angle), the heat generated in the tumor area reaches a critical state that can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of incident photons upon targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical approach by which photon trajectories in the simulation region can be probed accurately.

  16. Excited Rotational States in Doped 4He Clusters: a Diffusion Monte Carlo Analysis

    NASA Astrophysics Data System (ADS)

    Coccia, Emanuele

    2017-03-01

    We report an extension of diffusion Monte Carlo (DMC) to the calculation of the molecular rotational energies by means of the generalized, symmetry-adapted, imaginary-time correlation functions (SAITCFs) originally introduced in the reptation quantum Monte Carlo (RQMC) framework (Škrbić in J Phys Chem A 111:12749, 2007). We studied the a-type and b-type rotational lines of the CO(4He)N clusters with N = 1-8 that correlate, in the dimer limit, with the end-over-end and free-rotor transitions. We compare the SAITCF-DMC results with accurate DVR (for the dimer case), RQMC and other DMC data, and with reference experimental findings (Surin in Phys Rev Lett 101:233401, 2008). A good agreement is generally found, but a systematic underestimation of the SAITCF-DMC rotational energies of the b-type series is observed. Sources of inaccuracy in our theoretical approach and in the computational protocol are discussed and analyzed in detail.

  17. Effect of the T-gate on the performance of recessed HEMTs. A Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Mateos, Javier; González, Tomás; Pardo, Daniel; Hoel, Virginie; Cappy, Alain

    1999-09-01

    A microscopic study of 0.1 µm recessed-gate δ-doped AlInAs/GaInAs HEMTs has been performed by using a semiclassical Monte Carlo device simulation. The geometry and layer structure of the simulated HEMT are completely realistic, including the recessed gate and the δ-doping configuration. The usual T-gate technology is used to improve the device characteristics by reducing the gate resistance. For the first time we take into account in the Monte Carlo simulations the effect of the T-gate and of the dielectric used to passivate the device surface, which considerably affects the electric field distribution inside the device. The measured Id-Vds characteristics of a real device compare favourably with the simulation results. When comparing the complete simulation with the case in which the Poisson equation is solved only inside the semiconductor, we find that even if the static I-V characteristics remain practically unchanged, important differences appear in the dynamic and noise behaviour, reflecting the influence of an additional capacitance.

  18. GUINEVERE experiment: Kinetic analysis of some reactivity measurement methods by deterministic and Monte Carlo codes

    SciTech Connect

    Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.

    2012-07-01

    The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French research agency CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation, by deterministic (the French code ERANOS) and Monte Carlo (the US code MCNPX) calculations, of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)

  19. Analysis of Correlated Coupling of Monte Carlo Forward and Adjoint Histories

    SciTech Connect

    Ueki, Taro; Hoogenboom, J.E.; Kloosterman, J. L.

    2001-02-15

    In Monte Carlo correlated coupling, forward and adjoint particle histories are initiated in exactly opposite directions at an arbitrarily placed surface between a physical source and a physical detector. It is shown that this coupling calculation can become more efficient than standard forward calculations. In many cases, the basic form of correlated coupling is less efficient than standard forward calculations. This inherent inefficiency can be overcome by applying a black absorber perturbation to either the forward or the adjoint problem and by processing the product of batch averages as one statistical entity. The usage of the black absorber is based on the invariance of the response flow integral with a material perturbation in either the physical detector side volume in the forward problem or the physical source side volume in the adjoint problem. The batch-average product processing makes use of a quadratic increase of the nonzero coupled-score probability. All the developments have been done in such a way that improved efficiency schemes available in widely distributed Monte Carlo codes can be applied to both the forward and adjoint simulations. Also, the physical meaning of the black absorber perturbation is interpreted based on surface crossing and is numerically validated. In addition, the immediate reflection at the intermediate surface with a controlled direction change is investigated within the invariance framework. This approach can be advantageous for a void streaming problem.

  20. Monte Carlo analysis of pion contribution to absorbed dose from Galactic cosmic rays

    NASA Astrophysics Data System (ADS)

    Aghara, S. K.; Blattnig, S. R.; Norbury, J. W.; Singleterry, R. C.

    2009-04-01

    Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV-GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.

  1. Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, S.K.; Blattnig, S.R.; Norbury, J.W.; Singleterry, R.C.

    2009-01-01

    Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.

  2. Oxygen distribution in tumors: a qualitative analysis and modeling study providing a novel Monte Carlo approach.

    PubMed

    Lagerlöf, Jakob H; Kindblom, Jon; Bernhardt, Peter

    2014-09-01

    To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower end, due to anoxia, but smaller tumors showed
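
    The sampling step described, reading oxygen tension by trilinear interpolation in a precomputed dataset, can be sketched as follows (the grid, table values, and hypoxia threshold are invented for illustration):

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    rng = np.random.default_rng(7)

    # Hypothetical 3x3x3 table of pO2 (mmHg) over the three simulated settings
    # (normalized blood velocity, vessel proximity, inflowing pO2).
    grid = np.array([0.0, 0.5, 1.0])
    table = rng.uniform(0.0, 40.0, size=(3, 3, 3))   # stand-in for the DOC fits

    interp = RegularGridInterpolator((grid, grid, grid), table)

    # Draw random condition triples and read pO2 by trilinear interpolation;
    # correlations between the three variables would be imposed at this step.
    pts = rng.uniform(0.0, 1.0, size=(100_000, 3))
    po2 = interp(pts)
    print("hypoxic fraction (pO2 < 2.5 mmHg):", np.mean(po2 < 2.5))
    ```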

  3. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two of such high process oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van-Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions to it. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from an equally distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R2), bias and model efficiency according to Nash Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matters of roots, storages, stems and leaves. Best parameter sets resulted in NSE of 0.57 for the simulation of soil moisture across all three sites. The shape
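
    A compact sketch of the GLUE procedure itself, with a toy two-parameter model standing in for the 19-parameter CMF-PMF coupling and NSE as the likelihood measure:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def model(params, t):
        """Toy stand-in for the coupled CMF-PMF model: two of 19 parameters."""
        a, b = params
        return a * np.exp(-b * t)

    t = np.linspace(0.0, 10.0, 50)
    obs = model((2.0, 0.3), t) + rng.normal(0.0, 0.1, t.size)

    def nse(sim, obs):
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # GLUE: uniform random sampling of the parameter space, keep the
    # 'behavioural' sets above a likelihood threshold.
    samples = rng.uniform([0.5, 0.05], [4.0, 1.0], size=(20_000, 2))
    scores = np.array([nse(model(p, t), obs) for p in samples])
    behavioural = samples[scores > 0.5]
    lo, hi = np.percentile(behavioural, [2.5, 97.5], axis=0)
    print("95% parameter ranges:", list(zip(lo.round(2), hi.round(2))))
    ```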

  4. Monte Carlo based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2013-12-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two of such high process oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van-Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions to it. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from an equally distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R2), bias and model efficiency according to Nash Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matters of roots, storages, stems and leaves. Best parameter sets resulted in NSE of 0.57 for the simulation of soil moisture across all three sites. The

  5. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    SciTech Connect

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-09-15

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the

  6. Förster resonance energy transfer and trapping in selected systems: analysis by Monte-Carlo simulation.

    PubMed

    Bojarski, P; Synak, A; Kułak, L; Rangelowa-Jankowska, S; Kubicki, A; Grobelna, B

    2012-01-01

    The Monte-Carlo simulation method is described and applied as an efficient tool to analyze experimental data in the presence of energy transfer in selected systems where the use of analytical approaches is limited or even impossible. Several numerical and physical problems accompanying Monte-Carlo simulation are addressed. It is shown that Monte-Carlo simulation makes it possible to obtain the orientation factor in partly ordered systems and other important energy transfer parameters unavailable directly from experiments. It is also shown how Monte-Carlo simulation can predict important features of energy transport, such as its directional character in ordered media.
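
    As a hedged illustration of this kind of simulation (ours; the orientation factor is folded into the Förster radius R0 here, whereas the paper samples orientations explicitly in partly ordered systems):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def transfer_efficiency(c_acc, R0=5.0, box=50.0, n_donors=2000):
        """MC estimate of Förster transfer efficiency for donors among randomly
        placed acceptors, with per-pair rates k_T = (1/tau)*(R0/r)^6."""
        n_acc = rng.poisson(c_acc * box**3)
        acc = rng.uniform(0.0, box, size=(n_acc, 3))
        eff = 0.0
        for _ in range(n_donors):
            d = rng.uniform(0.0, box, size=3)
            dr = np.abs(acc - d)
            dr = np.minimum(dr, box - dr)            # periodic minimum image
            r2 = np.sum(dr**2, axis=1)
            k_t = np.sum((R0**2 / r2) ** 3)          # sum_j (R0/r_j)^6, in 1/tau units
            eff += k_t / (1.0 + k_t)                 # transfer vs. radiative decay
        return eff / n_donors

    for c in (1e-4, 1e-3):
        print(f"c = {c:g} nm^-3: E = {transfer_efficiency(c):.3f}")
    ```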

  7. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    SciTech Connect

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  8. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    PubMed

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
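
    The ANOVA identity at the heart of the method is that the variance of the run means equals the input-uncertainty (PSA) variance plus the within-run patient-level variance divided by the number of patients per run, so the PSA variance can be estimated by subtraction. A toy sketch (model and numbers ours):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def patient_level_model(theta, n_patients, rng):
        """Toy micro-simulation: net benefit per patient given inputs theta."""
        mu = 1000.0 * theta[0] - 200.0 * theta[1]
        return mu + rng.normal(0.0, 500.0, n_patients)   # patient-level noise

    N, n = 200, 100                                      # PSA runs x patients per run
    thetas = rng.uniform([0.5, 0.0], [1.0, 1.0], size=(N, 2))
    runs = np.array([patient_level_model(th, n, rng) for th in thetas])
    run_means = runs.mean(axis=1)

    # Var(run means) = Var_between + Var_within / n, so the input-uncertainty
    # (PSA) variance is estimated by subtraction.
    within = runs.var(axis=1, ddof=1).mean()
    var_psa = run_means.var(ddof=1) - within / n
    print(f"mean net benefit {run_means.mean():.1f}; PSA variance {var_psa:.1f}")
    ```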

  9. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  10. Asymptotic analysis of the spatial discretization of radiation absorption and re-emission in Implicit Monte Carlo

    SciTech Connect

    Densmore, Jeffery D.

    2011-02-20

    We perform an asymptotic analysis of the spatial discretization of radiation absorption and re-emission in Implicit Monte Carlo (IMC), a Monte Carlo technique for simulating nonlinear radiative transfer. Specifically, we examine the approximation of absorption and re-emission by a spatially continuous artificial-scattering process and either a piecewise-constant or piecewise-linear emission source within each spatial cell. We consider three asymptotic scalings representing (i) a time step that resolves the mean-free time, (ii) a Courant limit on the time-step size, and (iii) a fixed time step that does not depend on any asymptotic scaling. For the piecewise-constant approximation, we show that only the third scaling results in a valid discretization of the proper diffusion equation, which implies that IMC may generate inaccurate solutions with optically large spatial cells if time steps are refined. However, we also demonstrate that, for a certain class of problems, the piecewise-linear approximation yields an appropriate discretized diffusion equation under all three scalings. We therefore expect IMC to produce accurate solutions for a wider range of time-step sizes when the piecewise-linear instead of piecewise-constant discretization is employed. We demonstrate the validity of our analysis with a set of numerical examples.

  11. A Monte Carlo Power Analysis of Traditional Repeated Measures and Hierarchical Multivariate Linear Models in Longitudinal Data Analysis.

    PubMed

    Fang, Hua; Brooks, Gordon P; Rizzo, Maria L; Espy, Kimberly A; Barcikowski, Robert S

    2008-01-01

    The power properties of traditional repeated measures and hierarchical linear models have not been clearly determined in the balanced design for longitudinal studies in the current literature. A Monte Carlo power analysis of traditional repeated measures and hierarchical multivariate linear models is presented under three variance-covariance structures. Results suggest that traditional repeated measures have higher power than hierarchical linear models for main effects, but lower power for interaction effects. Significant power differences are also exhibited when power is compared across different covariance structures. Results also supplement more comprehensive empirical indexes for estimating model precision via bootstrap estimates and the approximate power for both main effects and interaction tests under standard model assumptions.
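
    A minimal Monte Carlo power calculation in this spirit (ours: a paired contrast under a compound-symmetry covariance, standing in for the full repeated-measures F test):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)

    def simulate_power(n_subj=30, n_times=4, effect=0.5, n_rep=2000, alpha=0.05):
        """MC power of comparing first vs. last time point under a
        compound-symmetry covariance with correlation rho = 0.5."""
        rho, hits = 0.5, 0
        cov = rho + (1.0 - rho) * np.eye(n_times)   # unit variance, equal correlation
        means = np.linspace(0.0, effect, n_times)   # linear time trend
        for _ in range(n_rep):
            y = rng.multivariate_normal(means, cov, size=n_subj)
            t, p = stats.ttest_rel(y[:, -1], y[:, 0])
            hits += p < alpha
        return hits / n_rep

    print("power ≈", simulate_power())
    ```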

  12. Photoelectric Franck-Hertz experiment and its kinetic analysis by Monte Carlo simulation.

    PubMed

    Magyar, Péter; Korolov, Ihor; Donkó, Zoltán

    2012-05-01

    The electrical characteristics of a photoelectric Franck-Hertz cell are measured in argon gas over a wide range of pressures, covering conditions where elastic collisions play an important role, as well as conditions where ionization becomes significant. Photoelectron pulses are induced by the fourth-harmonic UV light of a diode-pumped Nd:YAG laser. The electron kinetics, which is far more complex than the naive picture of the Franck-Hertz experiment suggests, is analyzed via Monte Carlo simulation. The computations provide the electrical characteristics of the cell, the energy and velocity distribution functions, and the transport parameters of the electrons, as well as the rate coefficients of different elementary processes. A good agreement is obtained between the cell's measured and calculated electrical characteristics, the peculiarities of which are understood by the simulation studies.

  13. Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Boyd, Iain D.

    1991-01-01

    A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator and the rate transition is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.

  14. A Monte Carlo template based analysis for air-Cherenkov arrays

    NASA Astrophysics Data System (ADS)

    Parsons, R. D.; Hinton, J. A.

    2014-04-01

    We present a high-performance event reconstruction algorithm: an Image Pixel-wise fit for Atmospheric Cherenkov Telescopes (ImPACT). The reconstruction algorithm is based around the likelihood fitting of camera pixel amplitudes to an expected image template. A maximum likelihood fit is performed to find the best-fit shower parameters. A related reconstruction algorithm has already been shown to provide significant improvements over traditional reconstruction for both the CAT and H.E.S.S. experiments. We demonstrate a significant improvement to the template generation step of the procedure, by the use of a full Monte Carlo air shower simulation in combination with a ray-tracing optics simulation to more accurately model the expected camera images. This reconstruction step is combined with an MVA-based background rejection.
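
    The core of such a template fit can be sketched in a few lines: compare pixel amplitudes with an expected image via a Poisson log-likelihood and maximize over the shower parameters. The 2-D Gaussian "template" and all numbers below are illustrative stand-ins for the Monte Carlo shower templates described above.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)

        xs, ys = np.meshgrid(np.arange(20), np.arange(20))

        def template(x0, y0, amp):
            """Hypothetical image template: a 2-D Gaussian plus a flat pedestal."""
            return amp * np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / 8.0) + 0.5

        truth = (9.3, 11.7, 40.0)
        image = rng.poisson(template(*truth))        # a simulated camera image

        def neg_log_like(p):
            mu = template(*p)
            return np.sum(mu - image * np.log(mu))   # Poisson NLL up to a constant

        fit = minimize(neg_log_like, x0=(10.0, 10.0, 30.0), method="Nelder-Mead")
        print(fit.x)    # best-fit (x0, y0, amplitude), close to `truth`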

  15. Qualitative analysis of irregular fields delivered with dual electron multileaf collimator: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Inyang, Samuel Okon; Chamberlain, Alan

    2016-03-01

    The use of a dual electron multileaf collimator (eMLC) to collimate therapeutic electron beams without the use of cutouts has previously been shown to be feasible. Further Monte Carlo simulations were performed in this study to verify the nature and appearance of the isodose distributions produced in a water phantom by irregular electron beams delivered with the eMLC. The electron fields used in this study were selected to reflect those used in electron beam therapy. The results show that the isodose distribution in a water phantom obtained from the simulation of irregular electron beams through the eMLC conforms to the pattern of the eMLC field used to deliver the beam.

  17. Monte Carlo minicell approach for a detailed MOX fuel-pin power profile analysis

    SciTech Connect

    Chang, G.S.; Ryskamp, J.M.

    1997-12-01

    The U.S. Department of Energy (DOE) is pursuing two options to dispose of surplus weapons-grade plutonium (WGPu). One option is to burn the WGPu in a mixed-oxide (MOX) fuel form in light water reactors (LWRs). A significant challenge is to demonstrate that the differences between the WG and reactor-grade (RG) MOX fuel are minimal, and therefore, the commercial MOX experience base is applicable. MOX fuel will be irradiated in the Advanced Test Reactor (ATR) to investigate this assertion. Detailed power distributions throughout the MOX pins are required to determine temperature distributions. The purpose of this work is to develop a new Monte Carlo procedure for accurately determining power distributions in fuel pins located in the ATR reflector. Conventional LWR methods are not appropriate because of the unique ATR geometry.

  18. Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation

    PubMed Central

    Ding, Chizhu; Shi, Shuning; Chen, Jianjun; Wei, Wei; Tan, Zuojun

    2015-01-01

    The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be separately considered with different parameters to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity were discussed. Accurate MC simulations may result in better insight into light propagation in stone fruit and aid in achieving the optimal fruit quality inspection without extensive experimental measurements. PMID:26469695
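
    The core of any such light-transport calculation is a photon random walk. The sketch below is a deliberately minimal single-medium, isotropic-scattering version with illustrative coefficients and no boundaries; the paper's model is 3-D, layered (skin/flesh/core) and anisotropic.

        import numpy as np

        rng = np.random.default_rng(4)

        mu_a, mu_s = 0.05, 10.0     # absorption / scattering coefficients (1/mm)
        mu_t = mu_a + mu_s

        def one_photon(max_events=10000):
            """Isotropic walk in an infinite medium; depth where the photon dies."""
            pos = np.zeros(3)
            direction = np.array([0.0, 0.0, 1.0])    # launched straight down
            for _ in range(max_events):
                pos += direction * rng.exponential(1.0 / mu_t)
                if rng.random() < mu_a / mu_t:       # absorbed at this event
                    return pos[2]
                v = rng.normal(size=3)               # new isotropic direction
                direction = v / np.linalg.norm(v)
            return pos[2]

        depths = np.array([one_photon() for _ in range(2000)])
        print("mean absorption depth:", depths.mean(), "mm")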

  19. A Monte Carlo analysis of the liquid xenon TPC as gamma ray telescope

    NASA Technical Reports Server (NTRS)

    Aprile, E.; Bolotnikov, A.; Chen, D.; Mukherjee, R.

    1992-01-01

    Extensive Monte Carlo modeling of a coded aperture x ray telescope based on a high resolution liquid xenon TPC has been performed. Results on efficiency, background reduction capability and source flux sensitivity are presented. We discuss in particular the development of a reconstruction algorithm for events with multiple interaction points. From the energy and spatial information, the kinematics of Compton scattering is used to identify and reduce background events, as well as to improve the detector response in the few MeV region. Assuming a spatial resolution of 1 mm RMS and an energy resolution of 4.5 percent FWHM at 1 MeV, the algorithm is capable of reducing by an order of magnitude the background rate expected at balloon altitude, thus significantly improving the telescope sensitivity.

  20. Analysis of quantum Monte Carlo dynamics for quantum adiabatic evolution in infinite-range spin systems

    NASA Astrophysics Data System (ADS)

    Inoue, Jun-Ichi

    2011-03-01

    We analytically derive deterministic equations of order parameters such as spontaneous magnetization in infinite-range quantum spin systems obeying quantum Monte Carlo dynamics. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. We discuss several possible applications of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we discuss ground-state searching for infinite-range random spin systems via quantum adiabatic evolution. We were financially supported by a Grant-in-Aid for Scientific Research (C) of the Japan Society for the Promotion of Science, No. 22500195.
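
    A classical toy version of this programme, with illustrative parameters: Glauber single-spin-flip Monte Carlo dynamics of the infinite-range Ising ferromagnet, whose magnetization obeys the deterministic mean-field equation dm/dt = -m + tanh(beta*J*m), of the kind derived above (the quantum case adds Trotter slices).

        import numpy as np

        rng = np.random.default_rng(5)

        N, J, beta = 4000, 1.0, 1.5
        s = np.where(rng.random(N) < 0.55, 1, -1)    # slightly magnetized start
        M = s.sum()

        m_mc = []
        for sweep in range(30):                      # one sweep ~ one time unit
            for _ in range(N):
                i = rng.integers(N)
                h = J * M / N                        # mean-field local field
                dE = 2.0 * s[i] * h
                if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):  # Glauber rate
                    M -= 2 * s[i]
                    s[i] = -s[i]
            m_mc.append(M / N)

        # Euler integration of the deterministic order-parameter equation
        m, dt = 0.1, 0.05
        for _ in range(int(30 / dt)):
            m += dt * (-m + np.tanh(beta * J * m))
        print("MC magnetization:", m_mc[-1], " deterministic fixed point:", m)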

  1. Magnetic force imaging of a chain of biogenic magnetite and Monte Carlo analysis of tip-particle interaction

    NASA Astrophysics Data System (ADS)

    Körnig, André; Hartmann, Markus A.; Teichert, Christian; Fratzl, Peter; Faivre, Damien

    2014-06-01

    Magnetotactic bacteria form chains of magnetite nanoparticles that serve the organism as navigation tools. The magnetic anisotropy of the superstructure makes the chain an ideal model to study the magnetic properties of such an organization. Magnetic force microscopy (MFM) is currently the technique of choice for the visualization of magnetic nanostructures; however, it does not enable the quantitative measurement of magnetic properties, since the interactions between the MFM probe and the magnetic sample are complex and not yet fully understood. Here we present an MFM study of such a chain of biological magnetite nanoparticles. We combined experimental and theoretical (Monte Carlo simulation) analyses of the sample, and investigated the size and orientation of the magnetic moments of the single magnetic particles in the chain. Monte Carlo simulations were used to calculate the influence of the magnetic tip on the configuration of the sample. The advantage of this procedure is that the analysis does not require any a priori knowledge of the properties of the sample. The magnetic properties of the tip and of the magnetosomes are instead varied in the calculations until the phase profiles of the simulated MFM images achieve a best match with the experimental ones. We hope our results will open the door to a better quantification of MFM images, and possibly a better understanding of the biological process in situ.

  2. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    PubMed

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and estimation methodology based on forward-projection adaptive Markov chain Monte Carlo is developed in order to calibrate a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a common sexually transmitted infection with more than 100 types currently known, and the two types studied in this paper, types 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing-matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing-matrix parameters related to assortativity. Finally, we explore the ability of an extension of the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward-projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
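
    A minimal sketch of an adaptive random-walk Metropolis sampler of the kind referred to above (Haario-style proposal-covariance adaptation), run here on a toy 2-D correlated Gaussian target rather than the paper's high-dimensional ODE posterior:

        import numpy as np

        rng = np.random.default_rng(6)

        target_cov = np.array([[1.0, 0.9], [0.9, 1.0]])
        target_prec = np.linalg.inv(target_cov)

        def log_post(x):              # toy stand-in for the ODE-model posterior
            return -0.5 * x @ target_prec @ x

        d, n_iter = 2, 20000
        x, lp = np.zeros(d), 0.0
        chain = np.empty((n_iter, d))
        prop_cov = 0.1 * np.eye(d)
        for t in range(n_iter):
            if t > 1000 and t % 100 == 0:      # Haario-style adaptation step
                prop_cov = (2.38**2 / d) * np.cov(chain[:t].T) + 1e-6 * np.eye(d)
            y = rng.multivariate_normal(x, prop_cov)
            lpy = log_post(y)
            if np.log(rng.random()) < lpy - lp:  # Metropolis accept/reject
                x, lp = y, lpy
            chain[t] = x

        print(np.cov(chain[5000:].T))      # should approach target_cov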

  3. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data-taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  4. Program system for three-dimensional coupled Monte Carlo-deterministic shielding analysis with application to the accelerator-based IFMIF neutron source

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Fischer, U.

    2005-10-01

    A program system for three-dimensional coupled Monte Carlo-deterministic shielding analysis has been developed to solve problems with complex geometry and bulk shields by integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT and a coupling interface program. A newly proposed mapping approach is implemented in the interface program to calculate the angular flux distribution from the scored Monte Carlo particle tracks and to generate the boundary source file for use by TORT. Test calculations were performed with comparison to MCNP solutions. Satisfactory agreement was obtained between the results calculated by these two approaches. The program system has been chosen to treat the complicated shielding problem of the accelerator-based IFMIF neutron source. The successful application demonstrates that the coupling scheme implemented in the program system is a useful computational tool for the shielding analysis of complex and large nuclear facilities.

  5. Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    SciTech Connect

    Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V

    2006-12-31

    Based on digital image analysis and the inverse Monte-Carlo method, the proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (Special issue devoted to multiple radiation scattering in random media.)
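
    The inverse Monte-Carlo step can be sketched as a root search: run a forward photon-walk model and adjust the absorption coefficient until the simulated reflectance matches the measured one. The toy 1-D walk and every number below are illustrative, not the hair model; the fixed seed makes the noisy forward model deterministic so that bisection behaves.

        import numpy as np

        def reflectance(mu_a, mu_s=20.0, n=1000, seed=7):
            """Fraction of photons re-emerging from a semi-infinite 1-D medium."""
            rng = np.random.default_rng(seed)     # fixed seed: smooth in mu_a
            mu_t = mu_a + mu_s
            out = 0
            for _ in range(n):
                z, mu_z = 0.0, 1.0                # photon enters pointing inward
                for _ in range(300):              # cap the walk length
                    z += rng.exponential(1.0 / mu_t) * mu_z
                    if z < 0.0:                   # escaped back out: reflected
                        out += 1
                        break
                    if rng.random() < mu_a / mu_t:   # absorbed
                        break
                    mu_z = rng.uniform(-1.0, 1.0)    # isotropic direction cosine
            return out / n

        measured = 0.66                   # hypothetical measured reflectance
        lo, hi = 0.01, 10.0               # bracket for mu_a (1/mm)
        for _ in range(15):               # bisection: R falls as mu_a grows
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if reflectance(mid) < measured else (mid, hi)
        print("recovered mu_a ~", 0.5 * (lo + hi), "1/mm")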

  6. MED-3DMC: a new tool to generate 3D conformation ensembles of small molecules with a Monte Carlo sampling of the conformational space.

    PubMed

    Sperandio, Olivier; Souaille, Marc; Delfaud, François; Miteva, Maria A; Villoutreix, Bruno O

    2009-04-01

    Obtaining an efficient sampling of the low- to medium-energy regions of a ligand conformational space is of primary importance for gaining insight into relevant binding modes of drug candidates, for the screening of rigid molecular entities on the basis of a predefined pharmacophore, or for rigid-body docking. Here, we report the development of a new computer tool that samples the conformational space by using the Metropolis Monte Carlo algorithm combined with the MMFF94 van der Waals energy term. The performance of the program has been assessed on 86 drug-like molecules that resulted from an ADME/tox profiling applied to cocrystallized small molecules, and compared with the program Omega on the same dataset. Our program has also been assessed on the 85 molecules of the Astex diverse set. Both test sets show convincing performance of our program at sampling the conformational space.
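
    A toy analogue of the scheme described above (Metropolis moves over torsion-like angles scored by a van der Waals term): a planar bead chain with a Lennard-Jones energy stands in for a real molecule with MMFF94. All parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(8)

        def coords(angles):
            """Planar chain with unit bonds; each joint bends by its angle."""
            pts, heading = [np.zeros(2)], 0.0
            for a in angles:
                heading += a
                pts.append(pts[-1] + np.array([np.cos(heading), np.sin(heading)]))
            return np.array(pts)

        def vdw_energy(p, eps=0.2, sigma=1.0):
            """Lennard-Jones energy over non-bonded pairs (|i - j| >= 3)."""
            e = 0.0
            for i in range(len(p)):
                for j in range(i + 3, len(p)):
                    r = np.linalg.norm(p[i] - p[j])
                    e += 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
            return e

        angles = rng.uniform(-0.5, 0.5, 7)     # 7 torsion-like angles, 8 beads
        energy = vdw_energy(coords(angles))
        ensemble = []
        for step in range(20000):
            trial = angles.copy()
            trial[rng.integers(len(trial))] += rng.normal(0.0, 0.3)
            e_new = vdw_energy(coords(trial))
            if rng.random() < np.exp(min(0.0, energy - e_new)):  # Metropolis, kT = 1
                angles, energy = trial, e_new
            if step % 1000 == 0:
                ensemble.append((energy, angles.copy()))  # snapshot the ensemble
        print("lowest sampled energy:", min(e for e, _ in ensemble))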

  7. Monte Carlo optimization of sample dimensions of an 241Am-Be source-based PGNAA setup for water rejects analysis

    NASA Astrophysics Data System (ADS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.

    2007-07-01

    The present paper describes the optimization of the sample dimensions of a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ analysis of environmental water rejects. The optimal dimensions were achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process was performed for the proposed preliminary setup with measurements of the thermal neutron flux by the activation technique, using indium foils both bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples according to changes in chlorine and organic matter concentrations. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.

  8. Combining the diffusion approximation and Monte Carlo modeling in analysis of diffuse reflectance spectra from human skin

    NASA Astrophysics Data System (ADS)

    Naglič, Peter; Vidovič, Luka; Milanič, Matija; Randeberg, Lise L.; Majaron, Boris

    2014-03-01

    Light propagation in highly scattering biological tissues is often treated in the so-called diffusion approximation (DA). Although the analytical solutions derived within the DA are known to be inaccurate near tissue boundaries and absorbing layers, their use in quantitative analysis of diffuse reflectance spectra (DRS) is quite common. We analyze the artifacts in assessed tissue properties which occur in fitting of numerically simulated DRS with the DA solutions for a three-layer skin model. In addition, we introduce an original procedure which significantly improves the accuracy of such an inverse analysis of DRS. This procedure involves a single comparison run of a Monte Carlo (MC) numerical model, yet avoids the need to implement and run an inverse MC. This approach is tested also in analysis of experimental DRS from human skin.

  9. Metabolic flux distribution analysis by 13C-tracer experiments using the Markov chain-Monte Carlo method.

    PubMed

    Yang, J; Wongsa, S; Kadirkamanathan, V; Billings, S A; Wright, P C

    2005-12-01

    Metabolic flux analysis using 13C-tracer experiments is an important tool in metabolic engineering, since intracellular fluxes are non-measurable quantities in vivo. Current metabolic flux analysis approaches are fully based on stoichiometric constraints and carbon atom balances, where the over-determined system is iteratively solved by a parameter estimation approach. However, the unavoidable measurement noise in the fractional enrichment data obtained by 13C-enrichment experiments and the possible existence of unknown pathways prevent a simple parameter estimation method for intracellular flux quantification. The Markov chain Monte Carlo (MCMC) method, which obtains intracellular flux distributions through delicately constructed Markov chains, is shown to be an effective approach for deep understanding of the intracellular metabolic network. Its application is illustrated through the simulation of an example metabolic network.

  10. Assessment of bioethanol yield by S. cerevisiae grown on oil palm residues: Monte Carlo simulation and sensitivity analysis.

    PubMed

    Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah

    2015-01-01

    Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with palm oil mill effluent (POME) added as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in their properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was therefore assessed using Monte Carlo simulation (stochastic variables) to determine probability distributions arising from fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. A sensitivity analysis was also carried out to evaluate the impact of each kinetic parameter on the fermentation performance. It was found that bioethanol fermentation depends strongly on the growth of the tested yeast.
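
    A sketch of the parametric-uncertainty step, under assumed distributions and a generic Monod growth/product model (not the authors' fitted kinetics): draw kinetic parameters, integrate the model, and report the spread of the predicted yield.

        import numpy as np

        rng = np.random.default_rng(9)

        def ferment(mu_max, Ks, Yxs, Ypx, S0=100.0, X0=0.5, dt=0.05, t_end=48.0):
            """Euler integration of a simple Monod growth / product model."""
            X, S, P = X0, S0, 0.0
            for _ in range(int(t_end / dt)):
                dX = mu_max * S / (Ks + S) * X * dt
                X, S, P = X + dX, max(S - dX / Yxs, 0.0), P + Ypx * dX
            return P / (S0 - S + 1e-12)      # g product per g glucose consumed

        yields = np.array([
            ferment(mu_max=rng.normal(0.30, 0.03), Ks=rng.normal(1.5, 0.2),
                    Yxs=rng.normal(0.10, 0.01), Ypx=rng.normal(4.3, 0.3))
            for _ in range(2000)
        ])
        print("95% band of the simulated yield:", np.percentile(yields, [2.5, 97.5]))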

  11. Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics

    NASA Astrophysics Data System (ADS)

    Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou

    Ebola virus infection is a severe infectious disease with the highest case fatality rate, and it has become a global public health threat. What makes the disease the worst of all is that no specific effective treatment is available, and its dynamics are not well researched or understood. In this article a new mathematical model incorporating both vaccination and quarantine to study the dynamics of the Ebola epidemic has been developed and comprehensively analyzed. The existence as well as uniqueness of the solution to the model is verified and the basic reproduction number is calculated. Besides, stability conditions are checked, and finally simulation is done using both the Euler method and the Markov Chain Monte Carlo (MCMC) method, one of the most influential algorithms in scientific computing. Different rates of vaccination, to predict the effect of vaccination on the infected individuals over time, and different rates of quarantine are discussed. The results show that quarantine and vaccination are very effective ways to control the Ebola epidemic. From our study it was also seen that an individual who survived a first infection is unlikely to contract the Ebola virus a second time. Last but not least, real data have been fitted to the model, showing that it can be used to predict the dynamics of the Ebola epidemic.
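
    A minimal sketch of the kind of compartmental model described above (a toy SIQR model with vaccination, not the authors' exact equations; all rates are illustrative), integrated with the Euler method:

        # Toy SIQR model with vaccination, integrated with the Euler method.
        beta, gamma = 0.35, 0.10    # transmission and recovery rates (1/day)
        q, v = 0.08, 0.02           # quarantine and vaccination rates (1/day)
        S, I, Q, R = 0.99, 0.01, 0.0, 0.0
        dt = 0.1
        for _ in range(int(365 / dt)):
            new_inf = beta * S * I
            dS = -new_inf - v * S
            dI = new_inf - (gamma + q) * I
            dQ = q * I - gamma * Q
            dR = gamma * (I + Q) + v * S
            S, I, Q, R = S + dS * dt, I + dI * dt, Q + dQ * dt, R + dR * dt

        print("R0 =", round(beta / (gamma + q), 2),
              " final susceptible fraction:", round(S, 3))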

  12. Dose Modification Factor Analysis of Multi-Lumen Brachytherapy Applicator with Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Williams, Eric Alan

    Multi-lumen applicators like the Contura (SenoRx, Inc.) are used in partial breast irradiation (PBI) brachytherapy in instances where asymmetric dose distributions are desired, for example, when the applicator surface-to-skin thickness is small (<7 mm). In these instances, the air outside the patient and the lung act as a poor scattering medium, scattering less dose back into the breast tissue, which affects the dose distribution. Many commercial treatment planning systems do not correct for tissue heterogeneity, which results in inaccuracies in the planned dose distribution. This deviation has been quantified as the dose modification factor (DMF), equal to the ratio of the dose rate at 1 cm beyond the applicator surface with a homogeneous medium to the dose rate at 1 cm with a heterogeneous medium. This investigation models the Contura applicator with the Monte Carlo N-Particle code (MCNP, Los Alamos National Labs), determines a DMF through simulation, and correlates the result to previous measurements. Taking all geometrical considerations into account, an accurate model of the Contura balloon applicator was created in MCNP. This model was used to run simulations of symmetric and asymmetric plans. The dose modification factor was found to be dependent on the simulated water phantom geometry, with a cuboid geometry yielding a maximum DMF of 1.0664. The same measurements taken using a spherical water phantom gave a DMF of 1.1221. It was also seen that the difference in DMF between symmetric and asymmetric plans using the Contura applicator is minimal.

  13. Monte Carlo analysis on probe performance for endoscopic diffuse optical spectroscopy of tubular organ

    NASA Astrophysics Data System (ADS)

    Zhang, Yunyao; Zhu, Jingping; Cui, Weiwen; Nie, Wei; Li, Jie; Xu, Zhenghong

    2015-03-01

    We investigated the performance of endoscopic diffuse optical spectroscopy probes with circular or linear fiber arrangements for tubular organ cancer detection. Probe performance was measured by penetration depth. A Monte Carlo model was employed to simulate light transport in a hollow cylinder that both emits and receives light from the inner boundary of the sample. The influence of fiber configurations and tissue optical properties on penetration depth was simulated. The results show that, under the same conditions, probes with a circular fiber arrangement penetrate deeper than probes with a linear fiber arrangement, and the difference between the two probes' penetration depths decreases with an increase in the source-detector distance (SD) and the radius of the probe. Other results show that the penetration depths and their differences both decrease with an increase in the absorption coefficient and the reduced scattering coefficient, but remain constant with changes in the anisotropy factor. Moreover, the penetration depth was more affected by the absorption coefficient than by the reduced scattering coefficient. It turns out that in the NIR band, probes with linear fiber arrangements are more appropriate for diagnosing superficial cancers, whereas probes with circular fiber arrangements should be chosen for diagnosing adenocarcinoma; in the UV-VIS band, the two probe configurations perform nearly the same. These results are useful in guiding endoscopic diffuse optical spectroscopy-based diagnosis for esophageal, cervical, colorectal and other cancers.

  14. Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods

    DOE PAGES

    Hehr, Brian Douglas

    2014-11-25

    The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
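
    The algorithmic core of a KMC code like the one described is the residence-time (Gillespie) loop sketched below; the two "defect reactions" and their rates are purely illustrative, not the paper's silicon defect set.

        import numpy as np

        rng = np.random.default_rng(10)

        def kmc(rates_of, apply_event, state, t_end):
            """Residence-time KMC loop: draw waiting time, pick event, update."""
            t = 0.0
            while t < t_end:
                rates = rates_of(state)
                total = rates.sum()
                if total == 0.0:
                    break
                t += rng.exponential(1.0 / total)            # waiting time
                event = rng.choice(len(rates), p=rates / total)
                state = apply_event(state, event)
            return state, t

        def rates_of(n_defects):
            # event 0: pairwise defect annihilation; event 1: defect generation
            return np.array([1e-3 * n_defects * (n_defects - 1), 0.5])

        def apply_event(n_defects, event):
            return n_defects - 2 if event == 0 else n_defects + 1

        print(kmc(rates_of, apply_event, state=100, t_end=50.0))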

  15. Analysis of large solid propellant rocket engine exhaust plumes using the direct simulation Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Hueser, J. E.; Brock, F. J.; Melfi, L. T., Jr.; Bird, G. A.

    1984-01-01

    A new solution procedure has been developed to analyze the flowfield properties in the vicinity of the Inertial Upper Stage/Spacecraft during the first-stage (SRM1) burn. Continuum methods are used to compute the nozzle flow and the exhaust plume flowfield as far as the boundary where the breakdown of translational equilibrium leaves these methods invalid. The Direct Simulation Monte Carlo (DSMC) method is applied everywhere beyond this breakdown boundary. The flowfield distributions of density, velocity, temperature, relative abundance, surface flux density, and pressure are discussed for each species for two sets of boundary conditions: vacuum and freestream. The interaction of the exhaust plume and the freestream with the spacecraft, and the direct interaction of the two streams, are discussed. The results show that the low-density, high-velocity, counterflowing freestream substantially modifies the flowfield properties and the flux density incident on the spacecraft. A freestream bow shock is observed in the data, located forward of the high-density region of the exhaust plume, into which the freestream gas does not penetrate. The total flux density incident on the spacecraft, integrated over the SRM1 burn interval, is estimated to be of the order of 10²² per m² (about 1000 atomic layers).

  16. Monte Carlo analysis for finite-temperature magnetism of Nd2Fe14B permanent magnet

    NASA Astrophysics Data System (ADS)

    Toga, Yuta; Matsumoto, Munehisa; Miyashita, Seiji; Akai, Hisazumi; Doi, Shotaro; Miyake, Takashi; Sakuma, Akimasa

    2016-11-01

    We investigate the effects of magnetic inhomogeneities and thermal fluctuations on the magnetic properties of the rare-earth intermetallic compound Nd2Fe14B. The constrained Monte Carlo method is applied to a Nd2Fe14B bulk system to realize the experimentally observed spin reorientation and magnetic anisotropy constants K_m^A (m = 1, 2, 4) at finite temperatures. Subsequently, it is found that the temperature dependence of K_1^A deviates from the Callen-Callen law, K_1^A(T) ∝ M(T)^3, even above room temperature, T_R ≈ 300 K, when the Fe (Nd) anisotropy terms are removed to leave only the Nd (Fe) anisotropy terms. This is because the exchange couplings between Nd moments and Fe spins are much smaller than those between Fe spins. It is also found that the exponent n in the response of the barrier height F_B = F_B^0 (1 - H_ext/H_0)^n to the external magnetic field H_ext is less than 2 in the low-temperature region below T_R, whereas n approaches 2 when T > T_R, indicating the presence of Stoner-Wohlfarth-type magnetization rotation. This reflects the fact that the magnetic anisotropy is mainly governed by the K_1^A term in the T > T_R region.

  17. IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng

    2014-11-01

    The IR radiation characteristics of an aeroengine are an important basis for IR stealth design and anti-stealth detection of aircraft. With the development of IR imaging sensor technology, the importance of aircraft IR stealth increases. An effort is presented to explore target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM), combined with commercial CFD software. The flow and IR radiation characteristics of an aeroengine exhaust system are investigated by developing a full-size geometry model based on the actual parameters, using an integrated flow-IR structured mesh, taking the engine performance parameters as the inlet boundary conditions of the mixer section, and constructing a numerical simulation model of the IR radiation characteristics of the engine exhaust system based on RMCM. With the above models, the IR radiation characteristics of the aeroengine exhaust system are given, with a focus on IR spectral radiance imaging in the typical detection band at azimuth 20°. The results show that: (1) at small azimuth angles, the IR radiation comes mainly from the center cone among all the hot parts; near azimuth 15°, the mixer makes the largest radiation contribution, while the center cone, turbine and flame stabilizer contribute comparable amounts; (2) the main radiation components and their spatial distributions differ across the spectrum: CO2 absorbs and emits strongly at 4.18, 4.33 and 4.45 microns, while H2O absorbs and emits strongly at 3.0 and 5.0 microns.

  18. Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut

    2017-01-01

    We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N while a purely classical over-the-barrier activation rate scales exponentially with N .

  19. Comparative Analysis of Nuclear Cross Sections in Monte Carlo Methods for Medical Physics Applications

    SciTech Connect

    Myers, Chris; Kirk, Bernadette Lugue; Leal, Luiz C

    2007-01-01

    The data used in two Monte Carlo (MC) codes, EGSnrc and MCNPX, were compared, and a majority of the data used in MCNPX was imported into EGSnrc. The effects of merging the data of the two codes were then examined. MCNPX was run using the ITS electron step algorithm and the default data libraries mcplib04 and el03. Two runs were made with EGSnrc: the first simulation used the default PEGS cross-section library, while the second utilized the data imported from MCNPX. All energy threshold values and physics options were made identical. A simple case was created in both EGSnrc and MCNPX that calculates the radial depth dose from an isotropically radiating disc in water for various incident, monoenergetic photon and electron energies. Initial results show that much less central processing unit (CPU) time is required by the EGSnrc code for simulations involving large numbers of particles, primarily electrons, when compared to MCNPX. The detailed particle history files - ptrac and iwatch - are investigated to compare the number and types of events being simulated, in order to determine the reasons for the run-time differences.

  20. Size and composition of membrane protein clusters predicted by Monte Carlo analysis.

    PubMed

    Goldman, Jacki; Andrews, Steven; Bray, Dennis

    2004-10-01

    Biological membranes contain a high density of protein molecules, many of which associate into two-dimensional microdomains with important physiological functions. We have used Monte Carlo simulations to examine the self-association of idealized protein species in two dimensions. The proteins have defined bond strengths and bond angles, allowing us to estimate the size and composition of the aggregates they produce at equilibrium. With a single species of protein, the extent of cluster formation and the sizes of individual clusters both increase in non-linear fashion, showing a "phase change" with protein concentration and bond strength. With multiple co-aggregating proteins, we find that the extent of cluster formation also depends on the relative proportions of participating species. For some lattice geometries, a stoichiometric excess of particular species depresses cluster formation and moreover distorts the composition of clusters that do form. Our results suggest that the self-assembly of microdomains might require a critical level of subunits and that for optimal co-aggregation, proteins should be present in the membrane in the correct stoichiometric ratios.

  1. Monte Carlo analysis of T1 pyrazine collisional vibrational relaxation: Evidence for supercollisions

    NASA Astrophysics Data System (ADS)

    Wu, Fei; Weisman, R. Bruce

    2000-06-01

    The collisional loss of vibrational energy from polyatomic molecules in triplet electronic states has been studied in new detail through a variant of the competitive radiationless decay (CRD) method. Experimental transient absorption kinetics for T1 pyrazine vapor in the presence of a helium relaxer reveal the competition between unimolecular radiationless decay and collisional vibrational relaxation. These data have been simulated with Monte Carlo stochastic calculations, equivalent to full master equation solutions, that model the distribution of donor vibrational energies during relaxation. The simulations included energy-dependent processes of T1→S0 radiationless decay, Tn←T1 optical absorption, and collisional energy loss. The simulation results confirm earlier findings of energy loss tendencies that increase strongly for pyrazine vibrational energies above ~2000 cm⁻¹. It is also found that the experimental data are not accurately simulated over a range of relaxer pressures if a simple exponential step-size distribution function is used to model collisional energy changes. Improved simulations are obtained by including an additional, low-probability channel representing large energy changes. This second channel would represent "supercollisions," which have not previously been recognized in the vibrational relaxation of triplet-state polyatomics.
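
    A sketch of the two-channel step-size model suggested by this analysis: ordinary collisions draw small energy losses from an exponential distribution, while a rare "supercollision" channel removes much more energy per event. The parameter values below are illustrative, not fitted to the pyrazine data.

        import numpy as np

        rng = np.random.default_rng(11)

        alpha_small = 50.0    # mean ordinary energy loss per collision (cm^-1)
        alpha_super = 1500.0  # mean supercollision energy loss (cm^-1)
        p_super = 0.01        # probability that a collision is a supercollision

        E = np.full(20000, 4000.0)             # ensemble of donor energies
        for _ in range(50):                    # 50 collisions per trajectory
            is_super = rng.random(E.size) < p_super
            dE = np.where(is_super,
                          rng.exponential(alpha_super, E.size),
                          rng.exponential(alpha_small, E.size))
            E = np.maximum(E - dE, 0.0)

        # Expected loss per collision: 0.99*50 + 0.01*1500 = 64.5 cm^-1, so the
        # rare channel carries a disproportionate share of the relaxation.
        print("mean energy after 50 collisions:", E.mean(), "cm^-1")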

  2. Mapping-Linked Quantitative Trait Loci Using Bayesian Analysis and Markov Chain Monte Carlo Algorithms

    PubMed Central

    Uimari, P.; Hoeschele, I.

    1997-01-01

    A Bayesian method for mapping linked quantitative trait loci (QTL) using multiple linked genetic markers is presented. Parameter estimation and hypothesis testing were implemented via Markov chain Monte Carlo (MCMC) algorithms. Parameters included allele frequencies and substitution effects for two biallelic QTL, map positions of the QTL and markers, allele frequencies of the markers, and polygenic and residual variances. Missing data were polygenic effects and multi-locus marker-QTL genotypes. Three different MCMC schemes for testing the presence of a single QTL or two linked QTL on the chromosome were compared. The first approach includes a model indicator variable representing two unlinked QTL affecting the trait, one linked and one unlinked QTL, or both QTL linked with the markers. The second approach incorporates an indicator variable for each QTL into the model for phenotype, allowing or not allowing for a substitution effect of a QTL on phenotype, and the third approach is based on model determination by reversible jump MCMC. The methods were evaluated empirically by analyzing simulated granddaughter designs. All methods correctly identified a second, linked QTL and did not reject the one-QTL model when there was only a single QTL and no additional linked or unlinked QTL. PMID:9178021

  3. Applications of Monte Carlo methods for the analysis of MHTGR case of the VHTRC benchmark

    SciTech Connect

    Difilippo, F.C.

    1994-03-01

    Monte Carlo methods, as implemented in the MCNP code, have been used to analyze the neutronics characteristics of benchmarks related to Modular High Temperature Gas-Cooled Reactors. The benchmarks are idealized versions of the Japanese (VHTRC) and Swiss (PROTEUS) facilities and an actual configuration of the PROTEUS Configuration 1 experiment. The purpose of the unit cell benchmarks is to compare multiplication constants, critical bucklings, migration lengths, reaction rates and spectral indices. The purpose of the full reactor benchmarks is to compare multiplication constants, reaction rates, spectral indices, neutron balances, reaction rate profiles, temperature coefficients of reactivity and effective delayed neutron fractions. All of these parameters can be calculated by MCNP, which can provide a very detailed model of the geometry of the configurations, from fuel particles to entire fuel assemblies, while using a continuous-energy model. These characteristics make MCNP a very useful tool for analyzing these MHTGR benchmarks. The author used the latest version of MCNP, 4.x (dated 01/12/93), with an ENDF/B-V cross-section library. This library does not yet contain temperature-dependent resonance materials, so all calculations correspond to room temperature, T = 300 K. Two separate reports were made -- one for the VHTRC, the other for the PROTEUS benchmark.

  4. Beam steering uncertainty analysis for Risley prisms based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen

    2017-01-01

    The Risley-prism system is applied in imaging LADAR to achieve precise directing of laser beams. The image quality of LADAR is deeply affected by the beam-steering quality of the Risley prisms. The ray-tracing method was used to predict the pointing error. The beam-steering uncertainty of Risley prisms was investigated through Monte Carlo simulation under the effects of rotation-axis jitter and prism rotation error. Case examples are given to elucidate the probability distribution of the pointing error. Furthermore, the effect of the scan pattern on the beam-steering uncertainty was also studied. It is found that the demand on the bearing rotational accuracy of the second prism is much more stringent than that on the first prism. Under the effect of rotation-axis jitter, the pointing uncertainty in the field of regard is related to the altitude angle of the emerging beam, but has no relationship with the azimuth angle. The beam-steering uncertainty is also affected by the original phase if the scan pattern is a circle. The proposed method can be used to estimate the beam-steering uncertainty of Risley prisms, and the conclusions will be helpful in the design and manufacture of this system.

  5. Time series analysis and Monte Carlo methods for eigenvalue separation in neutron multiplication problems

    SciTech Connect

    Nease, Brian R. Ueki, Taro

    2009-12-10

    A time series approach has been applied to the nuclear fission source distribution generated by Monte Carlo (MC) particle transport in order to calculate the non-fundamental mode eigenvalues of the system. The novel aspect is the combination of the general technical principle of projection pursuit for multivariate data with the neutron multiplication eigenvalue problem of the nuclear engineering discipline. Proof is thoroughly provided that the stationary MC process is linear to first-order approximation and that it transforms into one-dimensional autoregressive processes of order one (AR(1)) via the automated choice of projection vectors. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern MC codes for nuclear criticality calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. This time series approach was tested for a variety of problems, including multi-dimensional ones. Numerical results show that the time series approach has strong potential for three-dimensional whole-reactor-core problems. The eigenvalue ratio can be updated in an on-the-fly manner without storing the nuclear fission source distributions from all previous iteration cycles for the mean subtraction. Lastly, the effects of degenerate eigenvalues are investigated and solutions are provided.
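
    A toy demonstration of the key identity, with assumed eigenvalues and synthetic noise: an AR(1) sequence with coefficient phi = k1/k0 is generated, and the ratio is recovered from its lag-1 autocorrelation.

        import numpy as np

        rng = np.random.default_rng(12)

        k0, k1 = 1.000, 0.850      # assumed eigenvalues, so phi = k1/k0 = 0.85
        phi = k1 / k0

        n = 20000
        x = np.zeros(n)            # stand-in for the projected source sequence
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()

        xc = x - x.mean()
        phi_hat = (xc[1:] @ xc[:-1]) / (xc @ xc)   # lag-1 autocorrelation
        print("estimated k1/k0:", phi_hat)   # ~0.85; k0 is known, so k1 follows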

  6. White light Fourier spectrometer: Monte Carlo noise analysis and test measurements

    NASA Astrophysics Data System (ADS)

    Stoykova, Elena; Ivanov, Branimir

    2007-06-01

    This work reports on an investigation of the sensitivity of a Fourier-transform spectrometer to noise sources, based on Monte-Carlo simulation of the measurement of a single spectrum. The flexibility of this approach makes it easy to imitate various noise contaminations of the interferograms and to obtain statistically reliable results for widely varying noise characteristics. More specifically, we evaluate the accuracy of restoration of a single absorption peak for the cases of additive detection noise and of noise that adds a fluctuating component to the carrier frequency in the source and the measurement channel of the interferometer. Comparison of spectra of an etalon He-Ne source, calculated from more than 200 measured interferograms, with the true spectrum supports the hypothesis that the latter fluctuations have the characteristics of a coloured noise. Taking into account that the signal-to-noise ratio in Fourier spectroscopy is constantly increasing, we focus on the limitations on the achievable accuracy of spectrum restoration set by this type of noise, which modifies the shape of the recorded interferograms. We also present results of test measurements of the spectrum of a laser diode, chosen as a test source, using a three-channel Fourier spectroscopic system based on a white-sourced Michelson interferometer realized with the Twyman-Green scheme. The results show that fluctuations in the current displacement of the movable mirror of the interferometer should remain below 20 nm to restore the absorption spectrum with acceptable accuracy, especially at higher fluctuation bandwidths.

  7. Analysis of probabilistic short run marginal cost using Monte Carlo method

    SciTech Connect

    Gutierrez-Alcaraz, G.; Navarrete, N.; Tovar-Hernandez, J.H.; Fuerte-Esquivel, C.R.; Mota-Palomino, R.

    1999-11-01

    The structure of the Electricity Supply Industry is undergoing dramatic changes to provide new service options. The main aim of this restructuring is to allow generating units the freedom to sell electricity to anybody they wish at a price determined by market forces. Several methodologies have been proposed in order to quantify the different costs associated with the new services offered by electrical utilities operating in a deregulated market. The new wave of pricing is heavily influenced by economic principles designed to price products to elastic market segments on the basis of marginal costs. Hence, spot pricing provides the economic structure for many of the new services. At the same time, the pricing is influenced by uncertainties associated with the electric system state variables which define its operating point. In this paper, nodal probabilistic short-run marginal costs are calculated, considering the load, the production cost and the availability of generators as random variables. The effect of the electrical network is evaluated using linearized models. A thermal economic dispatch is used to simulate each operational condition generated by the Monte Carlo method on a small fictitious power system, in order to assess the effect of the random variables on energy trading. This is carried out first by introducing each random variable one at a time, and finally by considering the random interaction of all of them.

  8. Monte Carlo analysis of thermal transpiration effects in capacitance diaphragm gauges with helicoidal baffle system

    NASA Astrophysics Data System (ADS)

    Vargas, M.; Wüest, M.; Stefanov, S.

    2012-05-01

    The Capacitance Diaphragm Gauge (CDG) is one of the most widely used vacuum gauges in the low and middle vacuum ranges. This device consists basically of a very thin ceramic or metal diaphragm which forms one of the electrodes of a capacitor. The pressure is determined by measuring the variation in the capacitance due to the deflection of the diaphragm caused by the pressure difference established across the membrane. In order to minimize zero drift, some CDGs are operated with the sensor kept at a higher temperature. This difference in temperature between the sensor and the vacuum chamber makes the behaviour of the gauge non-linear due to thermal transpiration effects. The effect becomes more significant when moving from the transitional flow to the free molecular regime. Besides, CDGs may incorporate different baffle systems to avoid condensation on the membrane or its contamination. In this work, the thermal transpiration effect on the behaviour of a rarefied gas and on the measurements in a CDG with a helicoidal baffle system is investigated by using the Direct Simulation Monte Carlo (DSMC) method. The study covers the behaviour of the system over the whole range of rarefaction, from the continuum up to the free molecular limit, and the results are compared with empirical data. Moreover, the influence of the boundary conditions on the thermal transpiration effects is investigated by using Maxwell boundary conditions.

  9. Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods

    SciTech Connect

    Hehr, Brian Douglas

    2014-11-25

    The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.

  10. First passage time Markov chain analysis of rare events for kinetic Monte Carlo: double kink nucleation during dislocation glide

    NASA Astrophysics Data System (ADS)

    Deo, C. S.; Srolovitz, D. J.

    2002-09-01

    We describe a first passage time Markov chain analysis of rare events in kinetic Monte Carlo (kMC) simulations and demonstrate how this analysis may be used to enhance kMC simulations of dislocation glide. Dislocation glide is described by the kink mechanism, which involves double kink nucleation, kink migration and kink-kink annihilation. Double kinks that nucleate on straight dislocations are unstable at small kink separations and tend to recombine immediately following nucleation. A very small fraction (<0.001) of nucleating double kinks survive to grow to a stable kink separation. The present approach replaces all of the events that lead up to the formation of a stable kink with a simple numerical calculation of the time required for stable kink formation. In this paper, we treat the double kink nucleation process as a temporally homogeneous birth-death Markov process and present a first passage time analysis of the Markov process in order to calculate the nucleation rate of a double kink with a stable kink separation. We discuss two methods to calculate the first passage time; one computes the distribution and the average of the first passage time, while the other uses a recursive relation to calculate the average first passage time. The average first passage times calculated by both approaches are shown to be in excellent agreement with direct Monte Carlo simulations for four idealized cases of double kink nucleation. Finally, we apply this approach to double kink nucleation on a screw dislocation in molybdenum and obtain the rates for formation of stable double kinks as a function of applied stress and temperature. Equivalent kMC simulations are too inefficient to be performed using commonly available computational resources.
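
    The first passage time computation itself is a short recursion. The sketch below uses the standard birth-death result tau_i = 1/b_i + (d_i/b_i)·tau_{i-1} for the expected time to move from kink separation i to i+1, with illustrative rates; summing the tau_i gives the mean time to reach a stable separation, whose inverse is the nucleation rate.

        import numpy as np

        N = 20                    # "stable" kink separation (target state)
        b = np.ones(N)            # forward (kink-widening) rates
        d = 5.0 * np.ones(N)      # backward (recombination) rates
        d[0] = 0.0                # nothing below zero separation

        # tau[i] = expected time to first reach i+1 starting from i:
        #   tau[i] = 1/b[i] + (d[i]/b[i]) * tau[i-1]
        tau = np.zeros(N)
        tau[0] = 1.0 / b[0]
        for i in range(1, N):
            tau[i] = 1.0 / b[i] + (d[i] / b[i]) * tau[i - 1]

        mfpt = tau.sum()          # expected time from separation 0 to N
        print("stable double-kink nucleation rate ~", 1.0 / mfpt)
        # With recombination 5x faster than widening, almost every attempt
        # fails, which is exactly why direct kMC of these events is wasteful.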

  11. Approach of technical decision-making by element flow analysis and Monte-Carlo simulation of municipal solid waste stream.

    PubMed

    Tian, Bao-Guo; Si, Ji-Tao; Zhao, Yan; Wang, Hong-Tao; Hao, Ji-Ming

    2007-01-01

    This paper deals with the procedure and methodology which can be used to select the optimal treatment and disposal technology for municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of a study of solid waste management status and development trends in China and abroad. Focusing on various treatment and disposal technologies and processes for MSW, this study established a Monte-Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream analysis (for elements such as C, H, O, N and S) in combination with economic stream analysis of MSW was developed. By following the streams of different treatment processes consisting of various techniques from generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the elemental constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte-Carlo method was then applied for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfilling with power generation from landfill gas was economically the optimal technology at the present stage, under the condition that more than 58% of the C, H, O, N and S goes to landfill. Whether or not to generate electricity was the most sensitive factor. If landfilling costs increase, separation treatment of MSW was recommended: screening first, followed by partial incineration and partial composting, with landfilling of the residue. The possibility of selecting incineration as the optimal technology was affected by city scale. For big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle-sized and small cities the effectiveness of incinerating waste decreases.

  12. Propagating Mixed Uncertainties in Cyber Attacker Payoffs: Exploration of Two-Phase Monte Carlo Sampling and Probability Bounds Analysis

    SciTech Connect

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh

    2016-09-16

    Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, that model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
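
    A minimal sketch of two-phase Monte Carlo sampling, with an illustrative payoff model rather than the paper's cyber-system model: the outer loop draws epistemic (interval-valued) parameters, the inner loop draws aleatory variability, and the spread of the inner-loop statistic across outer draws yields a band rather than a point estimate.

        import numpy as np

        rng = np.random.default_rng(13)

        n_outer, n_inner = 500, 2000
        q95 = []
        for _ in range(n_outer):
            mu = rng.uniform(5.0, 9.0)      # epistemic: interval-valued mean
            sigma = rng.uniform(0.5, 2.0)   # epistemic: uncertain spread
            payoff = rng.normal(mu, sigma, n_inner)   # aleatory variability
            q95.append(np.quantile(payoff, 0.95))

        # The spread of the inner-loop statistic across outer draws gives a
        # band (in the spirit of a probability box), not a point estimate.
        print("95th-percentile payoff band:", min(q95), "to", max(q95))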

  13. A Monte Carlo study comparing PIV, ULS and DWLS in the estimation of dichotomous confirmatory factor analysis.

    PubMed

    Nestler, Steffen

    2013-02-01

    We conducted a Monte Carlo study to investigate the performance of the polychoric instrumental variable estimator (PIV) in comparison to unweighted least squares (ULS) and diagonally weighted least squares (DWLS) in the estimation of a confirmatory factor analysis model with dichotomous indicators. The simulation involved 144 conditions (1,000 replications per condition) that were defined by a combination of (a) two types of latent factor models, (b) four sample sizes (100, 250, 500, 1,000), (c) three factor loadings (low, moderate, strong), (d) three levels of non-normality (normal, moderately, and extremely non-normal), and (e) whether the factor model was correctly specified or misspecified. The results showed that when the model was correctly specified, PIV produced estimates that were as accurate as ULS and DWLS. Furthermore, the simulation showed that PIV was more robust to structural misspecifications than ULS and DWLS.

  14. Development of synthetic velocity - depth damage curves using a Weighted Monte Carlo method and Logistic Regression analysis

    NASA Astrophysics Data System (ADS)

    Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.

    2014-05-01

    Damage curves are the most significant component of flood loss estimation models. Their development is quite complex. Two types of damage curves exist: historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the respondents' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution

  15. PDF Weaving - Linking Inventory Data and Monte Carlo Uncertainty Analysis in the Study of how Disturbance Affects Forest Carbon Storage

    NASA Astrophysics Data System (ADS)

    Healey, S. P.; Patterson, P.; Garrard, C.

    2014-12-01

    Altered disturbance regimes are likely a primary mechanism by which a changing climate will affect storage of carbon in forested ecosystems. Accordingly, the National Forest System (NFS) has been mandated to assess the role of disturbance (harvests, fires, insects, etc.) on carbon storage in each of its planning units. We have developed a process which combines 1990-era maps of forest structure and composition with high-quality maps of subsequent disturbance type and magnitude to track the impact of disturbance on carbon storage. This process, called the Forest Carbon Management Framework (ForCaMF), uses the maps to apply empirically calibrated carbon dynamics built into a widely used management tool, the Forest Vegetation Simulator (FVS). While ForCaMF offers locally specific insights into the effect of historical or hypothetical disturbance trends on carbon storage, its dependence upon the interaction of several maps and a carbon model poses a complex challenge in terms of tracking uncertainty. Monte Carlo analysis is an attractive option for tracking the combined effects of error in several constituent inputs as they impact overall uncertainty. Monte Carlo methods iteratively simulate alternative values for each input and quantify how much outputs vary as a result. Variation of each input is controlled by a Probability Density Function (PDF). We introduce a technique called "PDF Weaving," which constructs PDFs that ensure that simulated uncertainty precisely aligns with uncertainty estimates that can be derived from inventory data. This hard link with inventory data (derived in this case from FIA - the US Forest Service Forest Inventory and Analysis program) both provides empirical calibration and establishes consistency with other types of assessments (e.g., habitat and water) for which NFS depends upon FIA data. Results from the NFS Northern Region will be used to illustrate PDF weaving and insights gained from ForCaMF about the role of disturbance in carbon
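
    The essence of PDF weaving is tuning the widths of the input PDFs until the Monte Carlo output spread reproduces an uncertainty target derived from inventory data. The sketch below shows such a calibration loop with a stand-in linear carbon model and made-up numbers; ForCaMF's actual inputs and FVS-based dynamics are far richer.

      import numpy as np

      rng = np.random.default_rng(7)

      def carbon_model(growth, mortality, disturbance):
          # stand-in for the FVS-based carbon dynamics (illustrative only)
          return 100.0 * growth - 40.0 * mortality - 25.0 * disturbance

      def simulated_sd(input_sd, n=5000):
          g = rng.normal(1.0, input_sd[0], n)
          m = rng.normal(1.0, input_sd[1], n)
          d = rng.normal(1.0, input_sd[2], n)
          return np.std(carbon_model(g, m, d))

      target_sd = 12.0        # uncertainty derived from FIA inventory data
      scale = 1.0
      for _ in range(50):     # crude fixed-point tuning of the PDF widths
          sd = simulated_sd(np.array([0.05, 0.08, 0.10]) * scale)
          scale *= target_sd / sd
      print("calibrated width scale:", scale)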

  16. A Monte Carlo model system for core analysis and epithermal neutron beam design at the Washington State University Radiation Center

    SciTech Connect

    Burns, T.D. Jr.

    1996-05-01

    The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment, and is run as a criticality problem yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of a given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.

  17. Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.

    SciTech Connect

    PADOVANI, ENRICO

    2012-04-15

    Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated-particle emissions and the subsequent interactions as close as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however, subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0, which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.

  19. Monte Carlo sensitivity analysis of EUV mask reflectivity and its impact on OPC accuracy

    NASA Astrophysics Data System (ADS)

    Chen, Yulu; Wood, Obert; Rankin, Jed; Gullikson, Eric; Meyer-Ilse, Julia; Sun, Lei; Qi, Zhengqing John; Goodwin, Francis; Kye, Jongwook

    2017-03-01

    Unlike optical masks, which are transmissive optical elements, use of extreme ultraviolet (EUV) radiation requires a reflective mask structure - a multi-layer coating consisting of alternating layers of high-Z (wave impedance) and low-Z materials that provide enhanced reflectivity over a narrow wavelength band peaked at the Bragg wavelength [1]. Absorber side wall angle, corner rounding [2], surface roughness [3], and defects [4] affect mask performance, but even seemingly simple parameters like bulk reflectivity on mirror and absorber surfaces can have a profound influence on imaging. For instance, using inaccurate reflectivity values at small and large incident angles would diminish the benefits of source mask co-optimization (SMO) and result in larger than expected pattern shifts. The goal of our work is to calculate the variation in mask reflectivity due to various sources of inaccuracies using Monte Carlo simulations. Such calculation is necessary as small changes in the thickness and optical properties of the high-Z and low-Z materials can cause substantial variations in reflectivity. This is further complicated by undesirable intermixing between the two materials used to create the reflector [5]. One of the key contributors to mask reflectivity fluctuation is identified to be the intermixing layer thickness. We also investigate the impacts on OPC when the wrong mask information is provided, and evaluate the deterioration of the overlapping process window. For a hypothetical N7 via layer, the lack of accurate mask information costs 25% of the depth of focus at 5% exposure latitude. Our work would allow the determination of major contributors to mask reflectivity variation, drive experimental efforts of measuring such contributors, provide strategies to optimize mask reflectivity, and quantify the OPC errors due to imperfect mask modeling.
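
    A Monte Carlo sensitivity analysis of this kind can be sketched as follows: perturb the stack parameters according to assumed error distributions, evaluate the reflectivity for each draw, and rank contributors by their correlation with the output. The response function and error magnitudes below are toy placeholders; a real study would evaluate the multilayer with a transfer-matrix or rigorous electromagnetic solver.

      import numpy as np

      rng = np.random.default_rng(3)
      N = 10_000

      # assumed perturbations of a nominal Mo/Si stack (hypothetical magnitudes, nm)
      d_si  = rng.normal(0.0, 0.05, N)   # Si layer thickness error
      d_mo  = rng.normal(0.0, 0.05, N)   # Mo layer thickness error
      d_mix = rng.normal(0.0, 0.10, N)   # intermixing-layer thickness error

      def toy_reflectivity(d_si, d_mo, d_mix):
          # stand-in linear response surface around the nominal reflectivity
          return 0.66 - 0.4 * d_si - 0.5 * d_mo - 0.15 * d_mix

      R = toy_reflectivity(d_si, d_mo, d_mix)
      print("reflectivity spread (3 sigma):", 3 * R.std())
      # rank contributors by correlation with the reflectivity output
      for name, x in [("Si", d_si), ("Mo", d_mo), ("intermix", d_mix)]:
          print(name, "correlation with R:", np.corrcoef(x, R)[0, 1])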

  20. Monte Carlo analysis of single fiber reflectance spectroscopy: photon path length and sampling depth.

    PubMed

    Kanick, S C; Robinson, D J; Sterenborg, H J C M; Amelink, A

    2009-11-21

    Single fiber reflectance spectroscopy is a method to noninvasively quantitate tissue absorption and scattering properties. This study utilizes a Monte Carlo (MC) model to investigate the effect that optical properties have on the propagation of photons that are collected during the single fiber reflectance measurement. MC model estimates of the single fiber photon path length (L(SF)) show excellent agreement with experimental measurements and predictions of a mathematical model over a wide range of optical properties and fiber diameters. Simulation results show that L(SF) is unaffected by changes in anisotropy (g ∈ {0.8, 0.9, 0.95}), but is sensitive to changes in phase function (Henyey-Greenstein versus modified Henyey-Greenstein). A 20% decrease in L(SF) was observed for the modified Henyey-Greenstein compared with the Henyey-Greenstein phase function; an effect that is independent of optical properties and fiber diameter and is approximated with a simple linear offset. The MC model also returns depth-resolved absorption profiles that are used to estimate the mean sampling depth (Z(SF)) of the single fiber reflectance measurement. Simulated data are used to define a novel mathematical expression for Z(SF) that is expressed in terms of optical properties, fiber diameter and L(SF). The model of sampling depth indicates that the single fiber reflectance measurement is dominated by shallow scattering events, even for large fibers; a result that suggests that the utility of single fiber reflectance measurements of tissue in vivo will be in the quantification of the optical properties of superficial tissues.
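
    The core of such a simulation is a photon random walk whose total path length is tallied when the photon re-emerges from the tissue surface. The sketch below uses isotropic scattering and analog absorption in a semi-infinite medium, ignoring the fiber collection geometry and the Henyey-Greenstein phase functions of the actual model; the optical properties are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def photon_escape_path(mu_s, mu_a, max_steps=10_000):
          """Path length (mm) of one photon launched into a semi-infinite medium
          at z > 0, tallied when it escapes back through the surface z = 0."""
          mu_t = mu_s + mu_a
          pos = np.zeros(3)
          direction = np.array([0.0, 0.0, 1.0])       # launched into the tissue
          path = 0.0
          for _ in range(max_steps):
              step = -np.log(rng.random()) / mu_t      # free path ~ Exp(mu_t)
              pos = pos + step * direction
              path += step
              if pos[2] < 0.0:
                  return path                          # escaped through the surface
              if rng.random() < mu_a / mu_t:
                  return None                          # absorbed (analog capture)
              cos_t = 2.0 * rng.random() - 1.0         # isotropic scattering
              phi = 2.0 * np.pi * rng.random()
              sin_t = np.sqrt(1.0 - cos_t ** 2)
              direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
          return None

      paths = [photon_escape_path(mu_s=10.0, mu_a=0.1) for _ in range(20_000)]
      paths = [p for p in paths if p is not None]
      print("mean escape path length (mm):", np.mean(paths))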

  1. Fundamentals of Monte Carlo

    SciTech Connect

    Wollaber, Allan Benton

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
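
    The π example from the outline is worth making concrete, since it exercises both the Law of Large Numbers (the estimate converges) and the Central Limit Theorem (the error shrinks as 1/√n). A minimal version:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 1_000_000
      x, y = rng.random(n), rng.random(n)
      inside = (x ** 2 + y ** 2 <= 1.0)            # hits inside the quarter circle
      pi_hat = 4.0 * inside.mean()                 # area ratio times 4
      se = 4.0 * inside.std(ddof=1) / np.sqrt(n)   # CLT-based standard error
      print(f"pi ~ {pi_hat:.4f} +/- {se:.4f}")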

  2. Monte Carlo eikonal scattering

    NASA Astrophysics Data System (ADS)

    Gibbs, W. R.; Dedonder, J. P.

    2012-08-01

    Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation, and with realistic densities for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles which vary with the nuclear pairs and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm⁻¹.

  3. Monte Carlo simulation and analysis of proton energy-deposition patterns in the Bragg peak

    NASA Astrophysics Data System (ADS)

    González-Muñoz, Gloria; Tilly, Nina; Fernández-Varea, José M.; Ahnesjö, Anders

    2008-06-01

    The spatial pattern of energy depositions is crucial for understanding the mechanisms that modify the relative biological effectiveness of different radiation qualities. In this paper, we present data on energy-deposition properties of mono-energetic protons (1-20 MeV) and their secondary electrons in liquid water. Proton-impact ionization was described by means of the Hansen-Kocbach-Stolterfoht doubly differential cross section (DDCS), thus modelling both the initial energy and angle of the emitted electron. Excitation by proton impact was included to account for the contribution of this interaction channel to the electronic stopping power of the projectile. Proton transport was implemented assuming track-segment conditions, whereas electrons were followed down to 50 eV by the Monte Carlo code PENELOPE. Electron intra-track energy-deposition properties, such as slowing-down and energy-imparted spectra of electrons, were calculated. Furthermore, the use of DDCSs enabled the scoring of electron inter-track properties. We present novel results for 1, 5 and 20 MeV single-proton-track frequencies of distances between the nearest inter- (e⁻-e⁻, e⁻-H⁺) and intra-track (e⁻-e⁻, e⁻-H⁺, H⁺-H⁺) energy-deposition events. By setting a threshold energy of 17.5 eV, commonly employed as a surrogate to discriminate for elementary damage in the DNA, the variation in these frequencies was studied as well. The energy deposited directly by the proton represents a large amount of the total energy deposited along the track, but when an energy threshold is adopted the relative contribution of the secondary electrons becomes larger for increasing energy of the projectile. We found that the frequencies of closest energy-deposition events per nanometre decrease with proton energy, i.e. for lower proton energies a denser ionization occurs, following the trend of the characteristic LET curves. In conclusion, considering the energy depositions due to the delta electrons and at the core of the

  4. Monte Carlo simulation and analysis of proton energy-deposition patterns in the Bragg peak.

    PubMed

    González-Muñoz, Gloria; Tilly, Nina; Fernández-Varea, José M; Ahnesjö, Anders

    2008-06-07

    The spatial pattern of energy depositions is crucial for understanding the mechanisms that modify the relative biological effectiveness of different radiation qualities. In this paper, we present data on energy-deposition properties of mono-energetic protons (1-20 MeV) and their secondary electrons in liquid water. Proton-impact ionization was described by means of the Hansen-Kocbach-Stolterfoht doubly differential cross section (DDCS), thus modelling both the initial energy and angle of the emitted electron. Excitation by proton impact was included to account for the contribution of this interaction channel to the electronic stopping power of the projectile. Proton transport was implemented assuming track-segment conditions, whereas electrons were followed down to 50 eV by the Monte Carlo code PENELOPE. Electron intra-track energy-deposition properties, such as slowing-down and energy-imparted spectra of electrons, were calculated. Furthermore, the use of DDCSs enabled the scoring of electron inter-track properties. We present novel results for 1, 5 and 20 MeV single-proton-track frequencies of distances between the nearest inter- (e(-)-e(-), e(-)-H+) and intra-track (e(-)-e(-), e(-)-H+, H+-H+) energy-deposition events. By setting a threshold energy of 17.5 eV, commonly employed as a surrogate to discriminate for elementary damage in the DNA, the variation in these frequencies was studied as well. The energy deposited directly by the proton represents a large amount of the total energy deposited along the track, but when an energy threshold is adopted the relative contribution of the secondary electrons becomes larger for increasing energy of the projectile. We found that the frequencies of closest energy-deposition events per nanometre decrease with proton energy, i.e. for lower proton energies a denser ionization occurs, following the trend of the characteristic LET curves. In conclusion, considering the energy depositions due to the delta electrons and at the

  5. Clinical implementation of the Peregrine Monte Carlo dose calculations system for photon beam therapy

    SciTech Connect

    Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D

    1999-07-01

    PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However, the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.

  6. Monte Carlo fluorescence microtomography

    NASA Astrophysics Data System (ADS)

    Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge

    2011-07-01

    Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light would significantly degrade the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an ℓ0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.

  7. LMC: Logarithmantic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2017-06-01

    LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
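
    As a flavor of what such an engine does at its core, here is a bare-bones random-walk Metropolis-Hastings sampler (no adaptation, slice sampling, or MPI parallelism, and not LMC's actual API); the Gaussian target stands in for an expensive third-party likelihood.

      import numpy as np

      rng = np.random.default_rng(1)

      def log_post(theta):
          # stand-in log-posterior: standard bivariate normal
          return -0.5 * np.sum(theta ** 2)

      def metropolis(log_post, theta0, n_steps, step=0.5):
          theta = np.asarray(theta0, dtype=float)
          lp = log_post(theta)
          chain = np.empty((n_steps, theta.size))
          for i in range(n_steps):
              prop = theta + step * rng.standard_normal(theta.shape)
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:   # accept/reject rule
                  theta, lp = prop, lp_prop
              chain[i] = theta                          # repeat sample if rejected
          return chain

      chain = metropolis(log_post, [5.0, -5.0], 20_000)
      print("posterior mean estimate:", chain[5000:].mean(axis=0))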

  8. Direct Simulation Monte Carlo Calculations in Support of the Columbia Shuttle Orbiter Accident Investigation

    NASA Technical Reports Server (NTRS)

    Gallis, Michael A.; LeBeau, Gerald J.; Boyles, Katie A.

    2003-01-01

    The Direct Simulation Monte Carlo method was used to provide 3-D simulations of the early entry phase of the Shuttle Orbiter. Undamaged and damaged scenarios were modeled to provide calibration points for an engineering "bridging function" type of analysis. Currently, the simulation technology (software and hardware) is mature enough to allow realistic simulations of three-dimensional vehicles.

  9. 3-D topological signatures and a new discrimination method for single-electron events and 0νββ events in CdZnTe: A Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Zeng, Ming; Li, Teng-Lin; Cang, Ji-Rong; Zeng, Zhi; Fu, Jian-Qiang; Zeng, Wei-He; Cheng, Jian-Ping; Ma, Hao; Liu, Yi-Nong

    2017-06-01

    In neutrinoless double beta (0νββ) decay experiments, the diversity of topological signatures of different particles provides an important tool to distinguish double beta events from background events and reduce background rates. Aiming at suppressing the single-electron backgrounds which are most challenging, several groups have established Monte Carlo simulation packages to study the topological characteristics of single-electron events and 0νββ events and develop methods to differentiate them. In this paper, applying the knowledge of graph theory, a new topological signature called REF track (Refined Energy-Filtered track) is proposed and proven to be an accurate approximation of the real particle trajectory. Based on the analysis of the energy depositions along the REF track of single-electron events and 0νββ events, the REF energy deposition models for both events are proposed to indicate the significant differences between them. With these differences, this paper presents a new discrimination method, which, in the Monte Carlo simulation, achieved a single-electron rejection factor of 93.8±0.3 (stat.)% as well as a 0νββ efficiency of 85.6±0.4 (stat.)% with optimized parameters in CdZnTe.

  10. SU-E-T-644: QuAArC: A 3D VMAT QA System Based On Radiochromic Film and Monte Carlo Simulation of Log Files

    SciTech Connect

    Barbeiro, A.R.; Ureba, A.; Baeza, J.A.; Jimenez-Ortega, E.; Plaza, A. Leal; Linares, R.; Mateos, J.C.; Velazquez, S.

    2015-06-15

    Purpose: VMAT involves two main sources of uncertainty: one related to the dose calculation accuracy, and the other linked to the continuous delivery of a discrete calculation. The purpose of this work is to present QuAArC, an alternative VMAT QA system to control and potentially reduce these uncertainties. Methods: An automated MC simulation of log files, recorded during VMAT treatment plans delivery, was implemented in order to simulate the actual treatment parameters. The linac head models and the phase-space data of each Control Point (CP) were simulated using the EGSnrc/BEAMnrc MC code, and the corresponding dose calculation was carried out by means of BEAMDOSE, a DOSXYZnrc code modification. A cylindrical phantom was specifically designed to host films rolled up at different radial distances from the isocenter, for a 3D and continuous dosimetric verification. It also allows axial and/or coronal films and point measurements with several types of ion chambers at different locations. Specific software was developed in MATLAB in order to process and evaluate the dosimetric measurements, which incorporates the analysis of dose distributions, profiles, dose difference maps, and 2D/3D gamma index. It is also possible to obtain the experimental DVH reconstructed on the patient CT, by an optimization method to find the individual contribution corresponding to each CP on the film, taking into account the total measured dose, and the corresponding CP dose calculated by MC. Results: The QuAArC system showed high reproducibility of measurements, and consistency with the results obtained with the commercial system implemented in the verification of the evaluated treatment plans. Conclusion: A VMAT QA system based on MC simulation and high resolution dosimetry with film has been developed for treatment verification. It proves useful for studying real VMAT capabilities, as well as for linac commissioning and the evaluation of other verification devices.

  11. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve.
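
    The procedure translates directly out of the spreadsheet: fit once, then refit many 'virtual' data sets generated from the fitted curve plus residual-scale noise, and read confidence intervals and correlations off the resulting parameter ensemble. A Python equivalent with a hypothetical exponential model (not the paper's growth models):

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(4)

      def model(x, a, b):
          return a * np.exp(-b * x)       # placeholder non-linear model

      # hypothetical experimental data
      x = np.linspace(0, 10, 20)
      y = model(x, 2.0, 0.3) + rng.normal(0, 0.05, x.size)

      popt, _ = curve_fit(model, x, y, p0=[1.0, 0.1])
      resid_sd = np.std(y - model(x, *popt), ddof=len(popt))

      # Monte Carlo: refit many 'virtual' data sets built from the fitted curve
      sims = []
      for _ in range(200):                # mirrors the 200-fit SOLVER batches
          y_virtual = model(x, *popt) + rng.normal(0, resid_sd, x.size)
          p, _ = curve_fit(model, x, y_virtual, p0=popt)
          sims.append(p)
      sims = np.array(sims)

      ci = np.percentile(sims, [2.5, 97.5], axis=0)   # parameter confidence intervals
      corr = np.corrcoef(sims.T)                      # parameter correlations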

  12. Monte Carlo simulations of growth/decay rate constant ratios for small methanol clusters: Application to nucleation data analysis

    NASA Astrophysics Data System (ADS)

    Hale, Barbara; Wilemski, Gerald; Viets, Aaron

    2013-05-01

    The Bennett Monte Carlo technique and the potential of van Leeuwen and Smit are used to calculate growth/decay rate constant ratios for small model methanol clusters at 220 K, 240 K and 260 K. Temperature scaling properties of the rate constant ratios are demonstrated at these temperatures. The Monte Carlo results are used to study heat release from subcritical cluster formation in adiabatic nucleation rate measurements and to determine corrected final temperatures and supersaturation ratios for the methanol data of Strey, Wagner, and Schmeling. The corrected T and S values provide experimental rates with improved scaling properties. Nucleation rates are also calculated from the Monte Carlo free energy differences for the model methanol clusters and demonstrate the same scaling.

  13. An improved statistical analysis for predicting the critical temperature and critical density with Gibbs ensemble Monte Carlo simulation.

    PubMed

    Messerly, Richard A; Rowley, Richard L; Knotts, Thomas A; Wilding, W Vincent

    2015-09-14

    A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended due to the following results. First, the traditional propagation of error approach for estimating the standard deviations used in regression improperly weighs the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.
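
    The nonlinear regression in question can be sketched as a joint fit of the law of rectilinear diameters and the density-gap scaling law to simulated coexistence densities. The data, initial guesses, and error model below are hypothetical stand-ins for GEMC output:

      import numpy as np
      from scipy.optimize import curve_fit

      BETA = 0.326   # 3D Ising critical exponent

      def coexistence(T, Tc, rhoc, A, B):
          # law of rectilinear diameters plus the density-gap scaling law;
          # liquid and vapor branches are stacked into one response vector
          dT = Tc - T
          mean = rhoc + A * dT
          half = 0.5 * B * np.abs(dT) ** BETA
          return np.concatenate([mean + half, mean - half])

      # hypothetical GEMC coexistence densities (mol/L) at temperatures T (K)
      T = np.array([300.0, 320.0, 340.0, 360.0])
      rho_l = np.array([9.8, 9.2, 8.5, 7.4])
      rho_v = np.array([0.3, 0.6, 1.1, 2.0])
      sigma = np.full(8, 0.05)        # stand-in error model for the densities

      popt, pcov = curve_fit(coexistence, T, np.concatenate([rho_l, rho_v]),
                             p0=[400.0, 5.0, 0.02, 2.0], sigma=sigma,
                             absolute_sigma=True)
      Tc, rhoc = popt[0], popt[1]
      print("Tc =", Tc, "rhoc =", rhoc)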

  14. An improved statistical analysis for predicting the critical temperature and critical density with Gibbs ensemble Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Messerly, Richard A.; Rowley, Richard L.; Knotts, Thomas A.; Wilding, W. Vincent

    2015-09-01

    A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended due to the following results. First, the traditional propagation of error approach for estimating the standard deviations used in regression improperly weighs the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.

  15. Using calibration constrained Monte Carlo analysis of alternative conceptual models in land use management of drained fens

    NASA Astrophysics Data System (ADS)

    Rossi, Pekka; Ala-aho, Pertti; Doherty, John; Kløve, Bjørn

    2013-04-01

    Quantification of groundwater model uncertainties is one of the key aspects when using models to direct land use or water management. An esker aquifer with a size of 90 km² was studied to understand how the surrounding peatland forestry drainage, groundwater abstraction and climate variability can affect the aquifer groundwater level and the water levels of groundwater-dependent lakes of the area. The aquifer was studied with steady-state groundwater models using three alternative conceptual geological models of the esker, applying calibration-constrained Null Space Monte Carlo uncertainty analysis and linear analysis to each model. This kind of simulation approach has not previously been used in peatland management. Models and analyses were used to observe the effects of different land use scenarios, e.g. peatland drainage restoration or water abstraction for a nearby city, and of climate variability. Data from the models and analyses give decision makers insight into how different management practices in peatlands can affect the groundwater system, given the uncertainties arising from the geological understanding, hydrological measurements, and model conceptualization. Results from the models can be used, for example, to pinpoint restoration or conservation of specific peatland drainage areas in which the models suggest the clearest connection to the aquifer water level.

  16. Risk Assessment and Prediction of Flyrock Distance by Combined Multiple Regression Analysis and Monte Carlo Simulation of Quarry Blasting

    NASA Astrophysics Data System (ADS)

    Armaghani, Danial Jahed; Mahdiyar, Amir; Hasanipanah, Mahdi; Faradonbeh, Roohollah Shirani; Khandelwal, Manoj; Amnieh, Hassan Bakhshandeh

    2016-09-01

    Flyrock is considered one of the main causes of human injury, fatalities, and structural damage among all undesirable environmental impacts of blasting. Therefore, proper prediction/simulation of flyrock is essential, especially in order to determine the blast safety area. If proper control measures are taken, then the flyrock distance can be controlled and, in return, the risk of damage can be reduced or eliminated. The first objective of this study was to develop a predictive model for flyrock estimation based on multiple regression (MR) analyses; using the developed MR model, the flyrock phenomenon was then simulated by the Monte Carlo (MC) approach. In order to achieve the objectives of this study, 62 blasting operations were investigated in the Ulu Tiram quarry, Malaysia, and some controllable and uncontrollable factors were carefully recorded/calculated. The obtained results of MC modeling indicated that this approach is capable of simulating flyrock ranges with a good level of accuracy. The mean of the simulated flyrock by MC was obtained as 236.3 m, while this value was 238.6 m for the measured one. Furthermore, a sensitivity analysis was also conducted to investigate the effects of model inputs on the output of the system. The analysis demonstrated that powder factor is the most influential parameter on flyrock among all model inputs. It is noticeable that the proposed MR and MC models should be utilized only in the studied area, and their direct use in other conditions is not recommended.
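
    The MC step amounts to propagating assumed input distributions through the fitted regression equation and reading off percentiles of the predicted distance. The coefficients and distributions below are invented for illustration, not the study's fitted model:

      import numpy as np

      rng = np.random.default_rng(5)
      N = 100_000

      # hypothetical distributions for the blasting inputs (illustration only)
      burden        = rng.normal(3.5, 0.4, N)                # m
      stemming      = rng.normal(2.8, 0.3, N)                # m
      powder_factor = rng.lognormal(np.log(0.55), 0.15, N)   # kg/m^3

      # hypothetical fitted multiple-regression model for flyrock distance (m)
      flyrock = 120.0 - 18.0 * burden - 10.0 * stemming + 350.0 * powder_factor

      print("mean flyrock (m):", flyrock.mean())
      print("95th percentile (m):", np.percentile(flyrock, 95))  # safety-radius basis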

  17. Monte Carlo simulation of Li+ motion in polyethylene based on polarization energy calculations and informed by data compression analysis.

    PubMed

    Scarle, S; Sterzel, M; Eilmes, A; Munn, R W

    2005-10-15

    We present an n-fold way kinetic Monte Carlo simulation of the hopping motion of Li+ ions in polyethylene on a grid of mesh 0.36 Å superimposed on the voids of the rigid polymer. The structure of the polymer is derived from a higher-order simulation, and the energy of the ion at each site is derived by the self-consistent polarization field method. The ion motion evolves in time from free flight through anomalous diffusion to normal diffusion, with the average energy tending to decrease with increasing temperature through thermal annealing. We compare the results with those of hopping models with probabilistic energy distributions of increasing complexity by analyzing the mean-square displacement and the average energy of an ensemble of ions. The Gumbel distribution describes the ion energy statistics in this system better than the usual Gaussian distribution does; including energy correlation greatly affects the ion dynamics. The analysis uses the standard data compression program GZIP, which proves to be a powerful tool for data analysis by giving a measure of recurrences in the ion path.
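
    An n-fold way (Bortz-Kalos-Lebowitz) move selects a hop with probability proportional to its rate and advances the clock by an exponentially distributed residence time, so no proposals are rejected. A toy one-dimensional version with made-up site energies (not the polyethylene void grid or the self-consistent polarization energies):

      import numpy as np

      rng = np.random.default_rng(2)
      kT = 0.025   # eV, illustrative temperature

      def nfold_way_step(site_energies, neighbours, current, t):
          """One n-fold way move: pick a hop with probability proportional to
          its rate, then advance the clock by an exponential waiting time."""
          targets = neighbours[current]
          barriers = np.maximum(site_energies[targets] - site_energies[current], 0.0)
          rates = np.exp(-barriers / kT)        # Metropolis-like hop rates
          total = rates.sum()
          nxt = targets[rng.choice(len(targets), p=rates / total)]
          t += -np.log(rng.random()) / total    # residence time ~ Exp(total rate)
          return nxt, t

      # tiny periodic 1D lattice: each site can hop left or right
      energies = rng.normal(0.0, 0.05, 50)
      nbrs = {i: np.array([(i - 1) % 50, (i + 1) % 50]) for i in range(50)}
      site, t = 0, 0.0
      for _ in range(1000):
          site, t = nfold_way_step(energies, nbrs, site, t)
      print("final site:", site, "elapsed time:", t)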

  18. Novel hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization estimation method for population pharmacokinetic data analysis.

    PubMed

    Ng, C M

    2013-10-01

    The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. A graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of a parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for the single CPU (MCPEMCPU) were developed using MATLAB in a single computer equipped with dual Xeon 6-Core E5690 CPU and a NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. Speedup factor was used to assess the relative benefit of parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core of the next generation of modeling software for population PK/PD analysis.

  19. Electronic structure quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bajdich, Michal; Mitas, Lubos

    2009-04-01

    Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of many-body quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on the electronic structure QMC, i.e., methods relevant for systems described by the electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with

  20. A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF).

    PubMed

    Hansson, Marie; Isaksson, Mats

    2007-04-07

    X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in the cases when the measurement situation largely differs from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to get an estimate of the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by exclusion of electrons and by implementation of interaction forcing was conducted. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients while the simulation time involved in an individual calibration was low enough to be clinically feasible.

  1. Application of Markov chain Monte Carlo analysis to biomathematical modeling of respirable dust in US and UK coal miners.

    PubMed

    Sweeney, Lisa M; Parker, Ann; Haber, Lynne T; Tran, C Lang; Kuempel, Eileen D

    2013-06-01

    A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model.

  2. Increased risk of orofacial clefts associated with maternal obesity: case–control study and Monte Carlo-based bias analysis

    PubMed Central

    Stott-Miller, Marni; Heike, Carrie L.; Kratz, Mario; Starr, Jacqueline R.

    2010-01-01

    Our objective was to evaluate whether infants born to obese or diabetic women are at higher risk of non-syndromic orofacial clefting. We conducted a population-based case–control study using Washington State birth certificate and hospitalisation data for the years 1987–2005. Cases were infants born with orofacial clefts (n = 2153) and controls were infants without orofacial clefts (n = 18 070). The primary exposures were maternal obesity (body mass index ≥30) and diabetes (either pre-existing or gestational). We estimated adjusted odds ratios (ORs) to compare, for mothers of cases and controls, the proportions of obese vs. normal-weight women and diabetic vs. non-diabetic women. We additionally performed Monte Carlo-based simulation analysis to explore possible influences of biases. Obese women had a small increased risk of isolated orofacial clefts in their offspring compared with normal-body mass index women [adjusted OR 1.26; 95% confidence interval 1.03, 1.55]. Results were similar regardless of type of cleft. Bias analyses suggest that estimates may represent underlying ORs of stronger magnitude. Results for diabetic women were highly imprecise and inconsistent. We and others have observed weak associations of similar magnitude between maternal obesity and risk of non-syndromic orofacial clefts. These results could be due to bias or residual confounding. However, it is also possible that these results represent a stronger underlying association. More precise exposure measurement could help distinguish between these two possibilities. PMID:20670231
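
    Monte Carlo (probabilistic) bias analysis of this kind repeatedly draws bias parameters, such as the sensitivity and specificity of recorded exposure, back-corrects the observed 2x2 table, and summarizes the distribution of bias-adjusted odds ratios. The counts and priors below are invented for illustration and do not reproduce the study's analysis:

      import numpy as np

      rng = np.random.default_rng(8)
      N = 50_000

      # observed 2x2 counts (exposed = obese): hypothetical numbers
      a, b = 320, 1833      # cases: exposed / unexposed
      c, d = 2400, 15670    # controls: exposed / unexposed

      or_adj = []
      for _ in range(N):
          # draw classification parameters for recorded obesity from priors
          se = rng.beta(60, 15)   # sensitivity
          sp = rng.beta(95, 3)    # specificity
          # back-correct exposed counts for nondifferential misclassification
          a0 = (a - (1 - sp) * (a + b)) / (se + sp - 1)
          c0 = (c - (1 - sp) * (c + d)) / (se + sp - 1)
          b0, d0 = (a + b) - a0, (c + d) - c0
          if min(a0, b0, c0, d0) > 0:               # keep only admissible tables
              or_adj.append((a0 * d0) / (b0 * c0))

      print(np.percentile(or_adj, [2.5, 50, 97.5]))  # bias-adjusted OR interval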

  3. Predicting morphological changes DS New Naga-Hammadi Barrage for extreme Nile flood flows: A Monte Carlo analysis

    PubMed Central

    Sattar, Ahmed M.A.; Raslan, Yasser M.

    2013-01-01

    While construction of the Aswan High Dam (AHD) has stopped recurrent flooding events, the River Nile is still subject to low intensity flood waves resulting from controlled release of water from the dam reservoir. Analysis of flow released from the New Naga-Hammadi Barrage, which is located 3460 km downstream of the AHD, indicated an increase in the magnitude of floods released from the barrage in the past 10 years. A 2D numerical mobile bed model is utilized to investigate the possible morphological changes downstream of the Naga-Hammadi Barrage from possible higher flood releases. Monte Carlo simulation analysis (MCS) is applied to the deterministic results of the 2D model to account for and assess the uncertainty of sediment parameters and formulations, in addition to the scarcity of field measurements. Results showed that the predicted volume of erosion yielded the highest uncertainty and variation from the deterministic run, while navigation velocity yielded the least uncertainty. Furthermore, the error budget method is used to rank various sediment parameters for their contribution to the total prediction uncertainty. It is found that the suspended sediment contributed to output uncertainty more than the other sediment parameters, followed by bed load with a contribution roughly an order of magnitude smaller. PMID:25685476

  4. Identification of layers in optical coherence tomography of skin: comparative analysis of experimental and Monte Carlo simulated images.

    PubMed

    Shlivko, I L; Kirillin, M Yu; Donchenko, E V; Ellinsky, D O; Garanina, O E; Neznakhina, M S; Agrba, P D; Kamensky, V A

    2015-11-01

    The goal of this study is a comparative analysis of the layers in OCT images and the morphological structure of skin with thick and thin epidermis. We analyzed the difference between skin with thin and thick epidermis in two ways. The first approach consisted of determining the thicknesses of layers of skin with thin and thick epidermis at different localizations from experimental OCT images. The second approach was to develop numerical models fitting experimental OCT images based on Monte Carlo simulations, revealing the structure and optical parameters of layers of skin with thick and thin epidermis. The correspondence between the OCT images of skin with thin and thick epidermis and the morphological structure was confirmed. OCT images of healthy skin comprise three layers in the case of skin with thin epidermis and four layers in skin with thick epidermis. The OCT image of the zone of transition from skin with thick to skin with thin epidermis features five layers. The revealed differences in the structure of the horny and cellular layers of the epidermis, as well as of the papillary and reticular dermis in skin with thin and thick epidermis, specify different optical properties of these layers in OCT images.

  5. Is adding more indicators to a latent class analysis beneficial or detrimental? Results of a Monte-Carlo study.

    PubMed

    Wurpts, Ingrid C; Geiser, Christian

    2014-01-01

    The purpose of this study was to examine in which way adding more indicators or a covariate influences the performance of latent class analysis (LCA). We varied the sample size (100 ≤ N ≤ 2000), number, and quality of binary indicators (between 4 and 12 indicators with conditional response probabilities of [0.3, 0.7], [0.2, 0.8], or [0.1, 0.9]), and the strength of covariate effects (zero, small, medium, large) in a Monte Carlo simulation study of 2- and 3-class models. The results suggested that in general, a larger sample size, more indicators, a higher quality of indicators, and a larger covariate effect lead to more converged and proper replications, as well as fewer boundary parameter estimates and less parameter bias. Furthermore, interactions among these study factors demonstrated how using more or higher quality indicators, as well as larger covariate effect size, could sometimes compensate for small sample size. Including a covariate appeared to be generally beneficial, although the covariate parameters themselves showed relatively large bias. Our results provide useful information for practitioners designing an LCA study in terms of highlighting the factors that lead to better or worse performance of LCA.

  6. Application of Markov chain Monte Carlo analysis to biomathematical modeling of respirable dust in US and UK coal miners

    PubMed Central

    Sweeney, Lisa M.; Parker, Ann; Haber, Lynne T.; Tran, C. Lang; Kuempel, Eileen D.

    2015-01-01

    A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model. PMID:23454101

  7. Monte Carlo simulations of subsurface analysis of painted layers in micro-scale spatially offset Raman spectroscopy.

    PubMed

    Matousek, Pavel; Conti, Claudia; Colombo, Chiara; Realini, Marco

    2015-09-01

    A recently developed micrometer-scale spatially offset Raman spectroscopy (micro-SORS) method provides a new analytical capability for investigating nondestructively the chemical composition of subsurface, micrometer-scale-thick, diffusely scattering layers at depths beyond the reach of conventional confocal Raman microscopy. Here we provide, for the first time, the theoretical foundations for the micro-SORS defocusing concept based on Monte Carlo simulations. Specifically, we investigate a defocusing variant of micro-SORS that we used in our recent proof-of-concept study in conditions involving thin, diffusely scattering layers on top of an extended, diffusely scattering substrate. This configuration is pertinent, for example, for the subsurface analysis of painted layers in cultural heritage studies. The depth of the origin of Raman signal and the relative micro-SORS enhancement of the sublayer signals reached are studied as a function of layer thickness, sample photon transport length, and absorption. The model predicts that sublayer enhancement initially rapidly increases with increasing defocusing, ultimately reaching a plateau. The magnitude of the enhancement was found to be larger for thicker layers. The simulations also indicate that the penetration depths of micro-SORS can be between one and two orders of magnitude larger than those reached using conventional confocal Raman microscopy. The model provides a deeper insight into the underlying Raman photon migration mechanisms permitting the more effective optimization of experimental conditions for specific sample parameters.

  8. Predicting morphological changes DS New Naga-Hammadi Barrage for extreme Nile flood flows: A Monte Carlo analysis.

    PubMed

    Sattar, Ahmed M A; Raslan, Yasser M

    2014-01-01

    While construction of the Aswan High Dam (AHD) has stopped recurrent flooding events, the River Nile is still subject to low intensity flood waves resulting from controlled release of water from the dam reservoir. Analysis of flow released from the New Naga-Hammadi Barrage, which is located 3460 km downstream of the AHD, indicated an increase in the magnitude of floods released from the barrage in the past 10 years. A 2D numerical mobile bed model is utilized to investigate the possible morphological changes downstream of the Naga-Hammadi Barrage from possible higher flood releases. Monte Carlo simulation analysis (MCS) is applied to the deterministic results of the 2D model to account for and assess the uncertainty of sediment parameters and formulations, in addition to the scarcity of field measurements. Results showed that the predicted volume of erosion yielded the highest uncertainty and variation from the deterministic run, while navigation velocity yielded the least uncertainty. Furthermore, the error budget method is used to rank various sediment parameters for their contribution to the total prediction uncertainty. It is found that the suspended sediment contributed to output uncertainty more than the other sediment parameters, followed by bed load with a contribution roughly an order of magnitude smaller.

  9. A MARKOV CHAIN MONTE CARLO ALGORITHM FOR ANALYSIS OF LOW SIGNAL-TO-NOISE COSMIC MICROWAVE BACKGROUND DATA

    SciTech Connect

    Jewell, J. B.; O'Dwyer, I. J.; Huey, Greg; Gorski, K. M.; Eriksen, H. K.; Wandelt, B. D. E-mail: h.k.k.eriksen@astro.uio.no

    2009-05-20

    We present a new Markov Chain Monte Carlo (MCMC) algorithm for cosmic microwave background (CMB) analysis in the low signal-to-noise regime. This method builds on and complements the previously described CMB Gibbs sampler, and effectively solves the low signal-to-noise inefficiency problem of the direct Gibbs sampler. The new algorithm is a simple Metropolis-Hastings sampler with a general proposal rule for the power spectrum, C_l, followed by a particular deterministic rescaling operation of the sky signal, s. The acceptance probability for this joint move depends on the sky map only through the difference of χ² between the original and proposed sky sample, which is close to unity in the low signal-to-noise regime. The algorithm is completed by alternating this move with a standard Gibbs move. Together, these two proposals constitute a computationally efficient algorithm for mapping out the full joint CMB posterior, both in the high and low signal-to-noise regimes.

  10. Quantum Monte Carlo analysis of a charge ordered insulating antiferromagnet: the Ti4O7 Magnéli phase

    DOE PAGES

    Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T.; ...

    2016-06-07

    The Magnéli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of its interplay between charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. In this paper, we have examined three low-lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate quantum Monte Carlo methods. We compare our results to those obtained from density functional theory-based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density-functional theory methods. However, the energy differences and spin distributions differ. Finally, a detailed analysis suggests that non-local exchange–correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates for spin moments, distributions, energy differences and energy gaps.

  12. Fermion Monte Carlo

    SciTech Connect

    Kalos, M. H.; Pederiva, F.

    1998-12-01

    We review the fundamental challenge of fermion Monte Carlo for continuous systems, the "sign problem". We seek that eigenfunction of the many-body Schrödinger equation that is antisymmetric under interchange of the coordinates of pairs of particles. We describe methods that depend upon the use of correlated dynamics for pairs of correlated walkers that carry opposite signs. There is an algorithmic symmetry between such walkers that must be broken to create a method that is both exact and as effective as for symmetric functions. In our new method, it is broken by using different "guiding" functions for walkers of opposite signs, and a geometric correlation between steps of their walks. With a specific process of cancellation of the walkers, overlaps with antisymmetric test functions are preserved. Finally, we describe the progress in treating free-fermion systems and a fermion fluid with 14 ³He atoms.

  13. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  14. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence.

    PubMed

    Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung

    2016-01-01

    Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and traditional family studies. The "missing heritability" has been suggested to be due to a lack of studies focused on epistasis, also called gene-gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power. However, sufficiently detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called "Epistasis Test in Meta-Analysis" (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates of ETMA, individual data analysis, and the conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded acceptable type I error and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene-gene interactions in the renin-angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We concluded that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html].

  15. Meta-Analysis of Single-Case Data: A Monte Carlo Investigation of a Three Level Model

    ERIC Educational Resources Information Center

    Owens, Corina M.

    2011-01-01

    Numerous ways to meta-analyze single-case data have been proposed in the literature; however, consensus on the most appropriate method has not been reached. One method that has been proposed involves multilevel modeling. This study used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw data multilevel…

  16. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence

    PubMed Central

    Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung

    2016-01-01

    Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and traditional family studies. The “missing heritability” has been suggested to be due to a lack of studies focused on epistasis, also called gene–gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power. However, sufficiently detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called “Epistasis Test in Meta-Analysis” (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates of ETMA, individual data analysis, and the conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded acceptable type I error and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene–gene interactions in the renin–angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We concluded that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html]. PMID:27045371

  17. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE PAGES

    Biondo, Elliott D.; Wilson, Paul P. H.

    2017-05-08

    In fusion energy systems (FES) neutrons born from burning plasma activate system components. The photon dose rate after shutdown from resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10⁴ relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  18. Monte Carlo uncertainty analysis of a diffusion model for the assessment of halogen gas exposure during dosing of brominators.

    PubMed

    Shade, W D; Jayjock, M A

    1997-06-01

    Monte Carlo simulation was incorporated into a diffusion-based exposure assessment model for the estimation of worker exposure to halogen gases during dosing of 500-lb sacks of a bromine-based biocide (BCDMH) into brominators. Indoor and outdoor dosing scenarios were modeled for small and large brominators. The diffusion model used describes a concentration gradient of halogen as a function of distance and time from the source. Instead of ascribing worst-case single-point value estimates to the variables used in the diffusion model, Monte Carlo simulation was used to describe a distribution of values for each appropriate model variable. Using a personal computer and Monte Carlo simulation software, 10,000 iterations of the diffusion model were performed for four different dosing scenarios using random and independent samples from the distributions entered. The corresponding output distributions of predicted exposures were then calculated and displayed graphically for each scenario. The results of the Monte Carlo simulation predict that outdoor dosing of either small or large brominators with BCDMH is highly unlikely to result in an exceedance of the working occupational exposure limit for total halogen. In most ambient wind speed conditions, diffusion prevents appreciable airborne exposure to workers in the immediate vicinity of the brominator. Although relatively uncommon, dosing of brominators indoors in the assumed absence of local exhaust ventilation may generate airborne concentrations of total halogen that exceed the working short-term occupational exposure limit. Although very limited and inconclusive, field trial monitoring of BCDMH transfer operations indoors resulted in halogen concentrations well within the distribution of concentrations predicted by the Monte Carlo simulation of the diffusion model.
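
    In outline, that procedure is: assign a distribution to each model input, draw 10,000 random and independent parameter sets, evaluate the diffusion model for each, and read exposure percentiles off the output distribution. The sketch below uses the instantaneous point-source solution of the diffusion equation as a stand-in for the paper's model; all distributions and numerical values are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000

        # Hypothetical input distributions (the paper's actual values differ).
        mass = rng.triangular(0.5, 1.0, 2.0, size=n)    # g of halogen released
        D = rng.lognormal(np.log(0.05), 0.4, size=n)    # m^2/s effective diffusivity
        r = rng.uniform(0.3, 1.0, size=n)               # m, worker distance from source
        t = rng.uniform(30, 300, size=n)                # s since release

        # Instantaneous point-source solution of the diffusion equation:
        # C(r, t) = M / (4*pi*D*t)^(3/2) * exp(-r^2 / (4*D*t))
        conc = mass / (4 * np.pi * D * t) ** 1.5 * np.exp(-(r**2) / (4 * D * t))

        print(f"median = {np.median(conc):.3g} g/m^3, "
              f"95th percentile = {np.percentile(conc, 95):.3g} g/m^3")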

  19. Thermal and second-law analysis of a micro- or nanocavity using direct-simulation Monte Carlo.

    PubMed

    Mohammadzadeh, Alireza; Roohi, Ehsan; Niazmand, Hamid; Stefanov, Stefan; Myong, Rho Shin

    2012-05-01

    In this study the direct-simulation Monte Carlo (DSMC) method is utilized to investigate thermal characteristics of micro- or nanocavity flow. The rarefied cavity flow shows unconventional behaviors which cannot be predicted by the Fourier law, the constitutive relation for continuum heat transfer. Our analysis in this study confirms some recent observations and shows that the gaseous flow near the top-left corner of the cavity is in a strong nonequilibrium state even within the early slip regime, Kn=0.005. Having obtained the slip velocity and temperature jump on the driven lid of the cavity, we report meaningful discrepancies between the direct and macroscopic sampling of rarefied flow properties in the DSMC method due to the existence of nonequilibrium effects in the corners of the cavity. The existence of unconventional nonequilibrium heat transfer mechanisms in the middle of the slip regime, Kn=0.05, results in the appearance of cold-to-hot heat transfer in the microcavity. In the current study we demonstrate that the existence of such unconventional heat transfer is strongly dependent on the Reynolds number, and that it vanishes at large values of the lid velocity. Comparing the DSMC solution with the results of the regularized 13-moment (R13) equations, we show that the thermal characteristics of the microcavity obtained by the R13 method coincide with the DSMC prediction. Our investigation also includes the analysis of molecular entropy in the microcavity to explain the heat transfer mechanism with the aid of the second law of thermodynamics. To this aim, we obtained the two-dimensional velocity distribution functions to report the molecular-based entropy distribution, and show that the cold-to-hot heat transfer in the cavity is well in accordance with the second law of thermodynamics and takes place in the direction of increasing entropy. At the end we introduce the entropy density for the rarefied flow and show that it can accurately illustrate departure from the

  20. Parallel tempering Monte Carlo combined with clustering Euclidean metric analysis to study the thermodynamic stability of Lennard-Jones nanoclusters

    NASA Astrophysics Data System (ADS)

    Cezar, Henrique M.; Rondina, Gustavo G.; Da Silva, Juarez L. F.

    2017-02-01

    A basic requirement for an atom-level understanding of nanoclusters is the knowledge of their atomic structure. This understanding is incomplete if it does not take into account temperature effects, which play a crucial role in phase transitions and changes in the overall stability of the particles. Finite size particles present intricate potential energy surfaces, and rigorous descriptions of temperature effects are best achieved by exploiting extended ensemble algorithms, such as Parallel Tempering Monte Carlo (PTMC). In this study, we employed the PTMC algorithm, implemented from scratch, to sample configurations of LJ_n (n = 38, 55, 98, 147) particles at a wide range of temperatures. The heat capacities and phase transitions obtained with our PTMC implementation are consistent with all the expected features for the LJ nanoclusters, e.g., solid-to-solid and solid-to-liquid transitions. To identify the known phase transitions and assess the prevalence of various structural motifs available at different temperatures, we propose a combination of a Leader-like clustering algorithm based on a Euclidean metric with the PTMC sampling. This combined approach is further compared with the more computationally demanding bond order analysis, typically employed for this kind of problem. We show that the clustering technique yields the same results in most cases, with the advantage that it requires no previous knowledge of the parameters defining each geometry. Being simple to implement, we believe that this straightforward clustering approach is a valuable data analysis tool that can provide insights into the physics of finite size particles with a few to thousands of atoms at a relatively low cost.
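
    The core of a PTMC sampler fits in a few lines: independent Metropolis walkers at a ladder of temperatures, plus occasional swap attempts between neighbouring replicas accepted with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). The sketch below uses a one-dimensional double-well energy as a stand-in for the LJ cluster potential energy surface; the temperature ladder and step sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        def energy(x):
            # Toy double-well potential standing in for the LJ cluster energy.
            return (x**2 - 1.0) ** 2

        betas = 1.0 / np.linspace(0.1, 2.0, 8)   # inverse temperatures of the replicas
        xs = rng.normal(size=betas.size)         # one walker per replica

        for sweep in range(10_000):
            # Local Metropolis move within each replica.
            for i, beta in enumerate(betas):
                prop = xs[i] + rng.normal(scale=0.5)
                d_e = energy(prop) - energy(xs[i])
                if d_e <= 0 or rng.random() < np.exp(-beta * d_e):
                    xs[i] = prop
            # Swap attempt between a random pair of neighbouring replicas.
            j = rng.integers(betas.size - 1)
            delta = (betas[j] - betas[j + 1]) * (energy(xs[j]) - energy(xs[j + 1]))
            if delta >= 0 or rng.random() < np.exp(delta):
                xs[j], xs[j + 1] = xs[j + 1], xs[j]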

  1. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    NASA Astrophysics Data System (ADS)

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water input to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses; this implies that the processes of percolation and evaporation impact the hydrological process in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model
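
    As an illustration of the factorial-analysis side of this approach, the sketch below builds a full three-level factorial design, evaluates a stand-in model at every design point, and ranks factors by the spread of their level means. This is a 3^3 toy (the study's 81-run design covers more parameters), and the response function, factor names, and effect sizes are all hypothetical.

        import itertools
        import numpy as np

        rng = np.random.default_rng(6)

        levels = [-1.0, 0.0, 1.0]                       # coded low/mid/high levels
        design = np.array(list(itertools.product(levels, repeat=3)))

        def response(cn2, sno50cov, esco):
            # Stand-in hydrological response with one interaction term.
            return (2.0 * cn2 + 1.0 * sno50cov + 0.5 * esco
                    + 0.8 * cn2 * sno50cov + rng.normal(0.0, 0.1))

        y = np.array([response(*row) for row in design])

        # Main effect of each factor: variance of the mean response across levels.
        for j, name in enumerate(["CN2", "SNO50COV", "ESCO"]):
            level_means = [y[design[:, j] == lv].mean() for lv in levels]
            print(f"{name}: variance of level means = {np.var(level_means):.3f}")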

  2. Monte Carlo analysis: error of extrapolated thermal conductivity from molecular dynamics simulations

    SciTech Connect

    Liu, Xiang-Yang; Andersson, Anders David

    2016-11-07

    In this short report, we give an analysis of the extrapolated thermal conductivity of UO2 from earlier molecular dynamics (MD) simulations [1]. Because almost all material properties, e.g. fission gas release, are functions of temperature, the fuel thermal conductivity is the most important parameter from a model sensitivity perspective [2]. Thus, it is useful to perform such an analysis.

  3. A Monte Carlo Investigation of the Analysis of Variance Applied to Non-Independent Bernoulli Variates.

    ERIC Educational Resources Information Center

    Draper, John F., Jr.

    The applicability of analysis of variance (ANOVA) procedures to the analysis of dichotomous repeated-measures data is described. The design models for which data were simulated in this investigation were chosen to represent simple cases of two experimental situations: situation one, in which subjects' responses to a single randomly selected set…

  4. Monte Carlo Criticality Analysis of Simple Geometries Containing Tungsten Rhenium Alloys Engrained with Uranium Dioxide and Uranium Mononitride

    SciTech Connect

    Jonathan A. Webb; Indrajit Charit

    2011-08-01

    The critical mass and dimensions of simple geometries containing highly enriched uranium dioxide (UO2) and uranium mononitride (UN) encapsulated in tungsten-rhenium alloys are determined using MCNP5 criticality calculations. Spheres as well as cylinders with length-to-radius ratios of 1.82 are computationally built to consist of 60 vol.% fuel and 40 vol.% metal matrix. Within the geometries the uranium is enriched to 93 wt.% uranium-235 and the rhenium content within the metal alloy was modeled over a range of 0 to 30 at.%. The spheres containing UO2 were determined to have a critical radius of 18.29 cm to 19.11 cm and a critical mass ranging from 366 kg to 424 kg. The cylinders containing UO2 were found to have a critical radius ranging from 17.07 cm to 17.844 cm with a corresponding critical mass of 406 kg to 471 kg. Spheres engrained with UN were determined to have a critical radius ranging from 14.82 cm to 15.19 cm and a critical mass between 222 kg and 242 kg. Cylinders which were engrained with UN were determined to have a critical radius ranging from 13.811 cm to 14.155 cm with a corresponding critical mass of 245 kg to 267 kg. The critical geometries were also computationally submerged in a neutronically infinite medium of fresh water to determine the effects of rhenium addition on criticality accidents due to water submersion. The Monte Carlo analysis demonstrated that rhenium addition of up to 30 at.% can reduce the excess reactivity due to water submersion by up to $5.07 for UO2-fueled cylinders, $3.87 for UO2-fueled spheres and approximately $3.00 for UN-fueled spheres and cylinders.

  5. Leasing policy and the rate of petroleum development: analysis with a Monte Carlo simulation model

    SciTech Connect

    Abbey, D; Bivins, R

    1982-03-01

    The study has two objectives: first, to consider whether alternative leasing systems are desirable to speed the rate of oil and gas exploration and development in frontier basins; second, to evaluate the Petroleum Activity and Decision Simulation model developed by the US Department of the Interior for economic and land use planning and for policy analysis. Analysis of the model involved structural variation of the geology, exploration, and discovery submodels and also involved a formal sensitivity analysis using the Latin Hypercube Sampling Method. We report the rate of exploration, discovery, and petroleum output under a variety of price, leasing policy, and tax regimes.

  6. BOOTSTRAPPING AND MONTE CARLO METHODS OF POWER ANALYSIS USED TO ESTABLISH CONDITION CATEGORIES FOR BIOTIC INDICES

    EPA Science Inventory

    Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically the number of categories has been based on professional judgement. Alternatively, statistical methods such as power analysis can be used to determine the ...

  8. Water and tissue equivalence of a new PRESAGE® formulation for 3D proton beam dosimetry: A Monte Carlo study

    SciTech Connect

    Gorjiara, Tina; Kuncic, Zdenka; Doran, Simon; Adamovics, John; Baldock, Clive

    2012-11-15

    Purpose: To evaluate the water and tissue equivalence of a new PRESAGE® 3D dosimeter for proton therapy. Methods: The GEANT4 software toolkit was used to calculate and compare total dose delivered by a proton beam with mean energy 62 MeV in a PRESAGE® dosimeter, water, and soft tissue. The dose delivered by primary protons and secondary particles was calculated. Depth-dose profiles and isodose contours of deposited energy were compared for the materials of interest. Results: The proton beam range was found to be ≈27 mm for PRESAGE®, 29.9 mm for soft tissue, and 30.5 mm for water. This can be attributed to the lower collisional stopping power of water compared to soft tissue and PRESAGE®. The difference between total dose delivered in PRESAGE® and total dose delivered in water or tissue is less than 2% across the entire water/tissue-equivalent range of the proton beam. The largest difference between total dose in PRESAGE® and total dose in water is 1.4%, while for soft tissue it is 1.8%. In both cases, this occurs at the distal end of the beam. Nevertheless, the authors find that the PRESAGE® dosimeter is overall more tissue-equivalent than water-equivalent before the Bragg peak. After the Bragg peak, the differences in the depth doses are found to be due to differences in primary proton energy deposition; PRESAGE® and soft tissue stop protons more rapidly than water. The dose delivered by secondary electrons in the PRESAGE® differs by less than 1% from that in soft tissue and water. The contribution of secondary particles to the total dose is less than 4% for electrons and ≈1% for protons in all the materials of interest. Conclusions: These results demonstrate that the new PRESAGE® formula may be considered both a tissue- and water

  9. Factor Analysis with Ordinal Indicators: A Monte Carlo Study Comparing DWLS and ULS Estimation

    ERIC Educational Resources Information Center

    Forero, Carlos G.; Maydeu-Olivares, Alberto; Gallardo-Pujol, David

    2009-01-01

    Factor analysis models with ordinal indicators are often estimated using a 3-stage procedure where the last stage involves obtaining parameter estimates by least squares from the sample polychoric correlations. A simulation study involving 324 conditions (1,000 replications per condition) was performed to compare the performance of diagonally…

  10. Monte Carlo Algorithms for a Bayesian Analysis of the Cosmic Microwave Background

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Eriksen, H. K.; ODwyer, I. J.; Wandelt, B. D.; Gorski, K.; Knox, L.; Chu, M.

    2006-01-01

    A viewgraph presentation reviewing the Bayesian approach to cosmic microwave background (CMB) analysis, its numerical implementation with Gibbs sampling, a summary of its application to WMAP I, and work in progress on generalizations to polarization, foregrounds, asymmetric beams, and 1/f noise.

  13. Quantum Gibbs ensemble Monte Carlo

    SciTech Connect

    Fantoni, Riccardo; Moroni, Saverio

    2014-09-21

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  14. Quantum Monte Carlo for Molecules.

    DTIC Science & Technology

    1984-11-01

    [OCR fragment of the report cover page; only the title, authors, and keywords are recoverable] Quantum Monte Carlo for Molecules, by William A. Lester, Jr. and Peter J. Reynolds, Lawrence Berkeley Laboratory, University of California, Berkeley. Keywords: quantum Monte Carlo, importance sampling.

  15. Monte Carlo Analysis of Airport Throughput and Traffic Delays Using Self Separation Procedures

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Sturdy, James L.

    2006-01-01

    This paper presents the results of three simulation studies of throughput and delay times of arrival and departure operations performed at non-towered, non-radar airports using self-separation procedures. The studies were conducted as part of the validation process of the Small Aircraft Transportation System Higher Volume Operations (SATS HVO) concept and include an analysis of the predicted airport capacity under different traffic conditions and system constraints at increasing levels of demand. Results show that SATS HVO procedures can dramatically increase capacity at non-towered, non-radar airports and that the concept offers the potential for increasing the capacity of the overall air transportation system.

  16. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  17. Wormhole Hamiltonian Monte Carlo.

    PubMed

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2014-07-31

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.

  18. Adaptive Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Fasnacht, Marc

    We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration Method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state-of-the-art method, and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With the contributions of the solvent determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the Adaptive Integration Method. The results demonstrate a high degree of consistency of the method.
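
    The idea behind histogram-based adaptive biasing can be shown in a few lines: repeatedly penalize the bins the walker visits, so the accumulated bias flattens sampling across the parameter of interest and converges (up to an additive constant) to the free energy profile. This is a generic flat-histogram sketch in the spirit of the method described above, not the author's implementation; the target, bin layout, and update increment are all illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        def log_target(x):
            # Bimodal toy distribution; barrier crossing is rare without bias.
            return -2.0 * (x**2 - 1.0) ** 2

        edges = np.linspace(-2.0, 2.0, 21)   # bins of the parameter of interest
        bias = np.zeros(edges.size - 1)      # histogram-derived biasing potential

        def bin_of(v):
            return min(np.searchsorted(edges, v) - 1, bias.size - 1)

        x = 0.0
        for step in range(200_000):
            prop = x + rng.normal(scale=0.3)
            if edges[0] < prop < edges[-1]:
                # Metropolis test on the biased density pi(x) * exp(-bias(x)).
                d = (log_target(prop) - bias[bin_of(prop)]) \
                    - (log_target(x) - bias[bin_of(x)])
                if np.log(rng.random()) < d:
                    x = prop
            bias[bin_of(x)] += 0.1           # penalize the visited bin

        # F(x)/kT up to an additive constant, since F = -ln pi at flat sampling.
        free_energy = bias.max() - bias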

  19. Monte Carlo analysis of the slightly enriched uranium-D₂O critical experiment LTRIIA (AWBA Development Program)

    SciTech Connect

    Hardy, J. Jr.; Shore, J.M.

    1981-11-01

    The Savannah River Laboratory LTRIIA slightly-enriched uranium-D₂O critical experiment was analyzed with ENDF/B-IV data and the RCP01 Monte Carlo program, which modeled the entire assembly in explicit detail. The integral parameters δ²⁵ and δ²⁸ showed good agreement with experiment. However, the calculated k_eff was 2 to 3% low, due primarily to an overprediction of U-238 capture. This is consistent with results obtained in similar analyses of the H₂O-moderated TRX critical experiments. In comparisons with the VIM and MCNP2 Monte Carlo programs, good agreement was observed for calculated reaction rates in the B²=0 cell.

  20. Monte Carlo simulation for correlation analysis of average glandular dose by breast thickness and glandular ratio in breast tissue.

    PubMed

    Kim, Sang-Tae; Cho, Jung-Keun

    2014-01-01

    Glandular breast tissue is radio-sensitive, so average glandular dose (AGD) measurement is a very important part of the evaluation of an x-ray mammography device. Because AGD is difficult to measure directly, Monte Carlo simulation was used to analyze the correlation between the AGD and breast thickness. As a result, the AGDs calculated through the Monte Carlo simulation were 1.64, 1.41 and 0.88 mGy. The simulated AGDs mainly depend on the glandular ratio of the breast: as the proportion of glandular breast tissue increases, absorption of low-energy photons increases, so the AGD increases too. In addition, the thicker the breast, the higher the AGD. Consequently, this study can be used as basic data for establishing the diagnostic reference levels of mammography.

  1. Effect of statistical uncertainties on Monte Carlo treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, J. S.; Jiang, S. B.; Pawlicki, T.; Xiong, W.; Qin, L. H.; Yang, J.

    2005-03-01

    This paper reviews the effect of statistical uncertainties on radiotherapy treatment planning using Monte Carlo simulations. We discuss issues related to the statistical analysis of Monte Carlo dose calculations for realistic clinical beams using various variance reduction or time saving techniques. We discuss the effect of statistical uncertainties on dose prescription and monitor unit calculation for conventional treatment and intensity-modulated radiotherapy (IMRT) based on Monte Carlo simulations. We show the effect of statistical uncertainties on beamlet dose calculation and plan optimization for IMRT and other advanced treatment techniques such as modulated electron radiotherapy (MERT). We provide practical guidelines for the clinical implementation of Monte Carlo treatment planning and show realistic examples of Monte Carlo based IMRT and MERT plans.
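
    A common way to report the statistical uncertainty discussed above is the batch method: split the simulated histories into independent batches and quote the standard error of the batch means. A minimal sketch, with the per-history tallies replaced by synthetic random deposits rather than a real dose engine:

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic per-history energy deposits in one voxel; a real Monte
        # Carlo dose engine would tally these during particle transport.
        deposits = rng.exponential(scale=1.0, size=1_000_000)

        n_batches = 10
        batch_means = deposits.reshape(n_batches, -1).mean(axis=1)
        dose = batch_means.mean()
        # Standard error of the mean over statistically independent batches.
        sigma = batch_means.std(ddof=1) / np.sqrt(n_batches)
        print(f"dose = {dose:.4f} +/- {sigma:.4f} "
              f"({100 * sigma / dose:.2f}% relative, 1 sigma)")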

  2. Acoustic effects analysis utilizing speckle pattern with fixed-particle Monte Carlo

    NASA Astrophysics Data System (ADS)

    Vakili, Ali; Hollmann, Joseph A.; Holt, R. Glynn; DiMarzio, Charles A.

    2016-03-01

    Optical imaging in a turbid medium is limited by the multiple scattering a photon undergoes while traveling through the medium. Therefore, optical imaging is unable to provide high resolution information deep in the medium. In the case of soft tissue, acoustic waves, unlike light, can travel through the medium with negligible scattering. However, acoustic waves cannot provide medically relevant contrast as good as that of light. Hybrid solutions have been applied to use the benefits of both imaging methods. A focused acoustic wave generates a force inside an acoustically absorbing medium known as the acoustic radiation force (ARF). ARF induces particle displacement within the medium. The amount of displacement is a function of the mechanical properties of the medium and the applied force. To monitor the displacement induced by the ARF, speckle pattern analysis can be used. The speckle pattern is the result of interfering optical waves with different phases. As light travels through the medium, it undergoes several scattering events, generating different scattering paths which depend on the locations of the particles. Light waves that travel along these paths have different phases (different optical path lengths). ARF displaces scatterers within the acoustic focal volume, and so changes the optical path lengths. In addition, the temperature rise due to conversion of absorbed acoustic energy to heat changes the index of refraction and therefore the optical path lengths of the scattering paths. The result is a change in the speckle pattern. Results suggest that the average change in the speckle pattern measures the displacement of particles and the temperature rise within the acoustic focal area, and hence can provide the mechanical and thermal properties of the medium.

  3. Monte Carlo algorithms for Brownian phylogenetic models.

    PubMed

    Horvilleur, Benjamin; Lartillot, Nicolas

    2014-11-01

    Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
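
    The fine-grained within-branch sampling described above amounts, in its simplest form, to drawing a discretized Brownian bridge between the values at the two ends of a branch. A minimal sketch of that ingredient (sequential conditional sampling; the function name and parametrization are illustrative, and the real sampler updates such paths inside an MCMC scheme):

        import numpy as np

        rng = np.random.default_rng(5)

        def brownian_bridge(x_start, x_end, t_total, n_interior, sigma=1.0):
            # Sequentially sample the interior points of a Brownian path
            # conditioned on both endpoints (a Brownian bridge).
            ts = np.linspace(0.0, t_total, n_interior + 2)
            path = np.empty_like(ts)
            path[0], path[-1] = x_start, x_end
            for i in range(1, n_interior + 1):
                dt_left = ts[i] - ts[i - 1]
                dt_right = ts[-1] - ts[i]
                w = dt_left / (dt_left + dt_right)
                # Conditional Gaussian given the previous point and the endpoint.
                mean = path[i - 1] + w * (x_end - path[i - 1])
                var = sigma**2 * dt_left * dt_right / (dt_left + dt_right)
                path[i] = rng.normal(mean, np.sqrt(var))
            return ts, path

        # E.g., the log substitution rate along one branch of length 1.0.
        ts, log_rate = brownian_bridge(0.0, 0.5, t_total=1.0, n_interior=100)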

  4. Monte Carlo algorithms for Brownian phylogenetic models

    PubMed Central

    Horvilleur, Benjamin; Lartillot, Nicolas

    2014-01-01

    Motivation: Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. Results: A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. Availability: The program is freely available at www.phylobayes.org Contact: nicolas.lartillot@univ-lyon1.fr PMID:25053744

  5. Isotropic Monte Carlo Grain Growth

    SciTech Connect

    Mason, J.

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.

  6. SU-E-T-761: TOMOMC, A Monte Carlo-Based Planning Verification Tool for Helical Tomotherapy

    SciTech Connect

    Chibani, O; Ma, C

    2015-06-15

    Purpose: Present a new Monte Carlo code (TOMOMC) to calculate 3D dose distributions for patients undergoing helical tomotherapy treatments. TOMOMC performs CT-based dose calculations using the actual dynamic variables of the machine (couch motion, gantry rotation, and MLC sequences). Methods: TOMOMC is based on the GEPTS (Gamma Electron and Positron Transport System) general-purpose Monte Carlo system (Chibani and Li, Med. Phys. 29, 2002, 835). First, beam models for the Hi-Art tomotherapy machine were developed for the different beam widths (1, 2.5 and 5 cm). The beam model accounts for the exact geometry and composition of the different components of the linac head (target, primary collimator, jaws and MLCs). The beam models were benchmarked by comparing calculated PDDs and lateral/transverse dose profiles with ionization chamber measurements in water. See figures 1–3. The MLC model was tuned in such a way that the tongue-and-groove effect and inter-leaf and intra-leaf transmission are modeled correctly. See figure 4. Results: By simulating the exact patient anatomy and the actual treatment delivery conditions (couch motion, gantry rotation and MLC sinogram), TOMOMC is able to calculate the 3D patient dose distribution, which is in principle more accurate than the one from the treatment planning system (TPS) since it relies on the Monte Carlo method (gold standard). Dose volume parameters based on the Monte Carlo dose distribution can also be compared to those produced by the TPS. Attached figures show isodose lines for a H&N patient calculated by TOMOMC (transverse and sagittal views). Analysis of differences between TOMOMC and TPS is ongoing work for different anatomic sites. Conclusion: A new Monte Carlo code (TOMOMC) was developed for tomotherapy patient-specific QA. The next step in this project is implementing GPU computing to speed up the Monte Carlo simulation and make Monte Carlo-based treatment verification a practical solution.

  7. SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON

    SciTech Connect

    Goluoglu, Sedat; Bekar, Kursat B; Wiarda, Dorothea

    2012-01-01

    The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.

  8. Behavioral Analysis of Visitors to a Medical Institution’s Website Using Markov Chain Monte Carlo Methods

    PubMed Central

    Tani, Yuji

    2016-01-01

    Background Consistent with the "attention, interest, desire, memory, action" (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet to select information for their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs for visitor search behavior. At this time, research applying the patient searching behavior model to medical institution website visitors is lacking. Objective We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. Methods We used the website access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 30, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explanatory variable, we built a binomial probit model that allows inspection of the contents of each purpose variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformative prior distribution for this model and determined the visit probability classified by keyword for each category. Results In the case of the keyword "clinic name," the visit probability to the website, repeated visit to the website, and contents page for medical examination was positive. In the case of the

  9. CMB quadrupole depression produced by early fast-roll inflation: Monte Carlo Markov chains analysis of WMAP and SDSS data

    SciTech Connect

    Destri, C.; Vega, H. J. de; Sanchez, N. G.

    2008-07-15

    Generically, the classical evolution of the inflaton has a brief fast-roll stage that precedes the slow-roll regime. The fast-roll stage leads to a purely attractive potential in the wave equations of curvature and tensor perturbations (while the potential is purely repulsive in the slow-roll stage). This attractive potential leads to a depression of the CMB quadrupole moment for the curvature and B-mode angular power spectra. A single new parameter emerges in this way in the early universe model: the comoving wave number k_1, the characteristic scale of this attractive potential. This mode k_1 happens to exit the horizon precisely at the transition from the fast-roll to the slow-roll stage. The fast-roll stage dynamically modifies the initial power spectrum by a transfer function D(k). We compute D(k) by solving the inflaton evolution equations. D(k) effectively suppresses the primordial power for k < k_1. We perform a Monte Carlo Markov chain analysis of the WMAP and SDSS data including the fast-roll stage and find the value k_1 = 0.266 Gpc⁻¹. The quadrupole mode k_Q = 0.242 Gpc⁻¹ exits the horizon earlier than k_1, about one-tenth of an e-fold before the end of fast roll. We compare the fast-roll fit with a fit without fast roll but including a sharp lower cutoff on the primordial power. Fast roll provides a slightly better fit than a sharp cutoff for the temperature-temperature, temperature-E modes, and E modes-E modes. Moreover, our fits provide nonzero lower bounds for r, while the values of the other cosmological parameters are essentially those of the pure ΛCDM model. We display the real-space two-point C^TT(θ) correlator. The fact that k_Q exits the horizon before the slow-roll stage implies an upper bound on the total number of e-folds N_tot during inflation. Combining this with estimates during the

  10. Markov chain Monte Carlo analysis for the selection of a cell-killing model under high-dose-rate irradiation.

    PubMed

    Matsuya, Yusuke; Kimura, Takaaki; Date, Hiroyuki

    2017-08-08

    High-dose-rate irradiation with 6 MV linac x rays is a widespread means to treat cancer tissue in radiotherapy. The treatment planning relies on a mathematical description of the surviving fraction (SF), such as the linear-quadratic model (LQM) formula. However, even in the case of high-dose-rate treatment, the repair kinetics of DNA damage during the dose-delivery time plays a role in predicting the dose-SF relation. This may call the SF model selection into question when considering dose-delivery time or dose-rate effects (DREs) in radiotherapy and in vitro cell experiments. In this study, we demonstrate the importance of dose-delivery time at the high-dose-rate irradiations used in radiotherapy by means of Bayesian estimation. To evaluate the model selection for SF, three types of models, the LQM and two microdosimetric-kinetic models with and without DREs (MKMDR and MKM), were applied to describe in vitro SF data (our work and references). The parameters in each model were evaluated by a Markov chain Monte Carlo (MCMC) simulation. The MCMC analysis shows that the cell survival curve by the MKMDR fits the experimental data the best in terms of the deviance information criterion (DIC). In a fractionated regimen with 30 fractions to a total dose of 60 Gy, the final cell survival estimated by the MKMDR was higher than that by the LQM. This suggests that additional fractions are required to attain a total dose equivalent in effect to the conventional regimen based on the LQM in fractionated radiotherapy. Damage repair during the dose-delivery time plays a key role in precisely estimating cell survival even at a high dose rate in radiotherapy. Consequently, it was suggested that a cell-killing model without a repair factor during a short dose-delivery time may overestimate actual cell killing in fractionated radiotherapy. © 2017 American Association of Physicists in Medicine.
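
    For reference, the LQM formula mentioned above is easy to state and compute: SF(D) = exp(-(αD + βD²)), applied per fraction in a fractionated regimen. A minimal sketch with illustrative α and β (not the values fitted in the paper, and without the repair/dose-rate factor of the MKMDR):

        import numpy as np

        # Linear-quadratic model: SF(D) = exp(-(alpha*D + beta*D^2)) per fraction.
        alpha, beta = 0.15, 0.05      # Gy^-1, Gy^-2; illustrative values only

        def survival(dose_per_fraction, n_fractions=1):
            d = dose_per_fraction
            return np.exp(-n_fractions * (alpha * d + beta * d**2))

        # The conventional regimen quoted above: 30 fractions to 60 Gy total.
        print(f"SF after 30 x 2 Gy: {survival(2.0, n_fractions=30):.3e}")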

  11. Neutron analysis of spent fuel storage installation using parallel computing and advance discrete ordinates and Monte Carlo techniques.

    PubMed

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, > 35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system, and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore, requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinate, PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous energy cross sections, and the fact that the CASK library is based on the old ENDF

  12. Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield

    NASA Astrophysics Data System (ADS)

    Cramer, S. N.; Roussin, R. W.

    1981-11-01

    A Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield is presented. The energy range covered in the analysis is 15-2 MeV for neutron source energies. The multigroup MORSE code was used with the VITAMIN C 171-36 neutron-gamma-ray cross-section data set. Both neutron and gamma-ray count rates and unfolded energy spectra are presented and compared, with good general agreement, with experimental results.

  13. RunMC—an object-oriented analysis framework for Monte Carlo simulation of high-energy particle collisions

    NASA Astrophysics Data System (ADS)

    Chekanov, S.

    2005-12-01

    RunMC is an object-oriented framework aimed at generating and analysing high-energy collisions of elementary particles using Monte Carlo simulations. This package, being based on C++ adopted by CERN as the main programming language for the LHC experiments, provides a common interface to different Monte Carlo models using modern physics libraries. Physics calculations (projects) can easily be loaded and saved as external modules. This simplifies the development of complicated calculations for high-energy physics in large collaborations. This desktop program is open-source licensed and is available on the LINUX and Windows/Cygwin platforms. Program summary: Title of program: RunMC version 3.3. Catalogue identifier: ADWH. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWH. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: x86, SGI, Sun Microsystems. Operating system: Linux, Windows/Cygwin. Memory required: 32 Mbytes. No. of bits in a word: 32. No. of processors used: 1. Parallelized?: No. No. of lines in distributed program, including test data, etc.: ≈1000000. No. of bytes in distributed program, including test data, etc.: 22 464 383. Distribution format: tar.gz. Typical running time: 0.004-0.01 s per event. Programming language used: C/C++, Fortran, Java, bash. Program requirements: g77, g++, make, X11, Java JRE 1.4 and higher. Nature of the physical problem: Simulation of high-energy collisions of elementary particles. Method of solution: Monte Carlo method. External libraries: CLHEP, ROOT, CERNLIB with PDFLIB. References: http://hepforge.cedar.ac.uk/runmc/, http://www.hep.anl.gov/chakanau/runmc/

  14. Implementation and performance analysis of bridging Monte Carlo moves for off-lattice single chain polymers in globular states

    NASA Astrophysics Data System (ADS)

    Reith, Daniel; Virnau, Peter

    2010-04-01

    Bridging algorithms are global Monte Carlo moves which allow for an efficient sampling of single polymer chains. In this manuscript we discuss the adaptation of three bridging algorithms from lattice to continuum models, and give details on the corrections to the acceptance rules which are required to fulfill detailed balance. For the first time we are able to compare the efficiency of the moves by analyzing the occurrence of knots in globular states. For a flexible homopolymer chain of length N=1000, independent configurations can be generated up to two orders of magnitude faster than with slithering snake moves.
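
    The key ingredient when adapting such moves is the detailed-balance (Metropolis-Hastings) correction for an asymmetric proposal. The sketch below shows that bookkeeping on a toy one-dimensional target; the target and proposal are assumptions for illustration, not the paper's polymer bridging moves.

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA = 0.5                         # width of the multiplicative log-normal proposal, assumed

def log_pi(x):                      # toy target on x > 0: log-normal(0, 1), unnormalised
    return -0.5 * np.log(x) ** 2 - np.log(x)

def log_q(y, x):                    # log density of proposing y from x (log-normal step)
    z = np.log(y / x)
    return -0.5 * (z / SIGMA) ** 2 - np.log(y)   # -log(y) is the Jacobian term

def mh_step(x):
    y = x * np.exp(SIGMA * rng.standard_normal())        # asymmetric proposal
    # Detailed balance: accept with min(1, pi(y) q(x|y) / (pi(x) q(y|x)))
    log_alpha = (log_pi(y) - log_pi(x)) + (log_q(x, y) - log_q(y, x))
    return y if np.log(rng.random()) < log_alpha else x

x, log_samples = 1.0, []
for _ in range(50_000):
    x = mh_step(x)
    log_samples.append(np.log(x))
print("mean of log(x):", np.mean(log_samples), "(target: 0.0)")
```

    Omitting the Hastings ratio q(x|y)/q(y|x) here would bias the sampled distribution, which is exactly the kind of correction the continuum bridging moves require.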

  15. Pharmacokinetics, milk penetration and PK/PD analysis by Monte Carlo simulation of marbofloxacin, after intravenous and intramuscular administration to lactating goats.

    PubMed

    Lorenzutti, A M; Litterio, N J; Himelfarb, M A; Zarazaga, M D P; San Andrés, M I; De Lucas, J J

    2017-05-04

    The main objectives of this study were (i) to evaluate the serum pharmacokinetic behaviour and milk penetration of marbofloxacin (MFX; 5 mg/kg) after intravenous (IV) and intramuscular (IM) administration in lactating goats and to simulate a multidose regimen under steady-state conditions, (ii) to determine the minimum inhibitory concentration (MIC) and mutant prevention concentration (MPC) of coagulase-negative staphylococci (CNS) isolated from caprine mastitis in Córdoba, Argentina, and (iii) to perform a PK/PD analysis by Monte Carlo simulation from steady-state pharmacokinetic parameters of MFX by the IV and IM routes to evaluate the efficacy and the risk of emergence of resistance. The study was carried out with six healthy, female, adult Anglo Nubian lactating goats. Marbofloxacin was administered at 5 mg/kg bw by the IV and IM routes. Serum and milk concentrations of MFX were determined by HPLC-UV. MICs and MPCs were determined for 106 regional strains of CNS isolated from caprine mastitis in herds from Córdoba, Argentina. MIC90 and MPC90 were 0.4 and 6.4 μg/ml, respectively. The MIC- and MPC-based PK/PD analysis by Monte Carlo simulation indicates that IV and IM administration of MFX in lactating goats may not be adequate to recommend as an empirical therapy against CNS, because the most exigent endpoints were not reached. Moreover, this dose regimen could increase the probability of selecting mutants, resulting in the emergence of resistance. Based on the results of the Monte Carlo simulation, the optimal dose of MFX to achieve adequate antimicrobial efficacy should be 10 mg/kg, but it is important to take into account that fluoroquinolones are substrates of efflux pumps, which may render the assumption of linear pharmacokinetics at high doses of MFX incorrect. © 2017 John Wiley & Sons Ltd.
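
    A minimal sketch of this kind of PK/PD Monte Carlo simulation: sample between-subject pharmacokinetic variability, compute Cmax/MIC against the reported MIC90, and estimate the probability of target attainment (PTA). The volume-of-distribution distribution and the Cmax/MIC ≥ 10 target are illustrative assumptions; only the 5 mg/kg dose and MIC90 = 0.4 μg/ml come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

MIC90 = 0.4                                              # µg/ml, from the abstract
V = rng.lognormal(mean=np.log(1.5), sigma=0.3, size=N)   # L/kg, assumed variability

for dose in (5.0, 10.0):                                 # mg/kg
    cmax = dose / V                                      # one-compartment IV bolus Cmax
    pta = np.mean(cmax / MIC90 >= 10.0)                  # assumed Cmax/MIC >= 10 target
    print(f"dose {dose:4.1f} mg/kg: PTA = {pta:.1%}")
```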

  16. A quantitative three-dimensional dose attenuation analysis around Fletcher-Suit-Delclos due to stainless steel tube for high-dose-rate brachytherapy by Monte Carlo calculations.

    PubMed

    Parsai, E Ishmael; Zhang, Zhengdong; Feldmeier, John J

    2009-01-01

    Commercially available brachytherapy treatment-planning systems today usually neglect the attenuation effect of the stainless steel (SS) tube when a Fletcher-Suit-Delclos (FSD) applicator is used in the treatment of cervical and endometrial cancers. This can lead to potential inaccuracies in computed dwell times and dose distributions. A more accurate analysis quantifying the level of attenuation for a high-dose-rate (HDR) iridium-192 ((192)Ir) source is presented through Monte Carlo simulation verified by measurement. In this investigation, the general Monte Carlo N-Particle (MCNP) transport code was used to construct a typical FSD geometry through simulation and to compare the doses delivered to point A of the Manchester System with and without the SS tubing. A quantitative assessment of inaccuracies in delivered dose vs. computed dose is presented. In addition, this investigation was expanded to examine the attenuation-corrected radial and anisotropy dose functions in a form parallel to the updated AAPM Task Group No. 43 Report (AAPM TG-43) formalism. This delineates quantitatively the inaccuracies in dose distributions in three-dimensional space. The changes in dose deposition and distribution caused by the increased attenuation in the presence of SS are quantified using MCNP Monte Carlo simulations in coupled photon/electron transport. The source geometry was that of the VariSource wire model VS2000. The FSD was that of the Varian medical system. In this model, the bending angles of the tandem and colpostats are 15 degrees and 120 degrees, respectively. We assigned 10 dwell positions to the tandem and 4 dwell positions to the right and left colpostats (ovoids) to represent a typical treatment case. The typical dose delivered to point A was determined according to the Manchester dosimetry system. Based on our computations, the reduction of dose to point A was shown to be at least 3%, so the effect of SS FSD systems on patient dose is of concern.

  17. Monte Carlo analysis of the degradation of the spectrum produced by an X-ray tube in conventional radiography

    NASA Astrophysics Data System (ADS)

    Jerez-Sainz, I.; Pérez-Rozos, A.; Lallena, A. M.

    2007-09-01

    Medical imaging detectors used in diagnostic radiology are very sensitive to the spectra of the X-ray beams used. In this work, we use Monte Carlo simulation to analyse the degradation of various X-ray spectra at different stages of their travel from the X-ray tube focus to the detector. Special attention is paid to the antiscatter grid that is normally used in diagnostic radiology. For the present study, we have considered various measured spectra corresponding to accelerating voltages of 60, 80, 90 and 125 kV. We have assumed that the X-ray beams are parallel beams going successively through an aluminium filter, the air gap between the focus and the patient, a patient-equivalent water phantom and a parallel cross antiscatter grid. The Monte Carlo simulation code PENELOPE has been used to determine the X-ray spectrum at three planes: the patient, the grid, and the detector entrance. Additional simulations with monoenergetic beams of 30, 40, 55 and 100 keV have been carried out for comparison. We have investigated in detail the dependence of the attenuation of the primary beam on the initial energy, the percentage of the initial X-rays reaching the grid and the ratio of scattered to primary photons reaching the grid. The role played by the grid has been studied in depth. In particular, we have analysed the modification of the incident spectrum that it produces and its efficiency in removing the scattered photons.

  18. Application of the Monte Carlo method to the analysis of doses and shielding around an X-ray fluorescence equipment

    NASA Astrophysics Data System (ADS)

    Ródenas, José; Juste, Belén; Gallardo, Sergio; Querol, Andrea

    2017-09-01

    An X-ray fluorescence equipment is used for practical exercises in the Nuclear Engineering laboratory of the Polytechnic University of Valencia (Spain). This equipment includes a compact X-ray tube, ECLIPSE-III, and a Si-PIN XR-100T detector. The voltage (30 kV) and current (100 μA) of the tube are low enough that the expected doses around the tube do not represent a risk for students working in the laboratory. Nevertheless, doses and shielding should be evaluated to satisfy the ALARA criterion. The Monte Carlo method has been applied to evaluate the dose rate around the installation, which is fitted with shielding consisting of a methacrylate box. Calculated dose rates are compared with experimental measurements to validate the model. The results obtained show that doses are below allowable limits. Hence, no extra shielding is required for the X-ray beam. A previous Monte Carlo model was also developed to obtain the tube spectrum and validated by comparison with manufacturer data.

  19. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs is provided along with example data plots.

  20. Monte Carlo simulation for the transport beamline

    NASA Astrophysics Data System (ADS)

    Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.

    2013-07-01

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.

  1. Accounting for both random errors and systematic errors in uncertainty propagation analysis of computer models involving experimental measurements with Monte Carlo methods.

    PubMed

    Vasquez, Victor R; Whiting, Wallace B

    2005-12-01

    A Monte Carlo method is presented to study the effect of systematic and random errors on computer models that rely on experimental data. It is a common assumption for such models (linear and nonlinear regression, and nonregression computer models) involving experimental measurements that the error sources are mainly random and independent, with no constant background errors (systematic errors). However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for the output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions of the random and systematic errors. The main objectives are to detect which error source has stochastic dominance over the uncertainty propagation and to quantify the combined effect on the output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has the more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in uncertainty analysis of models dependent on experimental measurements such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
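
    A toy version of the idea: draw a per-trial systematic (calibration) offset in addition to random scatter, propagate both through a model, and compare output quantiles with and without the systematic term. The model and error distributions below are placeholders, not those of the paper's case studies.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50_000

def model(prop):                                  # toy computer model of a measured property
    return 10.0 * prop ** 1.5

true_prop = 2.0
random_err = rng.normal(0.0, 0.02, N)             # random scatter, per measurement
systematic_err = rng.uniform(-0.05, 0.05, N)      # calibration bias, per data source

out_random = model(true_prop + random_err)
out_both = model(true_prop + random_err + systematic_err)

# Compare the output cumulative distributions via a few quantiles
for quant in (0.05, 0.5, 0.95):
    print(f"q={quant:.2f}:  random-only {np.quantile(out_random, quant):.3f}"
          f"   random+systematic {np.quantile(out_both, quant):.3f}")
```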

  2. Predictive uncertainty analysis of a highly heterogeneous field-scale groundwater model using null-space Monte Carlo

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2011-12-01

    Quantification of prediction uncertainty resulting from estimated parameters is critical to provide accurate predictive models for field-scale groundwater flow and transport problems. We examine and compare two approaches to defining predictive uncertainty, where both approaches utilize pilot points to parameterize spatially heterogeneous fields. The first approach is the independent calibration of multiple initial "seed" fields created through geostatistical simulation and conditioned to observation data, resulting in an ensemble of calibrated property fields that defines uncertainty in the calibrated parameters. The second approach is the null-space Monte Carlo (NSMC) method, which employs a decomposition of the Jacobian matrix from a single calibration to define a minimum number of linear combinations of parameters that account for the majority of the sensitivity of the overall calibration to the observed data. Random vectors are applied to the remaining linear combinations of parameters, the null space, to create an ensemble of fields, each of which remains calibrated to the data. We compare these two approaches using a highly parameterized groundwater model of the Culebra dolomite in southeastern New Mexico. Observation data include two decades of steady-state head measurements and pumping test results. The predictive performance measure is advective travel time from a point to a prescribed boundary. Calibrated parameters at a set of pilot points include transmissivity, horizontal hydraulic anisotropy, storativity, and a section of recharge (>1200 parameters in total). First, we calibrate 200 random seed fields generated through geostatistical simulation conditioned to observation data. The 11 fields that contain the best and worst scenarios in terms of calibration and travel-time analysis among the best 100 calibrated results provide a basis for the NSMC method. The NSMC method is used to generate 200 calibration-constrained parameter fields.
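
    The core of the NSMC step can be sketched in a few lines: take the SVD of the Jacobian, keep the directions with (near-)zero singular values as the null space, and perturb the calibrated parameters only along those directions, leaving the simulated observations unchanged to first order. The toy Jacobian below is random; the real method works with the calibration Jacobian of the groundwater model.

```python
import numpy as np

rng = np.random.default_rng(4)

n_obs, n_par = 20, 50
J = rng.normal(size=(n_obs, n_par))      # stand-in for the calibration Jacobian d(obs)/d(params)
p_cal = rng.normal(size=n_par)           # stand-in for the calibrated parameter vector

U, s, Vt = np.linalg.svd(J, full_matrices=True)
rank = int(np.sum(s > 1e-8 * s[0]))      # numerical rank of the Jacobian
V_null = Vt[rank:].T                     # basis of the parameter null space

def nsmc_realization(scale=1.0):
    # Add a random combination of null-space directions to the calibrated field.
    return p_cal + V_null @ (scale * rng.normal(size=V_null.shape[1]))

p_new = nsmc_realization()
# First-order calibration is preserved: J @ (p_new - p_cal) ~ 0
print("max change in simulated observations:", np.abs(J @ (p_new - p_cal)).max())
```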

  3. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (~200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  4. Quantum Monte Carlo for Molecules.

    DTIC Science & Technology

    1986-12-01

    [OCR-garbled DTIC report summary page; recoverable details: "Quantum Monte Carlo for Molecules", William A. Lester et al., Lawrence Berkeley Laboratory; keywords: quantum Monte Carlo, importance functions, Schrödinger equation.]

  5. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E. ); Jarrell, M. . Dept. of Physics)

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  6. Probabilistic uncertainty analysis based on Monte Carlo simulations of co-combustion of hazelnut hull and coal blends: Data-driven modeling and response surface optimization.

    PubMed

    Buyukada, Musa

    2017-02-01

    The aim of the present study is to investigate the thermogravimetric behaviour of the co-combustion of hazelnut hull (HH) and coal blends using three approaches: (1) multi non-linear regression (MNLR) modeling based on a Box-Behnken design (BBD), (2) optimization based on response surface methodology (RSM), and (3) probabilistic uncertainty analysis based on Monte Carlo simulation, as a function of blend ratio, heating rate, and temperature. The response variable was predicted by the best-fit MNLR model with a predicted regression coefficient (R(2)pred) of 99.5%. A blend ratio of 90/10 (HH to coal, wt%), a temperature of 405°C, and a heating rate of 44°C min(-1) were determined as the RSM-optimized conditions, with a mass loss of 87.4%. Validation experiments with three replications were performed to justify the predicted mass-loss percentage, and a mass loss of 87.5% ± 0.2 was obtained under the RSM-optimized conditions. The probabilistic uncertainty analysis was performed using Monte Carlo simulations.

  7. Analysis of dense-medium light scattering with applications to corneal tissue: experiments and Monte Carlo simulations.

    PubMed

    Kim, K B; Shanyfelt, L M; Hahn, D W

    2006-01-01

    Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantifying dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, together with animal studies, is necessary.

  8. Analysis of shielding materials in a Compton spectrometer applied to x-ray tube quality control using Monte Carlo simulation.

    PubMed

    Gallardo, S; Ródenas, J; Verdú, G; Villaescusa, J I

    2005-01-01

    A realistic characterisation of the primary beam is very important for the quality control of X-ray tubes. The most accurate technique to assess the actual photon spectrum is X-ray spectrometry. Some difficulties arising in the spectrum determination can be avoided using a Compton spectrometer. Simulation models are useful tools to assess the effect of operational parameters, such as the collimation of the primary beam, the relative position of focus and detector, and the influence of shielding materials. A simulation model based on the Monte Carlo method has been developed using the MCNP code in order to reproduce a commercial Compton spectrometer. In this work, the model is applied to analyse the influence on measurements of the shielding materials present in the spectrometer.

  9. A Monte Carlo based lookup table for spectrum analysis of turbid media in the reflectance probe regime

    SciTech Connect

    Xiang Wen; Xiewei Zhong; Tingting Yu; Dan Zhu

    2014-07-31

    Fibre-optic diffuse reflectance spectroscopy offers a method for characterising phantoms of biotissue with specified optical properties. For a commercial reflectance probe (six source fibres surrounding a central collection fibre with an inter-fibre spacing of 480 μm; R400-7, Ocean Optics, USA) we have constructed a Monte Carlo based lookup table to create a function called getR(μ_a, μ'_s), where μ_a is the absorption coefficient and μ'_s is the reduced scattering coefficient. Experimental measurements of reflectance from homogeneous calibrated phantoms with given optical properties are compared with the predicted reflectance from the lookup table. The deviation between experiment and prediction is on average 12.1%. (laser biophotonics)
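
    The lookup-table idea translates directly into code: precompute reflectance on a (μ_a, μ'_s) grid, then interpolate. In the sketch below, a toy closed-form expression stands in for the per-grid-point Monte Carlo runs, so the numbers are placeholders rather than the probe-specific table of the paper.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

mu_a_grid = np.linspace(0.01, 1.0, 25)   # absorption coefficient grid (1/mm), assumed
mu_s_grid = np.linspace(0.5, 5.0, 25)    # reduced scattering grid (1/mm), assumed

def mc_reflectance(mu_a, mu_s):
    # Placeholder for a full Monte Carlo run at one grid point.
    return mu_s / (mu_s + 10.0 * mu_a)   # monotone toy dependence, not real physics

table = np.array([[mc_reflectance(a, s) for s in mu_s_grid] for a in mu_a_grid])
_interp = RegularGridInterpolator((mu_a_grid, mu_s_grid), table)

def getR(mu_a, mu_s):
    """Interpolated reflectance for optical properties inside the grid."""
    return float(_interp((mu_a, mu_s)))

print(getR(0.1, 2.0))
```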

  10. Extended defects in the Potts-percolation model of a solid: renormalization group and Monte Carlo analysis.

    PubMed

    Diep, H T; Kaufman, Miron

    2009-09-01

    We extend the model of a 2D solid to include a line of defects. Neighboring atoms on the defect line are connected by springs of different strength and different cohesive energy with respect to the rest of the system. Using the Migdal-Kadanoff renormalization group we show that the elastic energy is an irrelevant field at the bulk critical point. For zero elastic energy this model reduces to the Potts model. By using Monte Carlo simulations of the three- and four-state Potts model on a square lattice with a line of defects, we confirm the renormalization-group prediction that for a defect interaction larger than the bulk interaction the order parameter of the defect line changes discontinuously while the defect energy varies continuously as a function of temperature at the bulk critical temperature.
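
    For reference, a bare-bones Metropolis simulation of the bulk q-state Potts model on a square lattice (without the defect line studied in the paper) looks like the following sketch; lattice size, temperature, and coupling are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
q, L, J, T = 3, 32, 1.0, 0.8          # states, lattice size, coupling, temperature (assumed)
spins = rng.integers(q, size=(L, L))

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def sweep():
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        new = rng.integers(q)
        # Potts energy: -J for each neighbor pair in the same state
        dE = -J * sum(int(new == spins[n]) - int(spins[i, j] == spins[n])
                      for n in neighbors(i, j))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new

for _ in range(200):                   # crude equilibration
    sweep()
m = np.bincount(spins.ravel(), minlength=q).max() / L**2
print("largest-state fraction (order-parameter proxy):", m)
```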

  11. Prompt gamma ray analysis of Portland cement sample using keV neutrons with a Maxwellian energy spectrum—a Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Naqvi, A. A.

    2003-08-01

    Monte Carlo calculations have been carried out to determine the prompt gamma ray yield from a Portland cement sample using keV neutrons from a 3H(p,n) reaction with a Maxwellian energy distribution with kT=52 keV. This work is part of wider Monte Carlo studies being conducted at the King Fahd University of Petroleum and Minerals (KFUPM) in search of a more efficient neutron source for its D(d,n) reaction based (2.8 MeV neutrons) Prompt Gamma Ray Neutron Activation Analysis (PGNAA) facility. In this study a 3H(p,n) reaction based PGNAA setup was simulated. For comparison purposes, the diameter of the cylindrical external moderator of the 3H(p,n) reaction based setup was assumed to be similar to the one used in the KFUPM PGNAA setup. The results of this study revealed that the optimum geometry of the 3H(p,n) reaction based setup is different from that of the KFUPM PGNAA facility. The performance of the 3H(p,n) reaction based setup is also better than that of the 2.8 MeV neutron based KFUPM facility, and its prompt gamma ray yield is about 60-70% higher than that of the 2.8 MeV neutron based facility. This study has provided a theoretical basis for an experimental test of a 3H(p,n) reaction based setup.
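
    Sampling source energies from a Maxwellian spectrum with kT = 52 keV is straightforward, since p(E) ∝ √E·exp(−E/kT) is a Gamma(3/2, kT) density; a sketch follows (only kT is taken from the abstract).

```python
import numpy as np

rng = np.random.default_rng(6)
kT = 52.0                                   # keV, from the abstract

# p(E) ~ sqrt(E) * exp(-E / kT) is a Gamma(shape=3/2, scale=kT) density
E = rng.gamma(shape=1.5, scale=kT, size=100_000)

print(f"mean energy: {E.mean():.1f} keV (theory: 1.5 kT = {1.5 * kT:.1f} keV)")
```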

  12. Facing Challenges for Monte Carlo Analysis of Full PWR Cores : Towards Optimal Detail Level for Coupled Neutronics and Proper Diffusion Data for Nodal Kinetics

    NASA Astrophysics Data System (ADS)

    Nuttin, A.; Capellan, N.; David, S.; Doligez, X.; El Mhari, C.; Méplan, O.

    2014-06-01

    Safety analysis of innovative reactor designs requires three-dimensional modeling to ensure a sufficiently realistic description, starting from the steady state. Current Monte Carlo (MC) neutron transport codes are suitable candidates to simulate large complex geometries, possibly with innovative fuel. But if local values such as power densities over small regions are needed, reliable results become more difficult to obtain within an acceptable computation time. In this context, the NEA has proposed a performance test of full PWR core calculations based on Monte Carlo neutron transport, which we have used to define an optimal detail level for convergence of steady-state coupled neutronics. Coupling between MCNP for neutronics and the subchannel code COBRA for thermal-hydraulics has been performed using the C++ tool MURE, developed over about ten years at LPSC and IPNO. In parallel with this study, and within the same MURE framework, a simplified code of nodal kinetics based on two-group and few-point diffusion equations has been developed and validated on a typical CANDU LOCA. Methods for the computation of the necessary diffusion data have been defined and applied to NU (Nat. U) and Th fuel CANDU after assembly evolutions by MURE. The simplicity of the CANDU LOCA model has made possible a comparison of these two fuel behaviours during such a transient.

  13. Tracking interacting subcellular structures by sequential Monte Carlo method.

    PubMed

    Wen, Quan; Gao, Jean

    2007-01-01

    With the wide application of green fluorescent protein (GFP) in the study of live cells, which has led to a better understanding of biochemical events at the subcellular level, there is a surging need for computer-aided analysis of the huge amount of image sequence data acquired by advanced microscopy devices. One such task is the motility analysis of multiple subcellular structures. In this paper, an algorithm using the sequential Monte Carlo (SMC) method for multiple interacting object tracking is proposed. We use a joint state to represent all the objects together, and model the interaction between objects in the 2D plane by augmenting an extra dimension and evaluating their overlapping relationship in 3D space. A Markov chain Monte Carlo (MCMC) method with a novel height-swap move is applied to sample the joint state distribution efficiently. To facilitate distinguishing between different objects, a new observation method is also proposed, matching the size and intensity profile of each object. The experimental results show that our method is promising.
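
    The SMC backbone of such a tracker is the bootstrap particle filter. The sketch below tracks a single 1-D random-walk target from noisy observations (predict, weight, resample); the joint-state and interaction modelling of the paper are omitted, and all noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

N = 500                                    # number of particles
steps, q, r = 50, 0.1, 0.5                 # process / observation noise (assumed)

truth, obs = [0.0], []
for _ in range(steps):                     # simulate a random-walk target
    truth.append(truth[-1] + rng.normal(0, q))
    obs.append(truth[-1] + rng.normal(0, r))

particles = rng.normal(0, 1, N)
estimates = []
for z in obs:
    particles += rng.normal(0, q, N)                      # predict: propagate dynamics
    w = np.exp(-0.5 * ((z - particles) / r) ** 2)         # weight by observation likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                      # resample proportionally to weights
    particles = particles[idx]
    estimates.append(particles.mean())

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(truth[1:])) ** 2))
print(f"tracking RMSE: {rmse:.3f}")
```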

  14. Evaluating amikacin dosage regimens in intensive care unit patients: a pharmacokinetic/pharmacodynamic analysis using Monte Carlo simulation.

    PubMed

    Zazo, Hinojal; Martín-Suárez, Ana; Lanao, José M

    2013-08-01

    The objectives of this study were to conduct a comparative pharmacokinetic/pharmacodynamic (PK/PD) evaluation, using Monte Carlo simulation, of conventional versus high-dose extended-interval dosage (HDED) regimens of amikacin (AMK) in intensive care unit (ICU) patients for an Acinetobacter baumannii infection model. The simulation was performed in five populations (a control population and four subpopulations of ICU patients). Using a specific AMK PK/PD model and Monte Carlo simulation, the following were generated: simulated AMK steady-state plasma level curves; PK/PD efficacy indexes [the time during which the serum drug concentration remains above the minimum inhibitory concentration (MIC) for a dosing period (%T>MIC) and the ratio of peak serum concentration to MIC (Cmax/MIC)]; evolution of bacterial growth curves; and adaptive resistance to treatment. A higher probability of bacterial resistance was observed with the HDED regimen compared with the conventional dosage regimen. A statistically significant increase in Cmax/MIC and a statistically significant reduction in %T>MIC with the HDED regimen were obtained. A multiple linear relationship between CFU values at 24 h and both Cmax/MIC and %T>MIC was obtained. In conclusion, with the infection model tested, the likelihood of resistance to treatment may be higher against pathogens with a high MIC under the HDED regimen, considering that in many ICU patients the %T>MIC may be limited. If a sufficient value of %T>MIC (≥60%) is not reached, even though the Cmax/MIC is high, the therapeutic efficacy of the treatment may not be guaranteed. This study indicates that different AMK dosing strategies could directly influence efficacy in ICU patients.

  15. Multilevel sequential Monte Carlo samplers

    SciTech Connect

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias, with the step-size level $h_L$. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels $\infty > h_0 > h_1 > \cdots > h_L$. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
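
    The telescoping identity is easy to exercise in code. The sketch below estimates an expectation with a plain MLMC estimator, using a midpoint-rule "solver" whose grid is refined with the level as a stand-in for a PDE solve; it illustrates the identity $E[g_L] = E[g_0] + \sum_l E[g_l - g_{l-1}]$ itself, not the paper's SMC extension.

```python
import numpy as np

rng = np.random.default_rng(8)

def g_level(level, u):
    # Placeholder "solver": midpoint rule for the integral of exp(u*x) on [0, 1],
    # refined as the level increases (finer grid = smaller discretisation bias).
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.exp(u * x).mean()

def mlmc(L=6, n_per_level=(50_000, 20_000, 8_000, 3_000, 1_200, 500, 200)):
    est = 0.0
    for l in range(L + 1):
        u = rng.normal(0.0, 1.0, n_per_level[l])   # random model input per sample
        fine = np.array([g_level(l, ui) for ui in u])
        if l == 0:
            est += fine.mean()                     # coarsest-level estimate E[g_0]
        else:
            coarse = np.array([g_level(l - 1, ui) for ui in u])
            est += (fine - coarse).mean()          # correction term E[g_l - g_{l-1}]
    return est

print("MLMC estimate:", mlmc(), "(exact: E[(e^u - 1)/u] ~ 1.195)")
```

    Because the level-l corrections have small variance, most samples are spent on the cheap coarse levels, which is where the computational saving comes from.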

  16. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias, with the step-size level $h_L$. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels $\infty > h_0 > h_1 > \cdots > h_L$. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.

  17. Monte Carlo Particle Lists: MCPL

    NASA Astrophysics Data System (ADS)

    Kittelmann, T.; Klinkby, E.; Knudsen, E. B.; Willendrup, P.; Cai, X. X.; Kanaki, K.

    2017-09-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
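
    To illustrate the concept of a binary particle-state list (not the actual MCPL byte layout or API, which are defined in the paper and its reference code), the sketch below writes and reads a tiny fixed-record file of (PDG code, kinetic energy, position) tuples.

```python
import struct

# Hypothetical fixed-record layout: int32 PDG code + 4 little-endian doubles.
# This is NOT the real MCPL format; it only illustrates the idea of the file type.
RECORD = struct.Struct("<i4d")

particles = [(2112, 2.5e6, 0.0, 0.0, 1.0),   # a neutron, 2.5 MeV, at z = 1
             (22, 1.0e6, 0.1, -0.2, 5.0)]    # a photon, 1 MeV

with open("particles.bin", "wb") as f:
    f.write(struct.pack("<i", len(particles)))      # minimal header: particle count
    for p in particles:
        f.write(RECORD.pack(*p))

with open("particles.bin", "rb") as f:
    (n,) = struct.unpack("<i", f.read(4))
    for _ in range(n):
        pdg, ekin, x, y, z = RECORD.unpack(f.read(RECORD.size))
        print(pdg, ekin, (x, y, z))
```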

  18. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  19. Suitable Candidates for Monte Carlo Solutions.

    ERIC Educational Resources Information Center

    Lewis, Jerome L.

    1998-01-01

    Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)

  1. A Classroom Note on Monte Carlo Integration.

    ERIC Educational Resources Information Center

    Kolpas, Sid

    1998-01-01

    The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
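
    The same classroom idea in Python (the note's program is in Quick BASIC): estimate the integral of f over [a, b] as (b − a) times the sample mean of f at uniform random points, with a simple standard-error estimate.

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=42):
    rng = random.Random(seed)
    vals = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return (b - a) * mean, (b - a) * (var / n) ** 0.5   # estimate, standard error

est, se = mc_integrate(lambda x: x * x, 0.0, 1.0)
print(f"integral of x^2 on [0, 1]: {est:.5f} +/- {se:.5f} (exact: 1/3)")
```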

  2. Applications of Monte Carlo Methods in Calculus.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)

  3. Development of Monte Carlo Capability for Orion Parachute Simulations

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. The Monte Carlo tool described in
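
    A stripped-down sketch of that dispersion mechanism: analyst-defined rules perturb a nominal input set, and the core simulation is run repeatedly on the dispersed inputs. The input names, rules, and the rate-of-descent toy model are hypothetical placeholders, not NASA's DSS/DSSA/DTVSim interfaces.

```python
import numpy as np

rng = np.random.default_rng(9)

nominal = {"mass_kg": 9000.0, "drag_area_m2": 700.0}   # hypothetical input set
rules = {  # analyst-defined dispersion rules: distribution + parameters per input
    "mass_kg":      lambda v: v * rng.normal(1.0, 0.02),    # 2% normal dispersion
    "drag_area_m2": lambda v: v * rng.uniform(0.95, 1.05),  # +/-5% uniform dispersion
}

def core_simulation(inputs):
    # Placeholder core simulation: equilibrium rate of descent for a drag device.
    g, rho = 9.81, 1.2
    return np.sqrt(2 * inputs["mass_kg"] * g / (rho * inputs["drag_area_m2"]))

results = []
for _ in range(2000):                                   # repeated dispersed runs
    dispersed = {k: rules[k](v) for k, v in nominal.items()}
    results.append(core_simulation(dispersed))

print(f"rate of descent: mean {np.mean(results):.2f} m/s, "
      f"99th pct {np.percentile(results, 99):.2f} m/s")
```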

  4. Monte Carlo verification of polymer gel dosimetry applied to radionuclide therapy: a phantom study.

    PubMed

    Gear, J I; Charles-Edwards, E; Partridge, M; Flux, G D

    2011-11-21

    This study evaluates the dosimetric performance of the polymer gel dosimeter 'Methacrylic and Ascorbic acid in Gelatin, initiated by Copper' and its suitability for quality assurance and analysis of I-131-targeted radionuclide therapy dosimetry. Four batches of gel were manufactured in-house and sets of calibration vials and phantoms were created containing different concentrations of I-131-doped gel. Multiple dose measurements were made up to 700 h post preparation and compared to equivalent Monte Carlo simulations. In addition to uniformly filled phantoms the cross-dose distribution from a hot insert to a surrounding phantom was measured. In this example comparisons were made with both Monte Carlo and a clinical scintigraphic dosimetry method. Dose-response curves generated from the calibration data followed a sigmoid function. The gels appeared to be stable over many weeks of internal irradiation with a delay in gel response observed at 29 h post preparation. This was attributed to chemical inhibitors and slow reaction rates of long-chain radical species. For this reason, phantom measurements were only made after 190 h of irradiation. For uniformly filled phantoms of I-131 the accuracy of dose measurements agreed to within 10% when compared to Monte Carlo simulations. A radial cross-dose distribution measured using the gel dosimeter compared well to that calculated with Monte Carlo. Small inhomogeneities were observed in the dosimeter attributed to non-uniform mixing of monomer during preparation. However, they were not detrimental to this study where the quantitative accuracy and spatial resolution of polymer gel dosimetry were far superior to that calculated using scintigraphy. The difference between Monte Carlo and gel measurements was of the order of a few cGy, whilst with the scintigraphic method differences of up to 8 Gy were observed. A manipulation technique is also presented which allows 3D scintigraphic dosimetry measurements to be compared to polymer

  5. Monte Carlo verification of polymer gel dosimetry applied to radionuclide therapy: a phantom study

    NASA Astrophysics Data System (ADS)

    Gear, J. I.; Charles-Edwards, E.; Partridge, M.; Flux, G. D.

    2011-11-01

    This study evaluates the dosimetric performance of the polymer gel dosimeter 'Methacrylic and Ascorbic acid in Gelatin, initiated by Copper' and its suitability for quality assurance and analysis of I-131-targeted radionuclide therapy dosimetry. Four batches of gel were manufactured in-house and sets of calibration vials and phantoms were created containing different concentrations of I-131-doped gel. Multiple dose measurements were made up to 700 h post preparation and compared to equivalent Monte Carlo simulations. In addition to uniformly filled phantoms the cross-dose distribution from a hot insert to a surrounding phantom was measured. In this example comparisons were made with both Monte Carlo and a clinical scintigraphic dosimetry method. Dose-response curves generated from the calibration data followed a sigmoid function. The gels appeared to be stable over many weeks of internal irradiation with a delay in gel response observed at 29 h post preparation. This was attributed to chemical inhibitors and slow reaction rates of long-chain radical species. For this reason, phantom measurements were only made after 190 h of irradiation. For uniformly filled phantoms of I-131 the accuracy of dose measurements agreed to within 10% when compared to Monte Carlo simulations. A radial cross-dose distribution measured using the gel dosimeter compared well to that calculated with Monte Carlo. Small inhomogeneities were observed in the dosimeter attributed to non-uniform mixing of monomer during preparation. However, they were not detrimental to this study where the quantitative accuracy and spatial resolution of polymer gel dosimetry were far superior to that calculated using scintigraphy. The difference between Monte Carlo and gel measurements was of the order of a few cGy, whilst with the scintigraphic method differences of up to 8 Gy were observed. A manipulation technique is also presented which allows 3D scintigraphic dosimetry measurements to be compared to polymer

  6. SPECIAL ISSUE DEVOTED TO MULTIPLE RADIATION SCATTERING IN RANDOM MEDIA: Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    NASA Astrophysics Data System (ADS)

    Bashkatov, A. N.; Genina, Elina A.; Kochubei, V. I.; Tuchin, Valerii V.

    2006-12-01

    Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates.

  7. Monte Carlo docking with ubiquitin.

    PubMed Central

    Cummings, M. D.; Hart, T. N.; Read, R. J.

    1995-01-01

    The development of general strategies for the performance of docking simulations is prerequisite to the exploitation of this powerful computational method. Comprehensive strategies can only be derived from docking experiences with a diverse array of biological systems, and we have chosen the ubiquitin/diubiquitin system as a learning tool for this process. Using our multiple-start Monte Carlo docking method, we have reconstructed the known structure of diubiquitin from its two halves as well as from two copies of the uncomplexed monomer. For both of these cases, our relatively simple potential function ranked the correct solution among the lowest energy configurations. In the experiments involving the ubiquitin monomer, various structural modifications were made to compensate for the lack of flexibility and for the lack of a covalent bond in the modeled interaction. Potentially flexible regions could be identified using available biochemical and structural information. A systematic conformational search ruled out the possibility that the required covalent bond could be formed in one family of low-energy configurations, which was distant from the observed dimer configuration. A variety of analyses was performed on the low-energy dockings obtained in the experiment involving structurally modified ubiquitin. Characterization of the size and chemical nature of the interface surfaces was a powerful adjunct to our potential function, enabling us to distinguish more accurately between correct and incorrect dockings. Calculations with the structure of tetraubiquitin indicated that the dimer configuration in this molecule is much less favorable than that observed in the diubiquitin structure, for a simple monomer-monomer pair. Based on the analysis of our results, we draw conclusions regarding some of the approximations involved in our simulations, the use of diverse chemical and biochemical information in experimental design and the analysis of docking results, as well as

  8. Monte Carlo simulation of intercalated carbon nanotubes.

    PubMed

    Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter

    2007-01-01

    Monte Carlo simulations of single- and double-walled carbon nanotubes (CNTs) intercalated with different metals have been carried out. The interrelation between the length of a CNT and the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has shown that there is a correlation between the length of a CNT and the number of endo-atoms of a specific type. In the case of a metallic CNT (9,0) with a length of 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight; thus, a CNT of this size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall, with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the larger ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms.

  9. Analysis of uncertainties in Monte Carlo simulated organ and effective dose in chest CT: scanner- and scan-related factors

    NASA Astrophysics Data System (ADS)

    Muryn, John S.; Morgan, Ashraf G.; Liptak, Chris L.; Dong, Frank F.; Segars, W. Paul; Primak, Andrew N.; Li, Xiang

    2017-04-01

    In Monte Carlo simulation of CT dose, many input parameters are required (e.g. bowtie filter properties and scan start/end location). Our goal was to examine the uncertainties in patient dose when input parameters were inaccurate. Using a validated Monte Carlo program, organ dose from a chest CT scan was simulated for an average-size female phantom using a reference set of input parameter values (treated as the truth). Additional simulations were performed in which errors were purposely introduced into the input parameter values. The effects on four dose quantities were analyzed: organ dose (mGy/mAs), effective dose (mSv/mAs), CTDIvol-normalized organ dose (unitless), and DLP-normalized effective dose (mSv/mGy·cm). At 120 kVp, when the spectral half-value layer deviated from its true value by ±1.0 mm Al, the four dose quantities had errors of 18%, 7%, 14% and 2%, respectively. None of the dose quantities were affected significantly by errors in photon path length through the graphite section of the bowtie filter; a path length error as large as 5 mm produced dose errors of ⩽2%. In contrast, an error of this magnitude in the aluminum section produced dose errors of ⩽14%. At a total collimation of 38.4 mm, when the radiation beam width deviated from its true value by ±3 mm, dose errors were ⩽7%. Errors in tube starting angle had little impact on effective dose (errors ⩽1%); however, they produced organ dose errors as high as 66%. When the assumed scan length was longer than the truth by 4 cm, organ dose errors were up to 137%. The corresponding error was 24% for effective dose, but only 3% for DLP-normalized effective dose. Lastly, when the scan isocenter deviated from the patient’s anatomical center by 5 cm, organ and effective dose errors were up to 18% and 8%, respectively.

  10. Observation and a Monte Carlo Analysis of the Energetic Radiation Associated with Winter Thunderstorm Activities in Japan.

    NASA Astrophysics Data System (ADS)

    Torii, T.; Okuyama, S.; Ishiduka, A.; Nozaki, T.; Nawa, Y.; Sugita, T.

    2005-12-01

    Gamma-ray dose-rate increases associated with winter thunderstorm activity have been observed in the coastal area of the Sea of Japan. The following features are clear from the data obtained by environmental radiation monitors: 1. Almost all dose-rate increases during thunderstorms are several to several tens of times the background levels at each monitoring point. 2. The rise time of the dose-rate enhancement is a few tens of seconds. 3. The affected areas of the enhanced radiation appear to be quite local, because in many cases only one or two of the monitors situated several hundred meters away from each other showed dose-rate increases. 4. The dose-rate increases at two monitors several hundred meters apart do not always start simultaneously; time lags of ten seconds are sometimes observed. 5. Comparison of the readings of the two detector types in the monitors (NaI(Tl) scintillator and ionization chamber) suggests that energetic radiation (several MeV) is emitted while the dose rate is increasing. In order to investigate the generation of energetic radiation originating in thunderstorm electric fields, we have calculated the behavior of secondary cosmic rays (the electromagnetic component and muons) in electric fields with the Monte Carlo method. In the calculation, the electron and photon fluxes increase greatly in the region where the field strength exceeds about 280 P(z) kV/m-atm, where P(z) is the atmospheric pressure (atm) at altitude z (m), and these energy spectra show a large increase in the energy region up to a few tens of MeV. We have also carried out Monte Carlo calculations of the beta and gamma rays emitted by radon progeny in thunderstorm electric fields. In the calculation for the radon progeny, the electron flux shows notable increases in the strong-electric-field region, while the photon flux does not fluctuate significantly. As well as the secondary cosmic rays, the radon

  11. Integrated Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f_NL) in the recent CMB data

    SciTech Connect

    Kim, Jaiseung

    2011-04-01

    We have made a Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f_NL) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained f_NL and the cosmological parameters so that the uncertainties of the cosmological parameters can properly propagate into the f_NL estimation. Investigating the parameter likelihoods deduced from the MCMC samples, we find a slight deviation from a Gaussian shape, which makes a Fisher matrix estimation less accurate. Therefore, we have estimated the confidence interval of f_NL by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis are in good agreement with other results, but the confidence interval is slightly different.

  12. A joint Monte Carlo analysis of seafloor compliance, Rayleigh wave dispersion and receiver functions at ocean bottom seismic stations offshore New Zealand

    NASA Astrophysics Data System (ADS)

    Ball, Justin S.; Sheehan, Anne F.; Stachnik, Joshua C.; Lin, Fan-Chi; Collins, John A.

    2014-12-01

    Body-wave imaging techniques such as receiver function analysis can be notoriously difficult to employ on ocean-bottom seismic data, due largely to multiple reverberations within the water and low-velocity sediments. In lieu of suppressing this coherently scattered noise in ocean-bottom receiver functions, these site effects can be modeled in conjunction with shear-velocity information from seafloor compliance and surface wave dispersion measurements to discern crustal structure. A novel technique to estimate 1-D crustal shear-velocity profiles from these data using Monte Carlo sampling is presented here. We find that seafloor compliance inversions and P-S conversions observed in the receiver functions provide complementary constraints on sediment velocity and thickness. Incoherent noise in receiver functions from the MOANA ocean-bottom seismic experiment limits the accuracy of the practical analysis at crustal scales, but synthetic recovery tests and comparison with independent unconstrained nonlinear optimization results affirm the utility of this technique in principle.

  13. Kullback-Leibler Markov chain Monte Carlo--a new algorithm for finite mixture analysis and its application to gene expression data.

    PubMed

    Tatarinova, Tatiana; Bouck, John; Schumitzky, Alan

    2008-08-01

    In this paper, we study Bayesian analysis of nonlinear hierarchical mixture models with a finite but unknown number of components. Our approach is based on Markov chain Monte Carlo (MCMC) methods. One of the applications of our method is directed to the clustering problem in gene expression analysis. From a mathematical and statistical point of view, we discuss the following topics: theoretical and practical convergence problems of the MCMC method; determination of the number of components in the mixture; and computational problems associated with likelihood calculations. In the existing literature, these problems have mainly been addressed in the linear case. One of the main contributions of this paper is developing a method for the nonlinear case. Our approach is based on a combination of methods including Gibbs sampling, random permutation sampling, birth-death MCMC, and Kullback-Leibler distance.

  14. Markov Chain Monte Carlo approaches to analysis of genetic and environmental components of human developmental change and G x E interaction.

    PubMed

    Eaves, Lindon; Erkanli, Alaattin

    2003-05-01

    The linear structural model has provided the statistical backbone of the analysis of twin and family data for 25 years. A new generation of questions cannot easily be forced into the framework of current approaches to modeling and data analysis because they involve nonlinear processes. Maximizing the likelihood with respect to the parameters of such nonlinear models is often cumbersome and does not yield easily to current numerical methods. The application of Markov Chain Monte Carlo (MCMC) methods to modeling the nonlinear effects of genes and environment in MZ and DZ twins is outlined. Nonlinear developmental change and genotype x environment interaction in the presence of genotype-environment correlation are explored in simulated twin data. The MCMC method recovers the simulated parameters and provides estimates of error and latent (missing) trait values. Possible limitations of MCMC methods are discussed. Further studies are necessary to explore the value of an approach that could extend the horizons of research in developmental genetic epidemiology.

  15. KULLBACK-LEIBLER MARKOV CHAIN MONTE CARLO — A NEW ALGORITHM FOR FINITE MIXTURE ANALYSIS AND ITS APPLICATION TO GENE EXPRESSION DATA

    PubMed Central

    TATARINOVA, TATIANA; BOUCK, JOHN; SCHUMITZKY, ALAN

    2009-01-01

    In this paper, we study Bayesian analysis of nonlinear hierarchical mixture models with a finite but unknown number of components. Our approach is based on Markov chain Monte Carlo (MCMC) methods. One of the applications of our method is directed to the clustering problem in gene expression analysis. From a mathematical and statistical point of view, we discuss the following topics: theoretical and practical convergence problems of the MCMC method; determination of the number of components in the mixture; and computational problems associated with likelihood calculations. In the existing literature, these problems have mainly been addressed in the linear case. One of the main contributions of this paper is developing a method for the nonlinear case. Our approach is based on a combination of methods including Gibbs sampling, random permutation sampling, birth-death MCMC, and Kullback-Leibler distance. PMID:18763739

  16. A Monte Carlo Analysis of Weight Data from UF6 Cylinder Feed and Withdrawal Stations

    SciTech Connect

    Garner, James R; Whitaker, J Michael

    2015-01-01

    As nuclear facilities handling uranium hexafluoride (UF6) cylinders (e.g., UF6 production, enrichment, and fuel fabrication) increase in number and throughput, more automated safeguards measures will likely be needed to enable the International Atomic Energy Agency (IAEA) to achieve its safeguards objectives in a fiscally constrained environment. Monitoring the process data from the load cells built into the cylinder feed and withdrawal (F/W) stations (i.e., cylinder weight data) can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards task of confirming operations as declared (i.e., no undeclared activities). Researchers at Oak Ridge National Laboratory, Los Alamos National Laboratory, the Joint Research Center (in Ispra, Italy), and the University of Glasgow are investigating how this weight data can be used for IAEA safeguards purposes while fully protecting the operator's proprietary and sensitive information related to operations. A key question that must be resolved is: what recording frequency for the data from the process F/W stations is necessary to achieve safeguards objectives? This paper summarizes Monte Carlo simulations of typical feed, product, and tails withdrawal cycles and evaluates longer sampling intervals to determine the expected errors caused by low-frequency sampling and their impact on material balance calculations.

  17. Analysis of light incident location and detector position in early diagnosis of knee osteoarthritis by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Chen, Yanping; Chen, Yisha; Yan, Huangping; Wang, Xiaoling

    2017-01-01

    Early detection of knee osteoarthritis (KOA) is meaningful to delay or prevent its onset. Given the structural complexity of the knee joint, the locations of light incidence and detection are extremely important in optical inspection. In this paper, the propagation of 780-nm near-infrared photons in a three-dimensional knee joint model is simulated by the Monte Carlo (MC) method. Six light incident locations are chosen in total to analyze the influence of the incident and detection locations on the number of detected signal photons and the signal-to-noise ratio (SNR). First, a three-dimensional photon propagation model of the knee joint is reconstructed based on CT images. Then, MC simulation is performed to study the propagation of photons in the three-dimensional knee joint model. Photons which finally migrate out of the knee joint surface are numerically analyzed. By analyzing the number of signal photons and the SNR for the six given incident locations, the optimal incident and detection location is determined. Finally, a series of phantom experiments is conducted to verify the simulation results. According to the simulation and phantom experiment results, the best incident location is near the right side of the meniscus at the rear end of the left knee joint, and the detector should be placed near the patella.