Science.gov

Sample records for 3-d monte-carlo analysis

  1. Monte Carlo Reliability Analysis.

    DTIC Science & Technology

    1987-10-01

    to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975. (5) R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life... Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986). (4) E. Cinlar, Introduction

  2. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

    2014-06-01

    Physical analyses of the potential performance of LWRs with regard to fuel utilization require that an important part of the work be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology provide the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D, large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. To this end, a 3D pin-by-pin model with a large number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile (depleted uranium) zones. Furthermore, a tight-pitch lattice is selected (to increase the conversion of 238U into 239Pu), which leads to a harder neutron spectrum than in a standard PWR assembly. Under these conditions, two main subjects are discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.
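
    The variance assessment mentioned above rests on the standard history-based estimate of a tally's statistical uncertainty. A minimal sketch of that bookkeeping (the per-history scores below are drawn from a made-up distribution; in a transport code such as TRIPOLI-4® they would come from track-length or collision estimators):

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical per-history tally scores standing in for real estimator output.
        n_histories = 100_000
        scores = rng.exponential(scale=1.0, size=n_histories)

        mean = scores.mean()
        std_error = scores.std(ddof=1) / np.sqrt(n_histories)  # s / sqrt(N)
        rel_error = std_error / mean

        print(f"tally = {mean:.4f} +/- {std_error:.4f} (rel. err. {rel_error:.2%})")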

  3. Monte Carlo methods in genetic analysis

    SciTech Connect

    Lin, Shili

    1996-12-31

    Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.
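
    As a minimal illustration of the Markov chain Monte Carlo machinery the review builds on, here is a generic random-walk Metropolis sampler; the toy Gaussian log-target is an assumption for demonstration, not the pedigree likelihoods discussed in the paper:

        import math
        import random

        def log_target(x):
            # Hypothetical un-normalized log-density; in genetic applications this
            # would be a log-likelihood of pedigree data given latent genotypes.
            return -0.5 * x * x

        def metropolis(n_samples, step=1.0, x0=0.0):
            x, samples = x0, []
            for _ in range(n_samples):
                proposal = x + random.gauss(0.0, step)
                # Accept with probability min(1, pi(proposal) / pi(x));
                # 1 - random() lies in (0, 1], so the log is always defined.
                if math.log(1.0 - random.random()) < log_target(proposal) - log_target(x):
                    x = proposal
                samples.append(x)
            return samples

        chain = metropolis(10_000)
        print(sum(chain) / len(chain))  # near 0 for this toy target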

  4. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented in which both daylight-activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further supports the use of daylight as a potential light source for PDT, with which effective treatment depths of about 2 mm can be achieved.
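
    At the heart of such a radiation transfer model is the sampling of photon free paths between interactions. A deliberately simplified sketch (the optical coefficients are assumed round numbers, the rescattering is isotropic, and there is no layering, unlike the authors' multilayered model):

        import math
        import random

        mu_a, mu_s = 0.2, 10.0   # absorption / scattering coefficients (1/mm), assumed
        mu_t = mu_a + mu_s       # total interaction coefficient

        def propagate_photon(weight_cut=1e-4):
            depth, weight, cos_theta, max_depth = 0.0, 1.0, 1.0, 0.0
            while weight > weight_cut:
                # Beer-Lambert free path: s = -ln(xi) / mu_t
                step = -math.log(1.0 - random.random()) / mu_t
                depth += step * cos_theta
                if depth < 0.0:
                    break                     # escaped back through the surface
                max_depth = max(max_depth, depth)
                weight *= mu_s / mu_t         # implicit capture of the absorbed fraction
                cos_theta = 2.0 * random.random() - 1.0  # isotropic rescatter (toy)
            return max_depth

        depths = [propagate_photon() for _ in range(5000)]
        print("mean maximum penetration depth (mm):", sum(depths) / len(depths))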

  5. Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately than existing models the effects of various physiological properties of the skin in the case of subcutaneous vein imaging. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces, and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling, and compare our results with those obtained with a well-established Monte Carlo model and with real skin reflectance images.

  6. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    SciTech Connect

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
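
    For context, the brute-force alternative that a perturbation-free method like GEAR-MC avoids is rerunning the calculation at perturbed nuclear data and forming a central difference. A sketch of that baseline with a toy response function standing in for a full transport calculation (all numbers hypothetical):

        def response(sigma):
            # Stand-in for a Monte Carlo transport run returning a response R(sigma).
            return 1.0 / (1.0 + sigma)

        sigma0 = 2.0
        delta = 0.01 * sigma0                 # 1% cross-section perturbation
        dR = response(sigma0 + delta) - response(sigma0 - delta)

        # Relative sensitivity coefficient: S = (dR/R) / (dsigma/sigma)
        S = (dR / (2 * delta)) * (sigma0 / response(sigma0))
        print(f"S = {S:.4f}")                 # analytic value: -sigma/(1+sigma) = -2/3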

  7. Monte Carlo generators for studies of the 3D structure of the nucleon

    DOE PAGES

    Avakian, Harut; D'Alesio, U.; Murgia, F.

    2015-01-23

    In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations, will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.

  8. PEGASUS. 3D Direct Simulation Monte Carlo Code Which Solves for Geometries

    SciTech Connect

    Bartel, T.J.

    1998-12-01

    Pegasus is a 3D Direct Simulation Monte Carlo code which solves for geometries that can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.

  9. 3D Direct Simulation Monte Carlo Code Which Solves for Geometries

    SciTech Connect

    Bartel, Timothy J.

    1998-01-13

    Pegasus is a 3D Direct Simulation Monte Carlo code which solves for geometries that can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.

  10. Monte Carlo Modeling of Thin Film Deposition: Factors that Influence 3D Islands

    SciTech Connect

    Gilmer, G H; Dalla Torre, J; Baumann, F H; Diaz de la Rubia, T

    2002-01-04

    In this paper we discuss the use of atomistic Monte Carlo simulations to predict film microstructure evolution. We discuss physical vapor deposition, and are primarily concerned with films that are formed by the nucleation and coalescence of 3D islands. Multi-scale modeling is used in the sense that information obtained from molecular dynamics and first principles calculations provide atomic interaction energies, surface and grain boundary properties and diffusion rates for use in the Monte Carlo model. In this paper, we discuss some fundamental issues associated with thin film formation, together with an assessment of the sensitivity of the film morphology to the deposition conditions and materials properties.
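
    The event-selection loop at the core of such atomistic kinetic Monte Carlo models can be sketched in a few lines. The rate catalogue below is invented for illustration; in the work described above, the rates come from molecular dynamics and first-principles calculations:

        import math
        import random

        # Hypothetical event catalogue (events/s); a deposition model would list
        # deposition, terrace diffusion, edge diffusion, detachment, etc.
        rates = {"deposit": 1.0, "diffuse": 50.0, "detach": 0.1}

        def kmc_step(t):
            total = sum(rates.values())
            # Choose an event with probability proportional to its rate
            r = random.random() * total
            acc = 0.0
            for event, rate in rates.items():
                acc += rate
                if r < acc:
                    break
            # Advance the clock by an exponentially distributed residence time
            t += -math.log(1.0 - random.random()) / total
            return event, t

        t = 0.0
        for _ in range(5):
            event, t = kmc_step(t)
            print(f"t = {t:.4e} s: {event}")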

  11. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…
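
    The spreadsheet-style analysis described here translates directly into a short script. A hedged sketch of Monte Carlo risk analysis for an investment decision (the triangular distributions and all cash-flow figures are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials = 100_000

        # Hypothetical uncertain inputs: initial cost, annual cash flow, discount rate.
        cost = rng.triangular(90, 100, 120, n_trials)
        cash = rng.triangular(15, 20, 25, n_trials)
        rate = rng.triangular(0.06, 0.08, 0.10, n_trials)

        years = np.arange(1, 11)
        # NPV of ten years of level cash flows, vectorized over all trials
        discount = (1.0 + rate[:, None]) ** -years
        npv = (cash[:, None] * discount).sum(axis=1) - cost

        print(f"mean NPV = {npv.mean():.1f}")
        print(f"P(NPV < 0) = {(npv < 0).mean():.2%}")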

  12. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
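
    Vectorization here means processing many particle histories in lockstep rather than one at a time. A toy sketch of the idea with NumPy arrays standing in for vector hardware (the slab geometry, cross section, and absorption probability are all assumed values, and the model is forward-only with no backscatter):

        import numpy as np

        rng = np.random.default_rng(1)
        mu_t, slab, p_abs = 0.5, 5.0, 0.3  # 1/cm, cm, per-collision absorption (assumed)

        x = np.zeros(1_000_000)
        alive = np.ones_like(x, dtype=bool)
        transmitted = 0

        while alive.any():
            n = int(alive.sum())
            # Free flights for every live particle at once
            x[alive] += -np.log(1.0 - rng.random(n)) / mu_t
            escaped = alive & (x > slab)
            transmitted += int(escaped.sum())
            absorbed = alive & ~escaped & (rng.random(x.size) < p_abs)
            alive &= ~escaped & ~absorbed

        print("transmission fraction:", transmitted / x.size)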

  13. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    SciTech Connect

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open-source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users in the nuclear world, in particular in the fields of core design and radiation analysis. (authors)

  14. Bayesian phylogeny analysis via stochastic approximation Monte Carlo.

    PubMed

    Cheon, Sooyoung; Liang, Faming

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time.

  15. 3-D Direct Simulation Monte Carlo modeling of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Su, C.; Finklenburg, S.; Rubin, M.; Ip, W.; Keller, H.; Knollenberg, J.; Kührt, E.; Lai, I.; Skorov, Y.; Thomas, N.; Wu, J.; Chen, Y.

    2014-07-01

    After deep-space hibernation, ESA's Rosetta spacecraft was successfully woken up and obtained the first images of comet 67P/Churyumov-Gerasimenko (C-G) in March 2014. It is expected that Rosetta will rendezvous with comet 67P and start to observe the nucleus and coma of the comet in the middle of 2014. As the comet approaches the Sun, a significant increase in activity is expected. Our aim is to understand the physical processes in the coma with the help of modeling in order to interpret the resulting measurements and establish observational and data analysis strategies. DSMC (Direct Simulation Monte Carlo) [1] is a very powerful numerical method for studying rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow [2,3]. Comparisons between DSMC and fluid techniques have also been performed to establish the limits of these techniques [2,4]. The drawback of 3D DSMC is that it is computationally highly intensive and thus time consuming. However, the performance can be dramatically increased with parallel computing on Graphics Processing Units (GPUs) [5]. We have already studied a case with comet 9P/Tempel 1, where the Deep Impact observations were used to define the shape of the nucleus and the outflow was simulated with the DSMC approach [6,7]. For comet 67P, we intend to determine the gas flow field in the innermost coma and the surface outgassing properties from analyses of the flow field, to investigate dust acceleration by gas drag, and to compare with observations (including time variability). The boundary conditions are implemented with a nucleus shape model [8] and thermal models which are based on the surface heat-balance equation. Several different parameter sets have been investigated. The calculations have been performed using the PDSC++ (Parallel Direct Simulation Monte Carlo) code [9] developed by Wu and his coworkers [10-12]. Simulation tasks can be accomplished within 24

  16. A graphical user interface for calculation of 3D dose distribution using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Chow, J. C. L.; Leung, M. K. K.

    2008-02-01

    A software graphical user interface (GUI) for calculation of 3D dose distribution using Monte Carlo (MC) simulation is developed using MATLAB. This GUI (DOSCTP) provides a user-friendly platform for DICOM CT-based dose calculation using the EGSnrcMP-based DOSXYZnrc code. It offers numerous features not found in DOSXYZnrc, such as the ability to use multiple beams from different phase-space files, and has built-in dose analysis and visualization tools. DOSCTP is written completely in MATLAB, with integrated access to DOSXYZnrc and CTCREATE. The program functions may be divided into four subgroups, namely, beam placement, MC simulation with DOSXYZnrc, dose visualization, and export. Each is controlled by separate routines. The verification of DOSCTP was carried out by comparing plans with different beam arrangements (multi-beam/photon arc) on an inhomogeneous phantom as well as patient CT between the GUI and Pinnacle3. DOSCTP was developed and verified with the following features: (1) a built-in voxel editor to modify CT-based DOSXYZnrc phantoms for research purposes; (2) multi-beam placement is possible, which cannot be achieved using the current DOSXYZnrc code; (3) the treatment plan, including the dose distributions, contours and image set, can be exported to a commercial treatment planning system such as Pinnacle3 or to CERR using the RTOG format for plan evaluation and comparison; (4) a built-in RTOG-compatible dose reviewer for dose visualization and analysis, such as finding the volume of hot/cold spots in the 3D dose distributions based on a user threshold. DOSCTP greatly simplifies the use of DOSXYZnrc and CTCREATE, and offers numerous features not found in the original user code. Moreover, since phase-space beams can be defined and generated by the user, it is a particularly useful tool for carrying out plans using specifically designed irradiators/accelerators that cannot be found in the Linac library of commercial treatment planning systems.

  17. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  18. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

    SciTech Connect

    Cullen, D.E.

    1997-11-22

    TART97 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  19. Markov chain Monte Carlo linkage analysis of complex quantitative phenotypes.

    PubMed

    Hinrichs, A; Reich, T

    2001-01-01

    We report a Markov chain Monte Carlo analysis of the five simulated quantitative traits in Genetic Analysis Workshop 12 using the Loki software. Our objectives were to determine the efficacy of the Markov chain Monte Carlo method and to test a new scoring technique. Our initial blind analysis, on replicate 42 (the "best replicate"), successfully detected four out of the five disease loci and found no false positives. A power analysis shows that the software could usually detect 4 of the 10 trait/gene combinations at an empirical point-wise p-value of 1.5 × 10⁻⁴.

  20. A Monte Carlo method for combined segregation and linkage analysis

    SciTech Connect

    Guo, S.W.; Thompson, E.A.

    1992-11-01

    The authors introduce a Monte Carlo approach to combined segregation and linkage analysis of a quantitative trait observed in an extended pedigree. In conjunction with the Monte Carlo method of likelihood-ratio evaluation proposed by Thompson and Guo, the method provides for estimation and hypothesis testing. The greatest attraction of this approach is its ability to handle complex genetic models and large pedigrees. Two examples illustrate the practicality of the method. One is of simulated data on a large pedigree; the other is a reanalysis of published data previously analyzed by other methods. 40 refs, 5 figs., 5 tabs.

  1. Development, validation, and implementation of a patient-specific Monte Carlo 3D internal dosimetry platform

    NASA Astrophysics Data System (ADS)

    Besemer, Abigail E.

    Targeted radionuclide therapy is emerging as an attractive treatment option for a broad spectrum of tumor types because it has the potential to simultaneously eradicate both the primary tumor site and the metastatic disease throughout the body. Patient-specific absorbed dose calculations for radionuclide therapies are important for reducing the risk of normal tissue complications and optimizing tumor response. However, the only FDA-approved software for internal dosimetry calculates doses based on the MIRD methodology, which estimates mean organ doses using activity-to-dose scaling factors tabulated from standard phantom geometries. Despite the improved dosimetric accuracy afforded by direct Monte Carlo dosimetry methods, these methods are not widely used in routine clinical practice because of the complexity of implementation, lack of relevant standard protocols, and longer dose calculation times. The main goal of this work was to develop a Monte Carlo internal dosimetry platform in order to (1) calculate patient-specific voxelized dose distributions in a clinically feasible time frame, (2) examine and quantify the dosimetric impact of various parameters and methodologies used in 3D internal dosimetry methods, and (3) develop a multi-criteria treatment planning optimization framework for multi-radiopharmaceutical combination therapies. This platform utilizes serial PET/CT or SPECT/CT images to calculate voxelized 3D internal dose distributions with the Monte Carlo code Geant4. Dosimetry can be computed for any diagnostic or therapeutic radiopharmaceutical and for both pre-clinical and clinical applications. In this work, the platform's dosimetry calculations were successfully validated against previously published reference dose values calculated in standard phantoms for a variety of radionuclides, over a wide range of photon and electron energies, and for many different organs and tumor sizes. Retrospective dosimetry was also calculated for various pre

  2. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  3. Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems

    SciTech Connect

    Martinez, E.; Monasterio, P.R.; Marian, J.

    2011-02-20

    An extension of the synchronous parallel kinetic Monte Carlo (spkMC) algorithm developed by Martinez et al. [J. Comp. Phys. 227 (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are resolved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of serial calculations, which confirms the statistical validity of our algorithm. We have analyzed the parallel efficiency of spkMC and find that it scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, with very good agreement with state-of-the-art multispin simulations.

  4. Venus resurfacing rates: Constraints provided by 3-D Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Bullock, M. A.; Grinspoon, D. H.; Head, J. W.

    1993-01-01

    A 3-D Monte Carlo model that simulates the evolving surface of Venus under the influence of a flux of impacting objects and a variety of styles of volcanic resurfacing was implemented. For given rates of impact events and resurfacing, the model predicts the size-frequency and areal distributions of surviving impact craters as a function of time. The number of craters partially modified by volcanic events is also calculated as the surface evolves. It was found that a constant, global resurfacing rate of approximately 0.4 km³/yr is required to explain the observed distributions of both the entire crater population, and the population of craters partially modified by volcanic processes.
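
    The logic of such a resurfacing simulation compresses into a toy model: craters accumulate at a constant impact rate while random volcanic patches erase any crater they cover. A 1-D sketch with invented rates (the actual study is 3-D and also tracks partially modified craters):

        import random

        impact_rate = 0.5       # craters per Myr (invented)
        resurf_rate = 0.02      # fraction of surface buried per Myr (invented)
        dt, t_end = 1.0, 500.0  # Myr

        craters = []            # fractional surface position of each surviving crater
        t = 0.0
        while t < t_end:
            if random.random() < impact_rate * dt:
                craters.append(random.random())
            # One volcanic event per step buries craters inside its patch
            start = random.random()
            craters = [c for c in craters
                       if not (start <= c < start + resurf_rate * dt)]
            t += dt

        print("surviving craters after", t_end, "Myr:", len(craters))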

  5. OptogenSIM: a 3D Monte Carlo simulation platform for light delivery design in optogenetics

    PubMed Central

    Liu, Yuming; Jacques, Steven L.; Azimipour, Mehdi; Rogers, Jeremy D.; Pashaie, Ramin; Eliceiri, Kevin W.

    2015-01-01

    Optimizing light delivery for optogenetics is critical in order to accurately stimulate the neurons of interest while reducing nonspecific effects such as tissue heating or photodamage. Light distribution is typically predicted using the assumption of tissue homogeneity, which oversimplifies light transport in heterogeneous brain. Here, we present an open-source 3D simulation platform, OptogenSIM, which eliminates this assumption. This platform integrates a voxel-based 3D Monte Carlo model, generic optical property models of brain tissues, and a well-defined 3D mouse brain tissue atlas. The application of this platform in brain data models demonstrates that brain heterogeneity has moderate to significant impact depending on application conditions. Estimated light density contours can show the region of any specified power density in the 3D brain space and thus can help optimize the light delivery settings, such as the optical fiber position, fiber diameter, fiber numerical aperture, light wavelength and power. OptogenSIM is freely available and can be easily adapted to incorporate additional brain atlases. PMID:26713200

  6. 3D electro-thermal Monte Carlo study of transport in confined silicon devices

    NASA Astrophysics Data System (ADS)

    Mohamed, Mohamed Y.

    The simultaneous explosion of portable microelectronics devices and the rapid shrinking of microprocessor size have provided a tremendous motivation to scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less probable and practical. As new technologies overcome these limitations, they face new, unexpected problems, including the ability to accurately simulate and predict the behavior of these devices, and to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum correction becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering. The scattering rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature dependent scattering rates. One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non

  7. Monte Carlo analysis of satellite debris footprint dispersion

    NASA Technical Reports Server (NTRS)

    Rao, P. P.; Woeste, M. A.

    1979-01-01

    A comprehensive study is performed to investigate satellite debris impact point dispersion using a combination of Monte Carlo statistical analysis and parametric methods. The Monte Carlo technique accounts for nonlinearities in the entry point dispersion, which is represented by a covariance matrix of position and velocity errors. Because downrange distance of impact is a monotonic function of debris ballistic coefficient, a parametric method is useful for determining dispersion boundaries. The scheme is applied in the present analysis to estimate the Skylab footprint dispersions for a controlled reentry. A significant increase in the footprint dispersion is noticed for satellite breakup above a 200,000-ft altitude. A general discussion of the method used for analysis is presented together with some typical results obtained for the Skylab deboost mission, which was designed before NASA abandoned plans for a Skylab controlled reentry.

  8. 3D Visualization of Monte-Carlo Simulation's of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy ion tracks include a very high energy deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to the biomolecules and the tissue matrix, considered the most important lesions for acute and long term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy ion biophysics.

  9. RayXpert V1: 3D software for the gamma dose rate calculation by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Peyrard, P. F.; Pourrouquet, P.; Dossat, C.; Thomas, J. C.; Chatry, N.; Lavielle, D.; Chatry, C.

    2014-06-01

    RayXpert has been developed to ease access to the power and accuracy of the 3D Monte Carlo method in the field of gamma dose rate estimation. Optimization methods have been implemented to address dose calculation behind thick 3D structures. At the same time, the engineering interface makes all the preprocessing tasks (modeling, material settings,…) faster using predefined tables and push-button features.

  10. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is more pronounced at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance is highly convenient under a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our possibilities of simulation to sizes of L = 32, 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
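
    The exchange moves that the adaptive temperature set is meant to keep efficient follow the standard parallel tempering acceptance rule, P = min(1, exp[(β_i − β_j)(E_i − E_j)]). A minimal sketch of that rule in isolation (generic parallel tempering with invented energies, not the CUDA implementation):

        import math
        import random

        def try_swap(beta_i, beta_j, e_i, e_j):
            # Metropolis acceptance for exchanging two replicas
            delta = (beta_i - beta_j) * (e_i - e_j)
            return delta >= 0 or random.random() < math.exp(delta)

        betas = [1.0, 0.8, 0.6, 0.4]             # inverse temperatures
        energies = [-95.0, -88.0, -81.0, -70.0]  # hypothetical replica energies

        for k in range(len(betas) - 1):
            if try_swap(betas[k], betas[k + 1], energies[k], energies[k + 1]):
                energies[k], energies[k + 1] = energies[k + 1], energies[k]
                print(f"swap {k}<->{k+1} accepted")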

  11. Monte carlo simulation of 3-D buffered Ca(2+) diffusion in neuroendocrine cells.

    PubMed Central

    Gil, A; Segura, J; Pertusa, J A; Soria, B

    2000-01-01

    Buffered Ca(2+) diffusion in the cytosol of neuroendocrine cells is a plausible explanation for the slowness and latency in the secretion of hormones. We have developed a Monte Carlo simulation to treat the problem of 3-D diffusion and kinetic reactions of ions and buffers. The 3-D diffusion is modeled as a random walk process that follows the path of each ion and buffer molecule, combined locally with a stochastic treatment of the first-order kinetic reactions involved. Such modeling is able to predict [Ca(2+)] and buffer concentration time courses regardless of how low the calcium influx is, and it is therefore a convenient method for dealing with physiological calcium currents and concentrations. We study the effects of the diffusional and kinetic parameters of the model on the concentration time courses as well as on the local equilibrium of buffers with calcium. An immobile and fast endogenous buffer, as described previously (Biophys. J. 72:674-690), was able to reach local equilibrium with calcium; however, the exogenous buffers considered are displaced drastically from equilibrium at the start of the calcium pulse, particularly below the pores. The versatility of the method also allows the effect of different arrangements of calcium channels on submembrane gradients to be studied, including random distributions of calcium channels and channel clusters. The simulation shows how the particular distribution of channels or clusters can be of relevance for secretion in the case where the distribution of release granules is correlated with the channels or clusters. PMID:10620270
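
    The coupling of a random walk with stochastic first-order binding can be illustrated compactly in one dimension. All rate constants and the time step below are assumptions chosen only to keep the per-step event probabilities small, not the paper's parameters:

        import math
        import random

        D = 220.0                # Ca2+ diffusion coefficient, um^2/s (assumed)
        k_on, k_off = 1e8, 1e2   # binding / unbinding rates, 1/(M*s) and 1/s (assumed)
        buffer_conc = 1e-4       # free buffer concentration, M (assumed)
        dt = 1e-7                # time step, s

        sigma = math.sqrt(2.0 * D * dt)  # RMS displacement per step

        def step_ion(x, bound):
            if not bound:
                x += random.gauss(0.0, sigma)                  # diffusive step
                if random.random() < k_on * buffer_conc * dt:  # first-order capture
                    bound = True
            elif random.random() < k_off * dt:                 # first-order release
                bound = False
            return x, bound

        x, bound = 0.0, False
        for _ in range(10_000):
            x, bound = step_ion(x, bound)
        print(f"final position: {x:.3f} um, bound: {bound}")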

  12. Conceptual detector development and Monte Carlo simulation of a novel 3D breast computed tomography system

    NASA Astrophysics Data System (ADS)

    Ziegle, Jens; Müller, Bernhard H.; Neumann, Bernd; Hoeschen, Christoph

    2016-03-01

    A new 3D breast computed tomography (CT) system is under development, enabling imaging of microcalcifications in a fully uncompressed breast including posterior chest wall tissue. The system setup uses a steered electron beam impinging on small tungsten targets surrounding the breast to emit X-rays. A realization of the corresponding detector concept is presented in this work, and it is modeled through Monte Carlo simulations in order to quantify first characteristics of transmission and secondary photons. The modeled system comprises a vertical alignment of linear detectors held by a case that also hosts the breast. Detectors are separated by gaps to allow the passage of X-rays towards the breast volume. The detectors located directly on the opposite side of the gaps detect incident X-rays. Mechanically moving parts in an imaging system increase the duration of image acquisition and thus can cause motion artifacts. So, a major advantage of the presented system design is the combination of the fixed detectors and the fast steering electron beam, which enables a greatly reduced scan time. Thereby potential motion artifacts are reduced, so that the visualization of small structures such as microcalcifications is improved. The result of the simulation of a single projection shows high attenuation by parts of the detector electronics, causing low count levels at the opposing detectors which would require a flat-field correction, but it also shows a secondary-to-transmission ratio of all counted X-rays of less than 1 percent. Additionally, a single slice with details of various sizes was reconstructed using filtered backprojection. The smallest detail still visible in the reconstructed image has a size of 0.2 mm.

  13. Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124

    NASA Astrophysics Data System (ADS)

    Moreau, M.; Buvat, I.; Ammour, L.; Chouin, N.; Kraeber-Bodéré, F.; Chérel, M.; Carlier, T.

    2015-03-01

    Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated in addition to a Siddon-based ray tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and physical effects in a uniform water-equivalent phantom was efficient to get reasonable quantitative accuracy in homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.

  14. Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124.

    PubMed

    Moreau, M; Buvat, I; Ammour, L; Chouin, N; Kraeber-Bodéré, F; Chérel, M; Carlier, T

    2015-03-21

    Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated in addition to a Siddon-based ray tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and physical effects in a uniform water-equivalent phantom was efficient to get reasonable quantitative accuracy in homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.

  15. Canopy polarized BRDF simulation based on non-stationary Monte Carlo 3-D vector RT modeling

    NASA Astrophysics Data System (ADS)

    Kallel, Abdelaziz; Gastellu-Etchegorry, Jean Philippe

    2017-03-01

    Vector radiative transfer (VRT) has been widely used to simulate the polarized reflectance of the atmosphere and ocean. However, it has not yet been properly applied to describe the polarized reflectance of vegetation covers. In this study, we propose a 3-D VRT model based on a modified Monte Carlo (MC) forward ray tracing simulation to analyze vegetation canopy reflectance. Two kinds of leaf scattering are taken into account: (i) Lambertian diffuse reflectance and transmittance and (ii) specular reflection. A new method to estimate the condition on leaf orientation required to produce reflection is proposed, and the probability of its occurrence, Pl,max, is computed. It is then shown that Pl,max is low, but when reflection happens, the corresponding radiance Stokes vector, Io, is very high. Such a phenomenon dramatically increases the MC variance and leads to an irregular reflectance distribution function. For better regularization, we propose a non-stationary MC approach that simulates reflection for each sunlit leaf assuming that its orientation is randomly chosen according to its angular distribution. It is shown in this case that the average canopy reflection is proportional to Pl,max · Io, which produces a smooth distribution. Two experiments are conducted: (i) assuming leaf light polarization is due only to Fresnel reflection and (ii) the general polarization case. In the former experiment, our results confirm that in the forward direction the canopy polarizes light horizontally. In addition, they show that in inclined forward directions, diagonal polarization can be observed. In the latter experiment, polarization is produced in all orientations. It is particularly pointed out that specular polarization explains only part of the forward polarization. Diffuse scattering polarizes light horizontally and vertically in the forward and backward directions, respectively. A weak circular polarization signal is also observed near the backscattering direction. Finally, validation of the non

  16. 3D Monte Carlo model of optical transport in laser-irradiated cutaneous vascular malformations

    NASA Astrophysics Data System (ADS)

    Majaron, Boris; Milanič, Matija; Jia, Wangcun; Nelson, J. S.

    2010-11-01

    We have developed a three-dimensional Monte Carlo (MC) model of optical transport in skin and applied it to the analysis of port wine stain treatment with sequential laser irradiation and intermittent cryogen spray cooling. Our MC model extends the approach of the popular multi-layer model by Wang et al. [1] to three dimensions, thus allowing treatment of skin inclusions with more complex geometries and arbitrary irradiation patterns. To overcome the obvious drawbacks of either "escape" or "mirror" boundary conditions at the lateral boundaries of the finely discretized volume of interest (VOI), photons exiting the VOI are propagated in laterally infinite tissue layers with appropriate optical properties until they lose all their energy, escape into the air, or return to the VOI; the energy deposition outside of the VOI is not computed and recorded. After discussing the selection of tissue parameters, we apply the model to the analysis of blood photocoagulation and collateral thermal damage in the treatment of port wine stain (PWS) lesions with sequential laser irradiation and intermittent cryogen spray cooling.

  17. A Monte Carlo Dispersion Analysis of the X-33 Simulation Software

    NASA Technical Reports Server (NTRS)

    Williams, Peggy S.

    2001-01-01

    A Monte Carlo dispersion analysis has been completed on the X-33 software simulation. The simulation is based on a preliminary version of the software and is primarily used in an effort to define and refine how a Monte Carlo dispersion analysis would have been done on the final flight-ready version of the software. This report gives an overview of the processes used in the implementation of the dispersions and describes the methods used to accomplish the Monte Carlo analysis. Selected results from 1000 Monte Carlo runs are presented with suggestions for improvements in future work.
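
    Methodologically, such a dispersion analysis samples each uncertain model parameter from its assumed distribution, runs the simulation, and collects statistics on the outputs. A schematic sketch of that loop (the parameter set, distributions, and toy metric below are invented, not the X-33 models):

        import random
        import statistics

        def run_simulation(mass_err, cg_offset, wind_bias):
            # Stand-in for a full simulation run; returns a scalar performance
            # metric as a toy function of the dispersed inputs.
            return abs(50.0 * mass_err + 200.0 * cg_offset + 3.0 * wind_bias)

        results = []
        for _ in range(1000):
            mass_err = random.gauss(0.0, 0.02)       # 2% mass uncertainty (1-sigma)
            cg_offset = random.uniform(-0.01, 0.01)  # cg offset, m
            wind_bias = random.gauss(0.0, 5.0)       # wind bias, m/s
            results.append(run_simulation(mass_err, cg_offset, wind_bias))

        results.sort()
        print("mean metric:", statistics.mean(results))
        print("99th percentile:", results[int(0.99 * len(results))])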

  18. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2013-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  19. Direct simulation Monte Carlo analysis on parallel processors

    NASA Technical Reports Server (NTRS)

    Wilmoth, Richard G.

    1989-01-01

    A method is presented for executing a direct simulation Monte Carlo (DSMC) analysis using parallel processing. The method is based on using domain decomposition to distribute the work load among multiple processors, and the DSMC analysis is performed completely in parallel. Message passing is used to transfer molecules between processors and to provide the synchronization necessary for the correct physical simulation. Benchmark problems are described for testing the method and results are presented which demonstrate the performance on two commercially available multicomputers. The results show that reasonable parallel speedup and efficiency can be obtained if the problem is properly sized to the number of processors. It is projected that with a massively parallel system, performance exceeding that of current supercomputers is possible.

  20. Applicability of 3D Monte Carlo simulations for local values calculations in a PWR core

    NASA Astrophysics Data System (ADS)

    Bernard, Franck; Cochet, Bertrand; Jinaphanh, Alexis; Jacquet, Olivier

    2014-06-01

    As technical support of the French Nuclear Safety Authority, IRSN has been developing the MORET Monte Carlo code for many years in the framework of criticality safety assessment and is now working to extend its application to reactor physics. For that purpose, besides the validation for criticality safety (more than 2000 benchmarks from the ICSBEP Handbook have been modeled and analyzed), a complementary validation phase for reactor physics has been started, with benchmarks from the IRPhEP Handbook and others. In particular, to evaluate the applicability of MORET and other Monte Carlo codes for local flux or power density calculations in large power reactors, it has been decided to contribute to the "Monte Carlo Performance Benchmark" (hosted by the OECD/NEA). The aim of this benchmark is to monitor, in forthcoming decades, the performance progress of detailed Monte Carlo full core calculations. More precisely, it measures their advancement towards achieving high statistical accuracy in reasonable computation time for local power at the fuel pellet level. A full PWR reactor core is modeled to compute local power densities for more than 6 million fuel regions. This paper presents results obtained at IRSN for this benchmark with MORET and comparisons with MCNP. The number of fuel elements is so large that source convergence as well as statistical convergence issues could cause large errors in local tallies, especially in peripheral zones. Various sampling and tracking methods have been implemented in MORET, and their operational effects on such a complex case have been studied. Beyond convergence issues, computing local values in so many fuel regions could cause prohibitive slowing down of neutron tracking. To avoid this, energy grid unification and tally preparation before tracking have been implemented, tested, and proved to be successful. In this particular case, IRSN obtained promising results with MORET compared to MCNP, in terms of local power densities, standard

  1. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael

    2007-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  2. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger

    2008-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A inflight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  3. Photons, Electrons and Positrons Transport in 3D by Monte Carlo Techniques

    SciTech Connect

    2014-12-01

    Version 04 FOTELP-2014 is a new compact general-purpose version of the previous FOTELP-2K6 code, designed to simulate the transport of photons, electrons and positrons through three-dimensional material and source geometries by Monte Carlo techniques, using the subroutine package PENGEOM from the PENELOPE code under Linux-based and Windows OS. This new version includes the routine ELMAG for electron and positron transport simulation in electric and magnetic fields, a RESUME option, and the routine TIMER for obtaining the starting random number and for measuring the simulation time.

  4. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    SciTech Connect

    Cullen, D.E

    2000-11-22

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  5. TART98 a coupled neutron-photon 3-D, combinatorial geometry time dependent Monte Carlo Transport code

    SciTech Connect

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.

  6. Commissioning a CT-compatible LDR tandem and ovoid applicator using Monte Carlo calculation and 3D dosimetry

    SciTech Connect

    Adamson, Justus; Newton, Joseph; Yang Yun; Steffey, Beverly; Cai, Jing; Adamovics, John; Oldham, Mark; Chino, Junzo; Craciunescu, Oana

    2012-07-15

    Purpose: To determine the geometric and dose attenuation characteristics of a new commercially available CT-compatible LDR tandem and ovoid (T and O) applicator using Monte Carlo calculation and 3D dosimetry. Methods: For geometric characterization, we quantified physical dimensions and investigated a systematic difference found to exist between the nominal ovoid angle and the angle at which the afterloading buckets fall within the ovoid. For dosimetric characterization, we determined source attenuation through the asymmetric gold shielding in the buckets using Monte Carlo simulations and 3D dosimetry. The Monte Carlo code MCNP5 was used to simulate 1.5 × 10⁹ photon histories from a ¹³⁷Cs source placed in the bucket to achieve a statistical uncertainty of 1% at a 6 cm distance. For 3D dosimetry, the distribution about an unshielded source was first measured to evaluate the system for ¹³⁷Cs, after which the distribution was measured about sources placed in each bucket. Cylindrical PRESAGE® dosimeters (9.5 cm diameter, 9.2 cm height) with a central channel bored for source placement were supplied by Heuris Inc. The dosimeters were scanned with the Duke Large field of view Optical CT-Scanner before and after delivering a nominal dose at 1 cm of 5-8 Gy. During irradiation the dosimeter was placed in a water phantom to provide backscatter. The optical CT scan time lasted 15 min, during which 720 projections were acquired at 0.5° increments, and a 3D distribution was reconstructed with a (0.05 cm)³ isotropic voxel size. The distributions about the buckets were used to calculate a 3D distribution of the transmission rate through the bucket, which was applied to a clinical CT-based T and O implant plan. Results: The systematic difference in bucket angle relative to the nominal ovoid angle (105°) was 3.1°-4.7°. A systematic difference in bucket angle of 1°, 5°, and

  7. Parametric 3D Atmospheric Reconstruction in Highly Variable Terrain with Recycled Monte Carlo Paths and an Adapted Bayesian Inference Engine

    NASA Technical Reports Server (NTRS)

    Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.

    2012-01-01

    We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low resolution noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms best described as "multi-pixel" techniques that necessarily call for a 3D radiative transfer model (but demonstrated here in 2D); they can be added to conventional ones that exploit typically multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology uses adaptively, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.

  8. Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations

    SciTech Connect

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2012-06-15

    Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB© code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB© code was able to grow semi-realistic cell distributions (≈2 × 10⁸ cells in 1 cm³) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
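
    A rough sketch of the cell-packing step described above is given below; it substitutes a conservative bounding-sphere test for the paper's eigenvalue-based ellipsoid overlap check, and all sizes, counts, and distributions are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def grow_cell_packing(n_cells, box=100.0, max_tries=200):
    """Place randomized ellipsoidal cells in a cube, rejecting overlaps.

    Simplification: overlap is tested with each ellipsoid's bounding
    sphere (radius = largest semi-axis), which is conservative; the
    paper's eigenvalue method tests the ellipsoids themselves.
    """
    cells = []  # list of (center, semi_axes, euler_angles)
    for _ in range(n_cells):
        for _ in range(max_tries):
            center = rng.uniform(0.0, box, 3)
            axes = rng.uniform(4.0, 8.0, 3)     # semi-axes, e.g. microns
            euler = rng.uniform(0.0, np.pi, 3)  # random orientation
            if all(np.linalg.norm(center - c) > axes.max() + a.max()
                   for c, a, _ in cells):
                cells.append((center, axes, euler))
                break
    return cells

print(len(grow_cell_packing(500)), "cells placed without overlap")
```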

  9. Monte Carlo Simulations for Likelihood Analysis of the PEN experiment

    NASA Astrophysics Data System (ADS)

    Glaser, Charles; PEN Collaboration

    2017-01-01

    The PEN collaboration performed a precision measurement of the π⁺ → e⁺ν(γ) branching ratio with the goal of obtaining a relative uncertainty of 5 × 10⁻⁴ or better at the Paul Scherrer Institute. A precision measurement of the branching ratio Γ(π → eν(γ)) / Γ(π → μν(γ)) can be used to give mass bounds on "new", or non V-A, particles and interactions. This ratio also proves to be one of the most sensitive tests for lepton universality. The PEN detector consists of beam counters, an active target, a mini-time projection chamber, multi-wire proportional chamber, a plastic scintillating hodoscope, and a CsI electromagnetic calorimeter. The Geant4 Monte Carlo simulation is used to construct ultra-realistic events by digitizing energies and times, creating synthetic target waveforms, and fully accounting for photo-electron statistics. We focus on the detailed detector response to specific decay and background processes in order to sharpen the discrimination between them in the data analysis. Work supported by NSF grants PHY-0970013, 1307328, and others.

  10. Time series analysis of Monte Carlo neutron transport calculations

    NASA Astrophysics Data System (ADS)

    Nease, Brian Robert

    A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
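
    The central estimate is easy to reproduce on synthetic data: for an AR(1) series, the lag-1 autocorrelation estimates the autoregressive coefficient, which in this setting plays the role of the ratio of the desired mode eigenvalue to the fundamental one. In the sketch below the series is a synthetic stand-in, not an actual fission-source mode amplitude, and the eigenvalue ratio is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a mode amplitude extracted by POP analysis from
# the stationary MC fission source: x_t = rho * x_{t-1} + noise, where
# rho plays the role of k1/k0 (value below is assumed for illustration).
rho_true = 0.85
x = np.zeros(5000)
for t in range(1, x.size):
    x[t] = rho_true * x[t - 1] + rng.normal()

# The lag-1 autocorrelation estimates the AR(1) coefficient, i.e. k1/k0;
# multiplying by the fundamental eigenvalue from the k-eigenvalue run
# recovers the higher-mode eigenvalue.
rho_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
k0 = 1.0
print(f"estimated k1 = {rho_hat * k0:.3f} (true value {rho_true})")
```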

  11. Monte Carlo Simulation of rainfall hyetographs for analysis and design

    NASA Astrophysics Data System (ADS)

    Kottegoda, N. T.; Natale, L.; Raiteri, E.

    2014-11-01

    Observations of high intensity rainfalls have been recorded at gauging stations in many parts of the world. In some instances the resulting data sets may not be sufficient in their scope and variability for purposes of analysis or design. By directly incorporating statistical properties of hyetographs with respect to the number of events per year, storm duration, peak intensity, cumulative rainfall, and rising and falling limbs, we develop a fundamental procedure for Monte Carlo simulation. Rainfall from Pavia and Milano in the Lombardia region and from five gauging stations in the Piemonte region of northern Italy is used in this study. Firstly, we compare the hydrologic output from our model with that from other design storm methods for validation. Secondly, depth-duration-frequency curves are obtained from historical data and corresponding functions from simulated data are compared for further validation of the procedure. By adopting this original procedure one can simulate an unlimited range of realistic hydrographs that can be used in risk assessment. The potential for extension to ungauged catchments is shown.
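
    A minimal sketch of such a hyetograph generator is shown below; the distribution families and parameter values are illustrative placeholders, whereas the paper fits these statistics to the Lombardia and Piemonte gauge records.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_hyetograph(n_steps=64):
    """One synthetic storm: intensity [mm/h] versus time [min].

    Triangular rising/falling limbs around a sampled peak; all
    distributions and parameters are illustrative stand-ins.
    """
    duration = rng.exponential(120.0)            # storm duration [min]
    peak = rng.gamma(2.0, 15.0)                  # peak intensity [mm/h]
    t_peak = rng.uniform(0.2, 0.8) * duration    # time of the peak
    t = np.linspace(0.0, duration, n_steps)
    rising = peak * t / t_peak
    falling = peak * (duration - t) / (duration - t_peak)
    return t, np.minimum(rising, falling)

events_per_year = rng.poisson(25)                # storms in one year
storms = [simulate_hyetograph() for _ in range(events_per_year)]
print(events_per_year, "storms; first peak =",
      round(storms[0][1].max(), 1), "mm/h")
```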

  12. Spectrum simulation of rough and nanostructured targets from their 2D and 3D image by Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Schiettekatte, François; Chicoine, Martin

    2016-03-01

    Corteo is a program that implements the Monte Carlo (MC) method to simulate ion beam analysis (IBA) spectra of several techniques by following the ions' trajectories until a sufficiently large fraction of them reach the detector to generate a spectrum. Hence, it fully accounts for effects such as multiple scattering (MS). Here, a version of Corteo is presented where the target can be a 2D or 3D image. This image can be derived from micrographs where the different compounds are identified, therefore bringing extra information into the solution of an IBA spectrum, and potentially significantly constraining the solution. The image intrinsically includes many details such as the actual surface or interfacial roughness, or the actual shape and distribution of nanostructures. This can, for example, lead to the unambiguous identification of the stoichiometry of structures in a layer, or at least to better constraints on their composition. Because MC computes the trajectory of the ions in detail, it accurately simulates many aspects of their transport, such as ions coming back into the target after leaving it (re-entry), as well as going through a variety of nanostructure shapes and orientations. We show how, for example, as the ions' angle of incidence becomes shallower than the inclination distribution of a rough surface, this process tends to make the effective roughness smaller in a comparable 1D simulation (i.e. a narrower thickness distribution in a comparable slab simulation). Also, in ordered nanostructures, target re-entry can lead to replications of a peak in a spectrum. In addition, the bitmap description of the target can be used to simulate depth profiles such as those resulting from ion implantation, diffusion, and intermixing. Other improvements to Corteo include the possibility to interpolate the cross-section in angle-energy tables, and the generation of energy-depth maps.

  13. Monte Carlo Analysis of Quantum Transport and Fluctuations in Semiconductors.

    DTIC Science & Technology

    1986-02-18

    The present report contains technical matter related to the research performed on two different subjects. The first part concerns quantum ... methods to quantum transport within the Liouville formulation. The second part concerns fluctuations of carrier velocities and energies both in ... interactions) on the transport properties. Keywords: Monte Carlo; Charge Transport; Quantum Transport; Fluctuations; Semiconductor Physics; Master Equation.

  14. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
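
    As a taste of the "random sampling" and "collision physics" fundamentals such a course covers, the standard exponential free-flight kernel and the scatter-versus-absorb decision can be coded in a few lines (cross sections below are illustrative):

```python
import math
import random

def sample_flight_and_collision(sigma_t, sigma_s):
    """Distance to the next collision and the collision outcome.

    Path lengths are exponentially distributed with mean 1/sigma_t;
    the collision is a scatter with probability sigma_s/sigma_t.
    Cross sections are in 1/cm and purely illustrative.
    """
    d = -math.log(1.0 - random.random()) / sigma_t
    scattered = random.random() < sigma_s / sigma_t
    return d, scattered

random.seed(7)
print(sample_flight_and_collision(sigma_t=0.5, sigma_s=0.4))
```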

  15. Monte-Carlo Application for Nondestructive Nuclear Waste Analysis

    NASA Astrophysics Data System (ADS)

    Carasco, C.; Engels, R.; Frank, M.; Furletov, S.; Furletova, J.; Genreith, C.; Havenith, A.; Kemmerling, G.; Kettler, J.; Krings, T.; Ma, J.-L.; Mauerhofer, E.; Neike, D.; Payan, E.; Perot, B.; Rossbach, M.; Schitthelm, O.; Schumann, M.; Vasquez, R.

    2014-06-01

    Radioactive waste has to undergo a process of quality checking in order to verify its conformance with national regulations prior to its transport, intermediate storage and final disposal. Within the quality checking of radioactive waste packages, non-destructive assays are required to characterize their radio-toxic and chemo-toxic contents. The Institute of Energy and Climate Research - Nuclear Waste Management and Reactor Safety of the Forschungszentrum Jülich develops, in the framework of cooperations, nondestructive analytical techniques for the routine characterization of radioactive waste packages at the industrial scale. During the phase of research and development, Monte Carlo techniques are used to simulate the transport of particles, especially photons, electrons and neutrons, through matter and to obtain the response of detection systems. The radiological characterization of low and intermediate level radioactive waste drums is performed by segmented γ-scanning (SGS). To precisely and accurately reconstruct the isotope-specific activity content in waste drums from SGS measurements, an innovative method called SGSreco was developed. The Geant4 code was used to simulate the response of the collimated detection system for waste drums with different activity and matrix configurations. These simulations allow a far more detailed optimization, validation and benchmarking of SGSreco, since the construction of test drums covering a broad range of activity and matrix properties is time consuming and cost intensive. The MEDINA (Multi Element Detection based on Instrumental Neutron Activation) test facility was developed to identify and quantify non-radioactive elements and substances in radioactive waste drums. MEDINA is based on prompt and delayed gamma neutron activation analysis (P&DGNAA) using a 14 MeV neutron generator. MCNP simulations were carried out to study the response of the MEDINA facility in terms of gamma spectra, time dependence of the neutron energy spectrum

  16. Analysis of hysteretic spin transition and size effect in 3D spin crossover compounds investigated by Monte Carlo Entropic sampling technique in the framework of the Ising-type model

    NASA Astrophysics Data System (ADS)

    Chiruta, D.; Linares, J.; Dahoo, P. R.; Dimian, M.

    2015-02-01

    In spin crossover (SCO) systems, the shape of the hysteresis curves is closely related to the interactions between the molecules, which play an important role in the response of the system to an external parameter. The effects of short-range interactions on the different shapes of the spin transition phenomena were investigated. In this contribution we solve the corresponding Hamiltonian for a three-dimensional SCO system, taking into account short-range and long-range interactions, using a biased Monte Carlo entropic sampling technique and a semi-analytical method. We discuss the competition between the two interactions, which governs the low spin (LS) - high spin (HS) process for a three-dimensional network, and the cooperative effects. We demonstrate a strong correlation between the shape of the transition and the strength of the short-range interaction between molecules, and we identify the role of size effects in SCO systems.
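
    For concreteness, a plain-Metropolis sketch of an Ising-like SCO Hamiltonian with a nearest-neighbour term J, a mean-field long-range term G, and the temperature-dependent ligand-field/degeneracy term is given below. Note that the authors use a biased entropic sampling scheme rather than plain Metropolis, and every parameter value here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def metropolis_sco(L=8, J=0.5, G=0.2, delta=100.0, ln_g=5.0, T=60.0,
                   steps=20000):
    """Metropolis sampling of an Ising-like SCO model on an L^3 lattice.

    s=+1 is high spin (HS), s=-1 low spin (LS); the effective field
    (delta - T*ln_g)/2 encodes the ligand-field gap and the HS/LS
    degeneracy ratio (k_B = 1). Plain Metropolis stands in here for
    the authors' biased entropic sampling.
    """
    s = rng.choice([-1, 1], size=(L, L, L))
    h_eff = (delta - T * ln_g) / 2.0
    for _ in range(steps):
        i, j, k = rng.integers(0, L, 3)
        nn = (s[(i+1) % L, j, k] + s[(i-1) % L, j, k]
              + s[i, (j+1) % L, k] + s[i, (j-1) % L, k]
              + s[i, j, (k+1) % L] + s[i, j, (k-1) % L])
        # Energy change for flipping s -> -s at site (i, j, k).
        dE = 2.0 * s[i, j, k] * (J * nn + G * s.mean() - h_eff)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            s[i, j, k] *= -1
    return 0.5 * (1.0 + s.mean())   # high-spin fraction

print(f"HS fraction: {metropolis_sco():.2f}")
```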

  17. Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Moser, M.; Reichart, P.; Bergmaier, A.; Greubel, C.; Schiettekatte, F.; Dollinger, G.

    2016-03-01

    Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton-proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their path through the sample. In first order, there is no angular dependence due to elastic scattering. In second order, a path length effect due to different energy loss on the paths of the protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be de-convoluted depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically in first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) in order to calculate the depth of a coincidence event depending on the scattering angle. The code takes the individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with an accuracy in depth of about 1% of the sample thickness.

  18. Uncertainty analysis for fluorescence tomography with Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Reinbacher-Köstinger, Alice; Freiberger, Manuel; Scharfetter, Hermann

    2011-07-01

    Fluorescence tomography seeks to image an inaccessible fluorophore distribution inside an object like a small animal by injecting light at the boundary and measuring the light emitted by the fluorophore. Optical parameters (e.g. the conversion efficiency or the fluorescence life-time) of certain fluorophores depend on physiologically interesting quantities like the pH value or the oxygen concentration in the tissue, which allows functional rather than just anatomical imaging. To reconstruct the concentration and the life-time from the boundary measurements, a nonlinear inverse problem has to be solved. It is, however, difficult to estimate the uncertainty of the reconstructed parameters in the case of iterative algorithms and a large number of degrees of freedom. Uncertainties in fluorescence tomography applications arise from model inaccuracies, discretization errors, data noise and a priori errors. Thus, a Markov chain Monte Carlo (MCMC) method was used to consider all these uncertainty factors, exploiting the Bayesian formulation of conditional probabilities. A 2-D simulation experiment was carried out for a circular object with two inclusions. Both inclusions had a 2-D Gaussian distribution of the concentration and a constant life-time inside a representative area of the inclusion. Forward calculations were done with the diffusion approximation of Boltzmann's transport equation. The reconstruction results show that the percent estimation error of the lifetime parameter is lower by a factor of approximately 10 than that of the concentration. This finding suggests that lifetime imaging may provide more accurate information than concentration imaging only. The results must be interpreted with caution, however, because the chosen simulation setup represents a special case and a more detailed analysis remains to be done in future work to clarify whether the findings can be generalized.
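
    A minimal random-walk Metropolis sketch of this kind of posterior exploration is given below, with a toy two-output forward model standing in for the diffusion-approximation solver; the model, noise level, and parameter values are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the forward model: maps (concentration, lifetime)
# to two boundary measurements. The real forward model solves the
# diffusion approximation of Boltzmann's transport equation.
def forward(c, tau):
    return np.array([c * np.exp(-0.1 * tau), c * tau / (1.0 + tau)])

data = forward(2.0, 4.0) + rng.normal(0.0, 0.01, 2)  # synthetic data
sigma = 0.01

def log_post(theta):
    c, tau = theta
    if c <= 0 or tau <= 0:            # flat prior on positive values
        return -np.inf
    r = data - forward(c, tau)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

theta, lp, chain = np.array([1.0, 1.0]), -np.inf, []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05, 2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                # discard burn-in
print("posterior std (c, tau):", chain.std(axis=0))
```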

  19. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    SciTech Connect

    Pecchia, M.; D'Auria, F.; Mazzantini, O.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The aim of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 FSAR of Atucha-2. (authors)

  20. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
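
    The abstract does not spell out the algorithm, but one common reading of a fault-tree/Monte Carlo hybrid is sketched below: enumerate the discrete component-failure combinations with known probabilities (the fault-tree part), estimate the conditional risk of each combination by standard Monte Carlo, and combine the pieces by total probability. The failure probabilities and the toy conflict model are invented for illustration and are not values from the study.

```python
import random
from itertools import combinations

random.seed(1)

# Illustrative component failure probabilities (not values from the study).
P_FAIL = {"transponder": 1e-4, "pilot_visual": 1e-2, "conflict_detect": 1e-3}

def conditional_risk(failed, n=20000):
    """Conditional MC: separation-loss probability given a failure set.
    Toy model: each intact safety layer independently resolves the
    conflict with 99% probability; failed layers never do."""
    hits = 0
    for _ in range(n):
        resolved = any(random.random() < 0.99
                       for layer in P_FAIL if layer not in failed)
        hits += not resolved
    return hits / n

# Fault-tree part: weight each failure combination by its probability.
total, comps = 0.0, list(P_FAIL)
for r in range(len(comps) + 1):
    for combo in combinations(comps, r):
        w = 1.0
        for c in comps:
            w *= P_FAIL[c] if c in combo else 1.0 - P_FAIL[c]
        total += w * conditional_risk(set(combo))
print(f"estimated risk per encounter: {total:.2e}")
```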

  1. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    NASA Astrophysics Data System (ADS)

    Bergmann, Ryan

    Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in both criticality and fixed source modes. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
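
    A toy version of the remapping idea is sketched below, with NumPy's stable argsort standing in for WARP's high-efficiency parallel radix sort; the reaction codes and array sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Each live neutron has a sampled next-reaction type:
# 0 = scatter, 1 = fission, 2 = capture (codes are illustrative).
n = 1_000_000
reaction = rng.integers(0, 3, n)

# WARP-style remapping: rather than moving the large per-neutron data
# arrays, sort a small vector of indices by reaction type so threads
# processing the same reaction are contiguous (convergent on a GPU).
remap = np.argsort(reaction, kind="stable")  # stand-in for radix sort
starts = np.searchsorted(reaction[remap], [0, 1, 2])
print("scatter block starts at", starts[0],
      "| fission block starts at", starts[1])
```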

  2. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease-of-use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
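
    The underlying draw can be sketched generically (this is not the pyNSMC API): project random parameter perturbations onto the null space of the model Jacobian, so that to first order the simulated observations are unchanged. The sizes and matrices below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: a Jacobian of 20 observations w.r.t. 100
# parameters, and a calibrated parameter vector.
J = rng.normal(size=(20, 100))
p_cal = rng.normal(size=100)

# Null-space basis from the SVD: rows of Vt beyond the rank span the
# parameter directions the observations cannot "see".
U, s, Vt = np.linalg.svd(J, full_matrices=True)
null_basis = Vt[len(s):]

ensemble = []
for _ in range(50):
    z = rng.normal(size=100)
    z_null = null_basis.T @ (null_basis @ z)   # null-space projection
    ensemble.append(p_cal + z_null)

# Each realization leaves J @ (p - p_cal) ~ 0: calibration preserved.
print(max(np.abs(J @ (p - p_cal)).max() for p in ensemble))
```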

  3. Phonon transport analysis of semiconductor nanocomposites using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Malladi, Mayank

    Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of the thermoelectric devices which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer which accounts for both ballistic and diffusive transport phenomenon. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. Gray media (frequency independent phonons) is often assumed in the numerical solution of BTE using conventional methods such as finite volume and discrete ordinates methods. This thesis solves the BTE using Monte Carlo (MC) simulation technique which is more convenient and efficient when non-gray media (frequency dependent phonons) is considered. In the MC simulation, phonons are displaced inside the computational domain under the various boundary conditions and scattering effects. In this work, under the relaxation time approximation, thermal transport in the nanocomposites are

  4. Active neutron multiplicity analysis and Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Krick, M. S.; Ensslin, N.; Langner, D. G.; Miller, M. C.; Siebelist, R.; Stewart, J. E.; Ceo, R. N.; May, P. K.; Collins, L. L., Jr.

    Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined.

  5. Analysis of real-time networks with Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Mauclair, C.; Durrieu, G.

    2013-12-01

    Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to use them at best and lower costs. Today's tools are able to compute upper bounds of the end-to-end delays that a packet being sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, because the situations that lead to worst-case scenarios are rare. A novel approach based on Monte Carlo methods is suggested to study the effects of the asynchrony on performance.
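
    A toy illustration of why simulations rarely reach the bound: with independent asynchronous phase offsets at each hop, all hops almost never hit their worst phase simultaneously. The network model and numbers below are invented, not those of the paper.

```python
import random

random.seed(0)

def end_to_end_delay(period_us=100.0, tx_us=12.0, hops=4):
    """Packet crossing unsynchronized switches: at each hop it waits a
    random phase of that switch's cycle, plus a fixed transmit time."""
    t = 0.0
    for _ in range(hops):
        t += random.uniform(0.0, period_us) + tx_us
    return t

samples = [end_to_end_delay() for _ in range(100_000)]
weed = 4 * (100.0 + 12.0)   # analytic worst case: full wait every hop
print(f"mean {sum(samples)/len(samples):.0f} us, "
      f"observed max {max(samples):.0f} us, WEED bound {weed:.0f} us")
```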

  6. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis

    SciTech Connect

    William R. Martin; John C. Lee

    2009-12-30

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  7. Monte Carlo entropic sampling applied to Ising-like model for 2D and 3D systems

    NASA Astrophysics Data System (ADS)

    Jureschi, C. M.; Linares, J.; Dahoo, P. R.; Alayli, Y.

    2016-08-01

    In this paper we present the Monte Carlo entropic sampling (MCES) applied to an Ising-like model for 2D and 3D systems in order to show the influence of the interaction of the system's edge molecules with their local environment. We show that, as for the 1D and the 2D spin crossover (SCO) systems, the origin of multi-step transitions in 3D SCO is the effect of the interaction of edge molecules with their local environment, together with short and long range interactions. Another important result worth noting is the co-existence of step transitions with hysteresis and without hysteresis. By increasing the value of the edge interaction, L, the transition is shifted to lower temperatures: it means that the role of the edge interaction is equivalent to an applied negative pressure, because the edge interaction favours the HS state while the applied pressure favours the LS state. We also analyse, in this contribution, the role of the short- and long-range interactions, J and G respectively, with respect to the environment interaction, L.

  8. Scaling/LER study of Si GAA nanowire FET using 3D finite element Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Elmessary, Muhammad A.; Nagy, Daniel; Aldegunde, Manuel; Seoane, Natalia; Indalecio, Guillermo; Lindberg, Jari; Dettmer, Wulf; Perić, Djordje; García-Loureiro, Antonio J.; Kalna, Karol

    2017-02-01

    A 3D Finite Element (FE) Monte Carlo (MC) simulation toolbox incorporating 2D Schrödinger equation quantum corrections is employed to simulate ID-VG characteristics of a 22 nm gate length gate-all-around (GAA) Si nanowire (NW) FET demonstrating an excellent agreement against experimental data at both low and high drain biases. We then scale the Si GAA NW according to the ITRS specifications to a gate length of 10 nm predicting that the NW FET will deliver the required on-current of above 1 mA/μm and a superior electrostatic integrity with a nearly ideal sub-threshold slope of 68 mV/dec and a DIBL of 39 mV/V. In addition, we use a calibrated 3D FE quantum corrected drift-diffusion (DD) toolbox to investigate the effects of NW line-edge roughness (LER) induced variability on the sub-threshold characteristics (threshold voltage (VT), OFF-current (IOFF), sub-threshold slope (SS) and drain-induced-barrier-lowering (DIBL)) for the 22 nm and 10 nm gate length GAA NW FETs at low and high drain biases. We simulate variability with two LER correlation lengths (CL = 20 nm and 10 nm) and three root mean square values (RMS = 0.6, 0.7 and 0.85 nm).

  9. IM3D: A parallel Monte Carlo code for efficient simulations of primary radiation displacements and damage in 3D geometry

    PubMed Central

    Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju

    2015-01-01

    SRIM-like codes have limitations in describing general 3D geometries, for modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) method for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ∼10² times faster in serial execution and >10⁴ times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the “Quick Kinchin-Pease” and “Full Cascades” options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, the limitation of the displacements per atom (DPA) unit in quantifying radiation damage (such as inadequacy in quantifying degree of chemical mixing), are discussed. PMID:26658477

  10. A novel image reconstruction methodology based on inverse Monte Carlo analysis for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Kudrolli, Haris A.

    2001-04-01

    A three dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The "Three Dimensional Stochastic Sampling" (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates.
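
    For reference, the ML-EM update that such procedures implement is shown below with an explicit, randomly generated system matrix; the point of the stochastic-sampling approach is precisely that this matrix need not be stored, since photon histories are re-sampled by Monte Carlo instead. All sizes and data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)

# ML-EM with an explicit system matrix A (detector bins x voxels):
#   lambda <- lambda * [A^T (y / (A lambda))] / (A^T 1)
# SS3D replaces the stored A with Monte Carlo-sampled photon histories.
A = rng.random((64, 32))
A /= A.sum(axis=0)                       # normalize detection probability
x_true = rng.random(32)
y = rng.poisson(A @ x_true * 1000)       # noisy projection data

lam = np.ones(32)
sens = A.T @ np.ones(64)                 # sensitivity image
for _ in range(100):
    proj = np.maximum(A @ lam, 1e-12)    # guard against division by zero
    lam *= (A.T @ (y / proj)) / sens
err = np.linalg.norm(lam / 1000 - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```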

  11. SU-E-T-35: An Investigation of the Accuracy of Cervical IMRT Dose Distribution Using 2D/3D Ionization Chamber Arrays System and Monte Carlo Simulation

    SciTech Connect

    Zhang, Y; Yang, J; Liu, H; Liu, D

    2014-06-01

    Purpose: The purpose of this work is to compare the verification results of three solutions (2D/3D ionization chamber array measurements and Monte Carlo simulation); the results will help make a clinical decision as to how to do our cervical IMRT verification. Methods: Seven cervical cases were planned with Pinnacle 8.0m to meet the clinical acceptance criteria. The plans were recalculated in the Matrixx and Delta4 phantoms with the accurate plan parameters. The plans were also recalculated by Monte Carlo using leaf sequences and MUs for the individual plans of every patient, Matrixx and Delta4 phantom. All plans of the Matrixx and Delta4 phantoms were delivered and measured. The dose distribution of the iso slice, dose profiles and gamma maps of every beam were used to evaluate the agreement. Dose-volume histograms were also compared. Results: The dose distribution of the iso slice and dose profiles from the Pinnacle calculation were in agreement with the Monte Carlo simulation and the Matrixx and Delta4 measurements. A 95.2%/91.3% gamma pass ratio was obtained between the Matrixx/Delta4 measurement and Pinnacle distributions within the 3 mm/3% gamma criteria. A 96.4%/95.6% gamma pass ratio was obtained between the Matrixx/Delta4 measurement and the Monte Carlo simulation within the 2 mm/2% gamma criteria, and almost a 100% gamma pass ratio within the 3 mm/3% gamma criteria. The DVH plots show slight differences between Pinnacle and the Delta4 measurement as well as between Pinnacle and the Monte Carlo simulation, but excellent agreement between the Delta4 measurement and the Monte Carlo simulation. Conclusion: It was shown that Matrixx/Delta4 and Monte Carlo simulation can be used very efficiently to verify cervical IMRT delivery. In terms of gamma value the pass ratio of Matrixx was a little higher; however, Delta4 showed more problem fields. The primary advantage of Delta4 is the fact that it can measure true 3D dosimetry, while Monte Carlo can simulate dose in patient CT images rather than only in a phantom.
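
    For readers unfamiliar with the criterion quoted above, a brute-force global gamma evaluation on a 1D profile looks roughly like the sketch below (clinical systems such as Delta4 and Matrixx evaluate it on 2D/3D grids); the test profiles are synthetic.

```python
import numpy as np

def gamma_pass_rate(ref, meas, dx_mm, dta_mm=3.0, dd=0.03):
    """Global gamma analysis on a 1D dose profile (minimal sketch).

    For each measured point, gamma is the minimum over reference
    points of sqrt((distance/DTA)^2 + (dose diff/criterion)^2);
    a point passes when gamma <= 1.
    """
    norm = dd * ref.max()                 # global dose-difference norm
    x = np.arange(ref.size) * dx_mm
    gammas = np.empty(meas.size)
    for i, d_m in enumerate(meas):
        dist2 = ((x - x[i]) / dta_mm) ** 2
        dose2 = ((ref - d_m) / norm) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return 100.0 * np.mean(gammas <= 1.0)

ref = np.exp(-0.5 * ((np.arange(100) - 50) / 12.0) ** 2)
meas = ref * 1.02 + 0.005                 # 2% scaling plus a small offset
print(f"pass rate (3%/3 mm): {gamma_pass_rate(ref, meas, dx_mm=1.0):.1f}%")
```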

  12. Quantification of stochastic uncertainty propagation for Monte Carlo depletion methods in reactor analysis

    NASA Astrophysics Data System (ADS)

    Newell, Quentin Thomas

    The Monte Carlo method provides powerful geometric modeling capabilities for large problem domains in 3-D; therefore, the Monte Carlo method is becoming popular for 3-D fuel depletion analyses to compute quantities of interest in spent nuclear fuel, including isotopic compositions. The Monte Carlo approach has not been fully embraced due to unresolved issues concerning the effect of Monte Carlo uncertainties on the predicted results. Use of the Monte Carlo method to solve the neutron transport equation introduces stochastic uncertainty in the computed fluxes. These fluxes are used to collapse cross sections, estimate power distributions, and deplete the fuel within depletion calculations; therefore, the predicted number densities contain random uncertainties from the Monte Carlo solution. These uncertainties can be compounded in time because of the extrapolative nature of depletion and decay calculations. The objective of this research was to quantify the stochastic uncertainty propagation of the flux uncertainty, introduced by the Monte Carlo method, to the number densities for the different isotopes in spent nuclear fuel due to multiple depletion time steps. The research derived a formula that calculates the standard deviation in the nuclide number densities based on propagating the statistical uncertainty introduced when using coupled Monte Carlo depletion computer codes. The research was developed with the use of the TRITON/KENO sequence of the SCALE computer code. The linear uncertainty nuclide group approximation (LUNGA) method developed in this research approximated the variance of the ψN term, which is the variance in the flux shape due to uncertainty in the calculated nuclide number densities. Three different example problems were used in this research to calculate the standard deviation in the nuclide number densities using the LUNGA method. The example problems showed that the LUNGA method is capable of calculating the standard deviation of the nuclide

  13. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Li, Ming; Huang, Xiaobo; Kang, Zhan

    2015-08-01

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of an effective hydrogen storage and release method. Carbon nanotubes (CNTs) were viewed as one of the potential hydrogen containers, but it has been proved that pure CNTs cannot attain the desired target capacity of hydrogen storage. In this paper, we present a numerical study on the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and geometrical parameters, and the maximum hydrogen uptake can be achieved by a 3D nanostructure with optimal configuration and doping ratio obtained through a design optimization technique. For hydrogen desorption, a mechanical-deformation-driven hydrogen-release approach is proposed. Compared with the temperature/pressure change-induced hydrogen desorption method, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression but without structural failure observed. The approach is also reversible since the mechanical deformation in Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.
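
    A minimal grand canonical Monte Carlo kernel of the kind used in such adsorption studies is sketched below for a non-interacting gas; the H2-silicon interaction energy change that would enter the Boltzmann factors is omitted for brevity, and all values are illustrative (k_B = 1, thermal wavelength folded into lam3).

```python
import numpy as np

rng = np.random.default_rng(4)

def gcmc_step(N, V, mu, T, lam3=1.0):
    """One GCMC insertion/deletion attempt for a non-interacting gas.
    In a real adsorption run, the adsorbate-host interaction energy
    change would multiply each acceptance factor."""
    if rng.random() < 0.5:                           # try insertion
        acc = V / (lam3 * (N + 1)) * np.exp(mu / T)
        if rng.random() < min(1.0, acc):
            N += 1
    elif N > 0:                                      # try deletion
        acc = lam3 * N / V * np.exp(-mu / T)
        if rng.random() < min(1.0, acc):
            N -= 1
    return N

N, V, mu, T = 0, 1000.0, -2.0, 1.0
for _ in range(200_000):
    N = gcmc_step(N, V, mu, T)
print("final N =", N, "| ideal-gas expectation ≈",
      round(V * np.exp(mu / T), 1))
```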

  14. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    SciTech Connect

    Li, Ming; Kang, Zhan; Huang, Xiaobo

    2015-08-28

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of an effective hydrogen storage and release method. Carbon nanotubes (CNTs) were viewed as one of the potential hydrogen containers, but it has been proved that pure CNTs cannot attain the desired target capacity of hydrogen storage. In this paper, we present a numerical study on the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and geometrical parameters, and the maximum hydrogen uptake can be achieved by a 3D nanostructure with optimal configuration and doping ratio obtained through a design optimization technique. For hydrogen desorption, a mechanical-deformation-driven hydrogen-release approach is proposed. Compared with the temperature/pressure change-induced hydrogen desorption method, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression but without structural failure observed. The approach is also reversible since the mechanical deformation in Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.

  15. Method for Fast CT/SPECT-Based 3D Monte Carlo Absorbed Dose Computations in Internal Emitter Therapy

    PubMed Central

    Wilderman, S. J.; Dewaraja, Y. K.

    2010-01-01

    The DPM (Dose Planning Method) Monte Carlo electron and photon transport program, designed for fast computation of radiation absorbed dose in external beam radiotherapy, has been adapted to the calculation of absorbed dose in patient-specific internal emitter therapy. Because both its photon and electron transport mechanics algorithms have been optimized for fast computation in 3D voxelized geometries (in particular, those derived from CT scans), DPM is perfectly suited for performing patient-specific absorbed dose calculations in internal emitter therapy. In the updated version of DPM developed for the current work, the necessary inputs are a patient CT image, a registered SPECT image, and any number of registered masks defining regions of interest. DPM has been benchmarked for internal emitter therapy applications by comparing computed absorption fractions for a variety of organs in a Zubal phantom with reference results from the Medical Internal Radiation Dose (MIRD) Committee standards. In addition, the β decay source algorithm and the photon tracking algorithm of DPM have been further benchmarked by comparison to experimental data. This paper presents a description of the program, the results of the benchmark studies, and some sample computations using patient data from radioimmunotherapy studies using ¹³¹I. PMID:20305792

  16. A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.

    2013-12-01

    The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e. cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI) in the project of the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can treat a broadband solar flux calculation including a thermal infrared emission calculation by the k-distribution parameters of Sekiguchi and Nakajima (2008). In order to construct the 3D cloud field, we tried the following three methods: 1) stochastic clouds generated by randomizing the optical thickness distribution in each layer, with regularly-distributed tilted clouds, 2) numerical simulations by a non-hydrostatic model with a bin cloud microphysics model, and 3) the Minimum cloud Information Deviation Profiling Method (MIDPM) as explained later. As for method 2 (the numerical modeling method), we employed numerical simulation results of Californian summer stratus clouds simulated by a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012) with horizontal (vertical) grid spacings of 100m (20m) and 300m (20m) in a domain of 30km (x), 30km (y), 1.5km (z) and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of a horizontal resolution of 100m, the regionally averaged cloud optical thickness (COT) and its standard deviation were 3.0 and 4.3 for the pristine case and 8.5 and 7.4 for the polluted case, respectively. In the MIDPM method, we first construct a library of pairs of observed vertical profiles from active sensors and collocated imager products at the nadir footprint, i.e. spectral imager radiances, cloud optical thickness (COT), effective particle radius (RE) and cloud top temperature (Tc). We then select a

  17. Advanced Mesh-Enabled Monte Carlo capability for Multi-Physics Reactor Analysis

    SciTech Connect

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  18. Discrete ordinates-Monte Carlo coupling: A comparison of techniques in NERVA radiation analysis

    NASA Technical Reports Server (NTRS)

    Lindstrom, D. G.; Normand, E.; Wilcox, A. D.

    1972-01-01

    In the radiation analysis of the NERVA nuclear rocket system, two-dimensional discrete ordinates calculations are sufficient to provide detail in the pressure vessel and reactor assembly. Other parts of the system, however, require three-dimensional Monte Carlo analyses. To use these two methods in a single analysis, a means of coupling was developed whereby the results of a discrete ordinates calculation can be used to produce source data for a Monte Carlo calculation. Several techniques for producing source detail were investigated. Results of calculations on the NERVA system are compared and limitations and advantages of the coupling techniques discussed.

  19. Adaptive sequential Monte Carlo for multiple changepoint analysis

    DOE PAGES

    Heard, Nicholas A.; Turcotte, Melissa J. M.

    2016-05-21

    Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.

  20. Adaptive sequential Monte Carlo for multiple changepoint analysis

    SciTech Connect

    Heard, Nicholas A.; Turcotte, Melissa J. M.

    2016-05-21

    Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.
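
    A stripped-down illustration of the adaptive ingredient only (effective-sample-size monitoring with re-balancing of the particle count) on a toy target with a single changepoint is given below; this is not the authors' changepoint sampler, and the jitter move is a crude stand-in for a proper rejuvenation step.

```python
import numpy as np

rng = np.random.default_rng(8)

def log_target(x, t):
    mu = 0.0 if t < 50 else 3.0        # the target shifts at t = 50
    return -0.5 * (x - mu) ** 2

particles = rng.normal(0.0, 1.0, 500)
logw = np.zeros(particles.size)
for t in range(100):
    # Incremental importance weights as the target evolves.
    logw += log_target(particles, t) - log_target(particles, max(t - 1, 0))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)         # effective sample size
    if ess < 0.5 * particles.size:     # weight degeneracy detected
        # Adaptive twist: grow the population when the target has
        # changed a lot, keep it the same size otherwise.
        n_new = 1000 if ess < 0.2 * particles.size else particles.size
        idx = rng.choice(particles.size, size=n_new, p=w)
        particles = particles[idx] + rng.normal(0.0, 0.2, n_new)
        logw = np.zeros(n_new)
print(particles.size, "particles; posterior mean ~",
      round(particles.mean(), 2))
```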

  1. Cluster Analysis as a Method of Recovering Types of Intraindividual Growth Trajectories: A Monte Carlo Study.

    ERIC Educational Resources Information Center

    Dumenci, Levent; Windle, Michael

    2001-01-01

    Used Monte Carlo methods to evaluate the adequacy of cluster analysis to recover group membership based on simulated latent growth curve (LGC) models. Cluster analysis failed to recover growth subtypes adequately when the difference between growth curves was shape only. Discusses circumstances under which it was more successful. (SLD)

  2. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    ERIC Educational Resources Information Center

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  3. 3D VMAT Verification Based on Monte Carlo Log File Simulation with Experimental Feedback from Film Dosimetry

    PubMed Central

    Barbeiro, A. R.; Ureba, A.; Baeza, J. A.; Linares, R.; Perucha, M.; Jiménez-Ortega, E.; Velázquez, S.; Mateos, J. C.

    2016-01-01

    A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems of complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances, able to take into account the entrance fluence and the 3D dose distribution. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head and neck clinical cases, previously planned with Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test the operational feasibility of the proposed model. The proper operation of the feedback procedure was proved through the achieved high agreement between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatch between calculated control points and the detection grid in the verification process were discussed. Besides the effect of the dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, the importance of the detection density level and its location in the VMAT-specific phantom for obtaining a more reliable DVH in the patient CT was discussed. The proposed model also showed enough robustness and efficiency to be considered as a pre

  4. A 3D Monte Carlo model of radiation affecting cells, and its application to neuronal cells and GCR irradiation

    NASA Astrophysics Data System (ADS)

    Ponomarev, Artem; Sundaresan, Alamelu; Kim, Angela; Vazquez, Marcelo E.; Guida, Peter; Kim, Myung-Hee; Cucinotta, Francis A.

    A 3D Monte Carlo model of radiation transport in matter is applied to study the effect of heavy ion radiation on human neuronal cells. Central nervous system effects, including cognitive impairment, are suspected from the heavy ion component of galactic cosmic radiation (GCR) during space missions. The model can count, for instance, the number of direct hits from ions, which have the greatest effect on the cells. For comparison, the remote hits, which are received through δ-rays from a projectile traversing space outside the volume of the cell, are also simulated and their contribution is estimated. To simulate tissue effects of irradiation, cellular matrices of neuronal cells, derived from confocal microscopy, were simulated in our model. To produce this realistic model of brain tissue, image segmentation was used to identify cells in the images of cell cultures. The segmented cells were inserted pixel by pixel into the modeled physical space, which represents a volume of interacting cells with periodic boundary conditions (PBCs). PBCs were used to extrapolate the model results to macroscopic tissue structures. Specific spatial patterns of cell apoptosis are expected from GCR, as heavy ions produce concentrated damage along their trajectories. The apoptotic cell patterns were modeled based on the action cross sections for apoptosis, which were estimated from the available experimental data. The cell patterns were characterized with an autocorrelation function, whose values are higher for non-random cell patterns, and the values of the autocorrelation function were compared for X-ray and Fe ion irradiations. The autocorrelation function indicates the directionality effects present in apoptotic neuronal cells from GCR.
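
    The autocorrelation comparison can be sketched as follows: a random (X-ray-like) hit pattern versus a track-like (heavy-ion-like) pattern on a periodic grid, with the autocorrelation computed by FFT so that periodic boundary conditions are respected. The grid size, hit counts, and track geometry below are illustrative, not the model's actual parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    grid = np.zeros((128, 128))

    # Random (X-ray-like) apoptosis pattern.
    random_hits = rng.integers(0, 128, (200, 2))
    grid_random = grid.copy()
    grid_random[random_hits[:, 0], random_hits[:, 1]] = 1

    # Track-like (heavy-ion-like) pattern: apoptosis concentrated along a line.
    grid_track = grid.copy()
    row = rng.integers(0, 125)
    grid_track[row:row + 3, 20:120] = 1

    def autocorr(a):
        """Normalized spatial autocorrelation via FFT (periodic boundaries)."""
        f = np.fft.fft2(a - a.mean())
        ac = np.fft.ifft2(f * np.conj(f)).real
        return ac / ac.flat[0]

    for name, g in [("random", grid_random), ("track", grid_track)]:
        print(name, "autocorrelation at lag (0,1):",
              round(autocorr(g)[0, 1], 3))
    ```

    The track-like pattern yields a markedly higher short-lag autocorrelation, which is the directionality signature described in the abstract.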

  5. 3D VMAT Verification Based on Monte Carlo Log File Simulation with Experimental Feedback from Film Dosimetry.

    PubMed

    Barbeiro, A R; Ureba, A; Baeza, J A; Linares, R; Perucha, M; Jiménez-Ortega, E; Velázquez, S; Mateos, J C; Leal, A

    2016-01-01

    A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems of complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances, so that both the entrance fluence and the 3D dose distribution can be taken into account. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head and neck clinical cases, previously planned with Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test the operational feasibility of the proposed model. The proper operation of the feedback procedure was proved through the achieved high agreement between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatching between calculated control points and the detection grid in the verification process were discussed. Besides the effect of the dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, the importance of the detection density level and its location in a VMAT-specific phantom for obtaining a more reliable DVH on the patient CT was also discussed. The proposed model also showed enough robustness and efficiency to be considered as a pre

  6. Taxometrics, Polytomous Constructs, and the Comparison Curve Fit Index: A Monte Carlo Analysis

    ERIC Educational Resources Information Center

    Walters, Glenn D.; McGrath, Robert E.; Knight, Raymond A.

    2010-01-01

    The taxometric method effectively distinguishes between dimensional (1-class) and taxonic (2-class) latent structure, but there is virtually no information on how it responds to polytomous (3-class) latent structure. A Monte Carlo analysis showed that the mean comparison curve fit index (CCFI; Ruscio, Haslam, & Ruscio, 2006) obtained with 3…

  7. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility of Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  8. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
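
    One recurring question in the TP, how many runs are needed to verify a requirement at a given confidence, has a simple closed form in the zero-failure case. The sketch below uses the generic success-run relation, which is not necessarily the TP's exact derivation (the TP also treats consumer risk and rare events, which this snippet ignores).

    ```python
    import math

    def runs_required(reliability: float, confidence: float) -> int:
        """Smallest n such that n failure-free runs demonstrate `reliability`
        at `confidence` (success-run relation): reliability**n <= 1 - confidence.
        Requires 0 < reliability < 1 and 0 < confidence < 1."""
        return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

    # e.g. verify a 99.7% requirement at 90% confidence
    print(runs_required(0.997, 0.90))   # -> 767
    ```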

  9. Performance and accuracy of criticality calculations performed using WARP – A framework for continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs

    DOE PAGES

    Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola; ...

    2017-05-01

    In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newest hardware considered, GPU platforms running WARP are between 0.8 and 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% and 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.

  10. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    NASA Astrophysics Data System (ADS)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the factors that influence power grid investment capacity, an investment capacity analysis model is built with depreciation cost, sales price and quantity, net profit, financing, and GDP of the secondary industry as variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.
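
    A hedged sketch of the workflow described above: posit a probability distribution per influence factor (standing in for the distributions validated by the Kolmogorov-Smirnov tests) and propagate them through a toy capacity model by Monte Carlo. All distributions and model coefficients below are illustrative placeholders, not the paper's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 100_000

    # Illustrative input distributions (in practice chosen/validated by K-S tests).
    depreciation = rng.normal(12.0, 1.5, N)     # cost units
    net_profit   = rng.normal(30.0, 5.0, N)
    financing    = rng.lognormal(np.log(20.0), 0.25, N)
    gdp_growth   = rng.triangular(0.03, 0.05, 0.08, N)

    # Toy linear capacity model (coefficients are placeholders, not the paper's).
    capacity = (0.8 * net_profit + 0.6 * financing - 0.4 * depreciation
                + 200.0 * gdp_growth)

    lo, hi = np.percentile(capacity, [5, 95])
    print(f"mean = {capacity.mean():.1f}, 90% interval = [{lo:.1f}, {hi:.1f}]")
    ```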

  11. Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission

    NASA Technical Reports Server (NTRS)

    Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  12. A numerical analysis method for evaluating rod lenses using the Monte Carlo method.

    PubMed

    Yoshida, Shuhei; Horiuchi, Shuma; Ushiyama, Zenta; Yamamoto, Manabu

    2010-12-20

    We propose a numerical analysis method for evaluating GRIN rod lenses using the Monte Carlo method; the modulation transfer function (MTF) values it yields closely match actual measurements, unlike those from the conventional computational method. Experimentally, the MTF is measured using a square-wave chart and is then calculated from the distribution of output intensity on the chart. In contrast, the conventional computational method evaluates the MTF from a spot diagram produced by an incident point light source, and its results differ greatly from the experimental ones. We therefore developed an evaluation method that mimics the experimental system, based on the Monte Carlo method, and verified that it matches the experimental results more closely than the conventional method.

  13. Monte Carlo analysis of uncertainty propagation in a stratospheric model. 2: Uncertainties due to reaction rates

    NASA Technical Reports Server (NTRS)

    Stolarski, R. S.; Butler, D. M.; Rundel, R. D.

    1977-01-01

    A concise stratospheric model was used in a Monte-Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte-Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1 sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2 sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
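
    Multiplicative "factor" uncertainties of the kind quoted here (1.69 on the high side, 1.80 on the low side) arise naturally when reaction rates are varied as lognormal multipliers. A toy sketch, with a deliberately simplified ozone-response model standing in for the stratospheric chemistry; the sigma factors, sensitivities, and number of influential reactions are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_cases, n_rates = 2000, 55

    # Lognormal rate multipliers: 1-sigma uncertainty factor of 1.3 per reaction.
    multipliers = rng.lognormal(0.0, np.log(1.3), (n_cases, n_rates))

    # Toy response: the perturbation depends strongly on ~10 of the 55 rates.
    sensitivity = np.zeros(n_rates)
    sensitivity[:10] = rng.uniform(0.2, 0.6, 10)
    ozone_change = -1.0 * np.prod(multipliers ** sensitivity, axis=1)  # % change

    # Report multiplicative 1-sigma factors about the median, as in the abstract.
    med, lo, hi = np.percentile(ozone_change, [50, 15.87, 84.13])
    print(f"high-side factor {lo / med:.2f}, low-side factor {med / hi:.2f}")
    ```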

  14. Monte Carlo Analysis as a Trajectory Design Driver for the Transiting Exoplanet Survey Satellite (TESS) Mission

    NASA Technical Reports Server (NTRS)

    Nickel, Craig; Parker, Joel; Dichmann, Don; Lebois, Ryan; Lutz, Stephen

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  15. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.

    2004-11-15

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binned Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically.
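
    A hedged sketch of the ARMA route to the dominance ratio: synthesize a cycle-to-cycle source tally whose error decays geometrically (mimicking the linear Markov error propagation), fit a low-order ARMA model, and read the magnitude of the dominant autoregressive root as the estimate. The order (2,1), series length, and true value below are illustrative, not the paper's configuration.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(5)
    true_dr = 0.95                       # dominance ratio used to synthesize data

    # Synthetic binned-source tally: AR(1)-like error propagation through cycles.
    n_cycles = 5000
    x = np.zeros(n_cycles)
    for k in range(1, n_cycles):
        x[k] = true_dr * x[k - 1] + rng.normal()

    # Fit ARMA(p, p-1) with p = 2 and read off the dominant AR root.
    res = ARIMA(x, order=(2, 0, 1), trend="n").fit()
    a1, a2 = res.arparams
    roots = np.roots([1.0, -a1, -a2])    # characteristic roots of the AR part
    print("estimated dominance ratio:", round(max(abs(roots)), 3))
    ```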

  16. MC21 analysis of the nuclear energy agency Monte Carlo performance benchmark problem

    SciTech Connect

    Kelly, D. J.; Sutton, T. M.; Wilson, S. C.

    2012-07-01

    Due to the steadily decreasing cost and wider availability of large scale computing platforms, there is growing interest in the prospects for the use of Monte Carlo for reactor design calculations that are currently performed using few-group diffusion theory or other low-order methods. To facilitate the monitoring of the progress being made toward the goal of practical full-core reactor design calculations using Monte Carlo, a performance benchmark has been developed and made available through the Nuclear Energy Agency. A first analysis of this benchmark using the MC21 Monte Carlo code was reported in 2010, and several practical difficulties were highlighted. In this paper, a newer version of MC21 that addresses some of these difficulties has been applied to the benchmark. In particular, the confidence-interval-determination method has been improved to eliminate source correlation bias, and a fission-source-weighting method has been implemented to provide a more uniform distribution of statistical uncertainties. In addition, the Forward-Weighted, Consistent-Adjoint-Driven Importance Sampling methodology has been applied to the benchmark problem. Results of several analyses using these methods are presented, as well as results from a very large calculation with statistical uncertainties that approach what is needed for design applications. (authors)

  17. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images.

    PubMed

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2013-11-21

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  18. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  19. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images

    NASA Astrophysics Data System (ADS)

    Botta, F.; Mairani, A.; Hobbs, R. F.; Vergara Gil, A.; Pacilio, M.; Parodi, K.; Cremonesi, M.; Coca Pérez, M. A.; Di Dia, A.; Ferrari, M.; Guerriero, F.; Battistoni, G.; Pedroli, G.; Paganelli, G.; Torres Aroche, L. A.; Sgouros, G.

    2013-11-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  20. Mission Command Analysis Using Monte Carlo Tree Search

    DTIC Science & Technology

    2013-06-14

    Acronyms: NPS, Naval Postgraduate School; TRAC, Training and Doctrine Command Analysis Center; TRAC-MRO, TRAC Methods and Research Office. Background: In the fall of 2012, the Training and Doctrine Command Analysis Center (TRAC) Methods and Research Office (TRAC-MRO) sponsored the Training and… Sponsor: Mr. Paul Works, TRAC Research Director, MRO. Project lead: MAJ Chris Marks (TRAC-MTRY). Supporting analyst: LTC John Alt (TRAC-MTRY).

  1. Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories

    NASA Technical Reports Server (NTRS)

    Olds, John; Way, David

    2001-01-01

    Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approx. every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results. If the results are unacceptable, the current practice is to use an iterative, trial

  2. Uncertainty optimization applied to the Monte Carlo analysis of planetary entry trajectories

    NASA Astrophysics Data System (ADS)

    Way, David Wesley

    2001-10-01

    Future robotic missions to Mars, as well as any human missions, will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Planning for these missions will depend heavily on Monte Carlo analyses to evaluate active guidance algorithms, assess the impact of off-nominal conditions, and account for uncertainty. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast output statistics. An improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively. This thesis proposes a methodology to optimize the uncertainties in the Monte Carlo analysis of spacecraft landing footprints. A metamodel is used to first write polynomial expressions for the size of the landing footprint as functions of the independent uncertainty extrema. The coefficients of the metamodel are determined by performing experiments. The metamodel is then used in a constrained optimization procedure to minimize a cost-tolerance function. First, a two-dimensional proof-of-concept problem was used to evaluate the feasibility of this optimization method. Next, the optimization method was further demonstrated on the Mars Surveyor Program 2001 Lander. The purpose of this example was to demonstrate that the methodology developed during the proof-of-concept could be scaled to solve larger, more complicated, "real world" problems. This research has shown that it is possible to control the size of the landing footprint and establish tolerances for mission uncertainties. A simplified metamodel was developed, which is enabling for realistic problems with more than just a few uncertainties. A confidence interval on
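
    A hedged sketch of the reverse workflow just described: fit a polynomial metamodel of footprint size versus the uncertainty extrema from a small designed experiment, then minimize a cost-tolerance function subject to a footprint constraint. The two dispersion variables, the 1/d cost law, and the 10 km footprint limit are illustrative stand-ins, not the thesis's actual models.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Designed experiments: footprint size (km) vs. two uncertainty extrema
    # (say, density dispersion d1 and wind dispersion d2). Data are synthetic.
    d1, d2 = np.meshgrid(np.linspace(0.5, 3, 6), np.linspace(0.5, 3, 6))
    footprint = 2.0 + 1.5 * d1 + 0.8 * d2 + 0.4 * d1 * d2      # "experiments"

    # Fit a polynomial metamodel f(d1, d2) by least squares.
    X = np.column_stack([np.ones(d1.size), d1.ravel(), d2.ravel(),
                         (d1 * d2).ravel()])
    coef, *_ = np.linalg.lstsq(X, footprint.ravel(), rcond=None)
    f = lambda d: coef @ np.array([1.0, d[0], d[1], d[0] * d[1]])

    # Cost-tolerance function: tighter dispersions cost more (illustrative).
    cost = lambda d: 1.0 / d[0] + 1.0 / d[1]

    # Find the largest allowable dispersions subject to footprint <= 10 km.
    res = minimize(cost, x0=[1.0, 1.0], bounds=[(0.5, 3), (0.5, 3)],
                   constraints=[{"type": "ineq", "fun": lambda d: 10.0 - f(d)}])
    print("allowable extrema:", np.round(res.x, 2),
          "footprint:", round(float(f(res.x)), 2))
    ```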

  3. Microdosimetry of alpha particles for simple and 3D voxelised geometries using MCNPX and Geant4 Monte Carlo codes.

    PubMed

    Elbast, M; Saudo, A; Franck, D; Petitot, F; Desbrée, A

    2012-07-01

    Microdosimetry using Monte Carlo simulation is a suitable technique to describe the stochastic nature of energy deposition by alpha particles at the cellular level. Because of its short range, the energy imparted by this particle to the targets is highly non-uniform. Thus, to achieve accurate dosimetric results, the modelling of the geometry should be as realistic as possible. The objectives of the present study were to validate the use of the MCNPX and Geant4 Monte Carlo codes for microdosimetric studies using simple and three-dimensional voxelised geometries and to study their limit of validity in the latter case. To that aim, the specific energy (z) deposited in the cell nucleus, the single-hit density of specific energy f1(z) and the mean specific energy were calculated. Results show a good agreement when compared with the literature using simple geometry. The maximum percentage difference found is <6%. For the voxelised phantom, the study of the voxel size highlighted that the shape of the curve f1(z) obtained with MCNPX for <1 µm voxel size presents a significant difference from the shape for non-voxelised geometry. When using Geant4, little difference is observed whatever the voxel size. Below 1 µm, the use of Geant4 is required. However, the calculation time is 10 times higher with Geant4 than with the MCNPX code under the same conditions.

  4. SU-C-201-06: Utility of Quantitative 3D SPECT/CT Imaging in Patient Specific Internal Dosimetry of 153-Samarium with GATE Monte Carlo Package

    SciTech Connect

    Fallahpoor, M; Abbasi, M; Sen, A; Parach, A; Kalantari, F

    2015-06-15

    Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: 1) generating quantitative 3D images of the radionuclide distribution and attenuation coefficients and 2) using a reliable method for dose calculation based on the activity and attenuation maps. In this research, internal dosimetry for 153-Samarium (153-Sm) was performed using SPECT-CT images coupled with the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (gamma: 103 keV; beta: 0.81 MeV). A SPECT/CT scan was performed with the Siemens Symbia T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image-degrading factors, including body attenuation, Compton scattering, and collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to an attenuation map. Organ borders were defined by itk-SNAP toolkit segmentation on the CT image. GATE was then used for the internal dose calculation. The Specific Absorbed Fractions (SAFs) and S-values were reported following the MIRD schema. Results: The largest SAFs and S-values are in osseous organs, as expected. The S-value for the lung is the highest after the spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT-CT images and Monte Carlo for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning

  5. A Monte Carlo based spent fuel analysis safeguards strategy assessment

    SciTech Connect

    Fensin, Michael L; Tobin, Stephen J; Swinhoe, Martyn T; Menlove, Howard O; Sandoval, Nathan P

    2009-01-01

    assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling time dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the results of the assessment will yield an adequate spent fuel analysis strategy knowledge that will help the down-select process for other reactor types.

  6. Monte Carlo Neutronics and Thermal Hydraulics Analysis of Reactor Cores with Multilevel Grids

    NASA Astrophysics Data System (ADS)

    Bernnat, W.; Mattes, M.; Guilliard, N.; Lapins, J.; Zwermann, W.; Pasichnyk, I.; Velkov, K.

    2014-06-01

    Power reactors are composed of assemblies with fuel pin lattices or other repeated structures with several grid levels, which can be modeled in detail by Monte Carlo neutronics codes such as MCNP6 using corresponding lattice options, even for large cores. Except for fresh cores at beginning of life, there is a varying material distribution due to burnup in the different fuel pins. Additionally, for power states the fuel and moderator temperatures and moderator densities vary according to the power distribution and cooling conditions. Therefore, a coupling of the neutronics code with a thermal hydraulics code is necessary. Depending on the level of detail of the analysis, a very large number of cells with different materials and temperatures must be regarded. The assignment of different material properties to all elements of a multilevel grid is very elaborate and may exceed program limits if the standard input procedure is used. Therefore, an internal assignment is used which overrides uniform input parameters. The temperature dependency of continuous energy cross sections, probability tables for the unresolved resonance region and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. The method is applied with MCNP6 and proven for several full core reactor models. For the coupling of MCNP6 with thermal hydraulics appropriate interfaces were developed for the GRS system code ATHLET for liquid coolant and the IKE thermal hydraulics code ATTICA-3D for gaseous coolant. Examples will be shown for different applications for PWRs with square and hexagonal lattices, fast reactors (SFR) with hexagonal lattices and HTRs with pebble bed and prismatic lattices.

  7. The influence of the IMRT QA set-up error on the 2D and 3D gamma evaluation method as obtained by using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kim, Kyeong-Hyeon; Kim, Dong-Su; Kim, Tae-Ho; Kang, Seong-Hee; Cho, Min-Seok; Suh, Tae Suk

    2015-11-01

    The phantom-alignment error is one of the factors affecting delivery quality assurance (QA) accuracy in intensity-modulated radiation therapy (IMRT). Accordingly, a possibility of inadequate use of spatial information in gamma evaluation may exist for patient-specific IMRT QA. The influence of the phantom-alignment error on gamma evaluation can be demonstrated experimentally by using the gamma passing rate and the gamma value. However, such experimental methods have a limitation regarding the intrinsic verification of the influence of the phantom set-up error because experimentally measuring the phantom-alignment error accurately is impossible. To overcome this limitation, we aimed to verify the effect of the phantom set-up error within the gamma evaluation formula by using a Monte Carlo simulation. Artificial phantom set-up errors were simulated, and the concept of the true point (TP) was used to represent the actual coordinates of the measurement point for the mathematical modeling of these effects on the gamma. Using dose distributions acquired from the Monte Carlo simulation, we performed gamma evaluations in 2D and 3D. The results of the gamma evaluations and the dose difference at the TP were classified to assess how well the dose at the TP was reflected. The 2D and the 3D gamma errors were defined by comparing gamma values between the case of the imposed phantom set-up error and the TP in order to investigate the effect of the set-up error on the gamma value. According to the results for gamma errors, the 3D gamma evaluation reflected the dose at the TP better than the 2D one. Moreover, the gamma passing rates were higher for 3D than for 2D, as is widely known. Thus, the 3D gamma evaluation can increase the precision of patient-specific IMRT QA by applying stringent acceptance criteria and setting a reasonable action level for the 3D gamma passing rate.
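
    For reference, the gamma quantity being computed has the standard form gamma(r) = min over r' of sqrt(|r' - r|^2/DTA^2 + (De(r') - Dr(r))^2/dD^2), with DTA the distance-to-agreement and dD the dose criterion. A brute-force global 2D implementation follows; the dose grids, the 2%/2 mm criteria, and the noise level are illustrative, not this study's data.

    ```python
    import numpy as np

    def gamma_2d(ref, ev, spacing=1.0, dd=0.02, dta=2.0):
        """Brute-force global 2D gamma. dd: dose criterion (fraction of max
        reference dose); dta: distance-to-agreement, same units as spacing (mm)."""
        ny, nx = ref.shape
        yy, xx = np.mgrid[0:ny, 0:nx] * spacing
        dd_abs = dd * ref.max()
        gamma = np.empty_like(ref, dtype=float)
        for iy in range(ny):
            for ix in range(nx):
                dist2 = (yy - iy * spacing) ** 2 + (xx - ix * spacing) ** 2
                dose2 = (ev - ref[iy, ix]) ** 2
                gamma[iy, ix] = np.sqrt(np.min(dist2 / dta**2
                                               + dose2 / dd_abs**2))
        return gamma

    rng = np.random.default_rng(6)
    ref = np.exp(-((np.linspace(-3, 3, 40)[None, :]) ** 2
                   + (np.linspace(-3, 3, 40)[:, None]) ** 2))
    ev = ref * (1 + rng.normal(0, 0.01, ref.shape))   # 1% noisy "measurement"
    g = gamma_2d(ref, ev)
    print("gamma passing rate:", round(100 * np.mean(g <= 1), 1), "%")
    ```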

  8. Verification and validation of a parallel 3D direct simulation Monte Carlo solver for atmospheric entry applications

    NASA Astrophysics Data System (ADS)

    Nizenkov, Paul; Noeding, Peter; Konopka, Martin; Fasoulas, Stefanos

    2017-03-01

    The in-house direct simulation Monte Carlo solver PICLas, which enables parallel, three-dimensional simulations of rarefied gas flows, is verified and validated. Theoretical aspects of the method and the employed schemes are briefly discussed. Considered cases include simple reservoir simulations and complex re-entry geometries, which were selected from literature and simulated with PICLas. First, the chemistry module is verified using simple numerical and analytical solutions. Second, simulation results of the rarefied gas flow around a 70° blunted-cone, the REX Free-Flyer as well as multiple points of the re-entry trajectory of the Orion capsule are presented in terms of drag and heat flux. A comparison to experimental measurements as well as other numerical results shows an excellent agreement across the different simulation cases. An outlook on future code development and applications is given.

  9. Verification and validation of a parallel 3D direct simulation Monte Carlo solver for atmospheric entry applications

    NASA Astrophysics Data System (ADS)

    Nizenkov, Paul; Noeding, Peter; Konopka, Martin; Fasoulas, Stefanos

    2016-07-01

    The in-house direct simulation Monte Carlo solver PICLas, which enables parallel, three-dimensional simulations of rarefied gas flows, is verified and validated. Theoretical aspects of the method and the employed schemes are briefly discussed. Considered cases include simple reservoir simulations and complex re-entry geometries, which were selected from literature and simulated with PICLas. First, the chemistry module is verified using simple numerical and analytical solutions. Second, simulation results of the rarefied gas flow around a 70° blunted-cone, the REX Free-Flyer as well as multiple points of the re-entry trajectory of the Orion capsule are presented in terms of drag and heat flux. A comparison to experimental measurements as well as other numerical results shows an excellent agreement across the different simulation cases. An outlook on future code development and applications is given.

  10. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for 99mTc-hynic-Tyr3-octreotide Imaging

    PubMed Central

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of 99mTc-hydrazinonicotinamide (hynic)-Tyr3-octreotide as a SPECT radiotracer. 99mTc patient-specific S values and the absorbed doses were calculated with GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of 99mhynic-Tyr3-octreotide. The patient-specific S values calculated by GATE Monte Carlo code and the corresponding S values obtained by MIRDOSE program differed within 4.3% on average for self-irradiation, and differed within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by GATE code and MIRDOSE program for all patients was reasonably good (percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used since it provides more reliable dosimetric results. PMID:27134562

  11. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for (99m)Tc-hynic-Tyr(3)-octreotide Imaging.

    PubMed

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of (99m)Tc-hydrazinonicotinamide (hynic)-Tyr(3)-octreotide as a SPECT radiotracer. (99m)Tc patient-specific S values and the absorbed doses were calculated with GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of (99m)hynic-Tyr(3)-octreotide. The patient-specific S values calculated by GATE Monte Carlo code and the corresponding S values obtained by MIRDOSE program differed within 4.3% on average for self-irradiation, and differed within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by GATE code and MIRDOSE program for all patients was reasonably good (percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used since it provides more reliable dosimetric results.

  12. Monte-Carlo Analysis of the Flavour Changing Neutral Current b → sγ at BaBar

    SciTech Connect

    Smith, D.

    2001-09-01

    The main theme of this thesis is a Monte-Carlo analysis of the rare Flavour Changing Neutral Current (FCNC) decay b→sγ. The analysis develops techniques that could be applied to real data, to discriminate between signal and background events in order to make a measurement of the branching ratio of this rare decay using the BaBar detector. Also included in this thesis is a description of the BaBar detector and the work I have undertaken in the development of the electronic data acquisition system for the Electromagnetic calorimeter (EMC), a subsystem of the BaBar detector.

  13. 3D Direct Simulation Monte Carlo Modelling of the Inner Gas Coma of Comet 67P/Churyumov-Gerasimenko: A Parameter Study

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Su, C. C.; Marschall, R.; Wu, J. S.; Rubin, M.; Lai, I. L.; Ip, W. H.; Keller, H. U.; Knollenberg, J.; Kührt, E.; Skorov, Y. V.; Thomas, N.

    2016-03-01

    Direct Simulation Monte Carlo (DSMC) is a powerful numerical method to study rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, the investigation of the parameter space in simulations can be time consuming since 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified to what extent modifications of several parameters influence the 3D flow and gas temperature fields and have attempted to establish the reliability of inferences about the initial conditions from in situ and remote sensing measurements. A large number of DSMC runs have been completed with varying input parameters. In this work, we present the simulation results and draw conclusions about the sensitivity of solutions to certain inputs. It is found that among cases of water outgassing, the surface production rate distribution is the most influential variable for the flow field.

  14. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    NASA Technical Reports Server (NTRS)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the calculated 46% 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation
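
    The Latin Hypercube step can be reproduced in outline with scipy's QMC module; the number of uncertain parameters and the mapping of the stratified uniforms to lognormal rate multipliers below are assumptions for illustration, not the GSFC model's actual inputs.

    ```python
    import numpy as np
    from scipy.stats import qmc, norm

    n_runs, n_params = 419, 30            # 30 uncertain inputs, for illustration

    sampler = qmc.LatinHypercube(d=n_params, seed=7)
    u = sampler.random(n=n_runs)          # stratified uniforms in [0, 1)^d

    # Map each column to a lognormal rate multiplier with its own 1-sigma factor.
    sigma_factors = np.full(n_params, 1.3)
    multipliers = np.exp(norm.ppf(u) * np.log(sigma_factors))

    print(multipliers.shape, multipliers.mean().round(2))
    ```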

  15. Nuclear spectroscopy for in situ soil elemental analysis: Monte Carlo simulations

    SciTech Connect

    Wielopolski L.; Doron, O.

    2012-07-01

    We developed a model to simulate a novel inelastic neutron scattering (INS) system for in situ non-destructive analysis of soil using the standard Monte Carlo Neutron Photon (MCNP5a) transport code. The volumes from which 90%, 95%, and 99% of the total signal are detected were estimated to be 0.23 m³, 0.37 m³, and 0.79 m³, respectively. Similarly, we assessed the instrument's sampling footprint and depths. In addition we discuss the impact of the carbon's depth distribution on sampled depth.

  16. Full-Band Monte Carlo Analysis of Hot-Carrier Light Emission in GaAs

    NASA Astrophysics Data System (ADS)

    Ferretti, I.; Abramo, A.; Brunetti, R.; Jacoboni, C.

    1997-11-01

    A computational analysis of light emission from hot carriers in GaAs due to direct intraband conduction-conduction (c-c) transitions is presented. The emission rates have been evaluated by means of a Full-Band Monte-Carlo simulator (FBMC). Results have been obtained for the emission rate as a function of the photon energy, for the emitted and absorbed light polarization along and perpendicular to the electric field direction. Comparison has been made with available experimental data in MESFETs.

  17. Markov chain Monte Carlo linkage analysis of a complex qualitative phenotype.

    PubMed

    Hinrichs, A; Lin, J H; Reich, T; Bierut, L; Suarez, B K

    1999-01-01

    We tested a new computer program, LOKI, that implements a reversible jump Markov chain Monte Carlo (MCMC) technique for segregation and linkage analysis. Our objective was to determine whether this software, designed for use with continuously distributed phenotypes, has any efficacy when applied to the discrete disease states of the simulated Mordor data from GAW Problem 1. Although we were able to identify the genomic location for two of the three quantitative trait loci by repeated application of the software, the MCMC sampler experienced significant mixing problems, indicating that the method, as currently formulated in LOKI, was not suitable for the discrete phenotypes in this data set.

  18. Analysis of light propagation in highly scattering media by path-length-assigned Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ishii, Katsuhiro; Nishidate, Izumi; Iwai, Toshiaki

    2014-05-01

    Optical propagation in highly scattering media is analyzed numerically for the case in which light is normally incident on the surface and re-emerges backward from the same point. This situation corresponds to practical light scattering setups, such as in optical coherence tomography. The simulation uses the path-length-assigned Monte Carlo method based on an ellipsoidal algorithm. The spatial distribution of the scattered light is determined and the dependence of its width and penetration depth on the path length is found. The backscattered light is classified into three types, dominated by ballistic, snake, and diffuse photons, respectively.
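
    A hedged sketch of path-length-resolved backscattering: isotropic scattering in a half-space, photons launched normally and collected near the entry point, with the total path length recorded at exit. The paper's ellipsoidal path-length-assignment algorithm is not reproduced here; the coefficients, collection radius, and kill threshold are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    mu_s, mu_a = 10.0, 0.01        # scattering/absorption coefficients (1/mm)
    n_photons, collect_radius = 5_000, 0.5    # photons, collection radius (mm)

    path_lengths = []
    for _ in range(n_photons):
        pos = np.zeros(3)
        direc = np.array([0.0, 0.0, 1.0])     # normal incidence into z > 0
        total = 0.0
        while True:
            step = rng.exponential(1.0 / mu_s)
            pos = pos + step * direc
            total += step
            if pos[2] < 0.0:                  # re-emerged through the surface
                if np.hypot(pos[0], pos[1]) < collect_radius:
                    path_lengths.append(total)
                break
            if rng.random() < mu_a / (mu_a + mu_s) or total > 50.0:
                break                         # absorbed (or killed as too long)
            # Isotropic re-scattering direction.
            cos_t = rng.uniform(-1, 1)
            phi = rng.uniform(0, 2 * np.pi)
            sin_t = np.sqrt(1 - cos_t ** 2)
            direc = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

    p = np.array(path_lengths)
    print("collected:", p.size, " median path length (mm):",
          round(float(np.median(p)), 2))
    ```

    Binning the collected photons by path length (short, intermediate, long) gives the ballistic/snake/diffuse classification the abstract refers to.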

  19. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  20. Monte Carlo Benchmark

    SciTech Connect

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  1. Wavelet-Monte Carlo Hybrid System for HLW Nuclide Migration Modeling and Sensitivity and Uncertainty Analysis

    SciTech Connect

    Nasif, Hesham; Neyama, Atsushi

    2003-02-26

    This paper presents results of an uncertainty and sensitivity analysis for the performance of the different barriers of high level radioactive waste repositories. SUA is a tool to perform uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System model (WIRS), which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through the repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the values of the maximum release rate in the form of time series, together with the values of the input variables for a set of different simulations (runs), which are realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin Hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influence on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), the diffusion depth, and the water flow rate in the excavation-disturbed zone (EDZ).
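
    A hedged sketch of distribution-free bounds of the kind SUA computes on a Monte Carlo sample of the maximum release rate; the Dvoretzky-Kiefer-Wolfowitz inequality supplies the Kolmogorov-style band on the empirical CDF, and the sample itself is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    release = rng.lognormal(-2.0, 0.8, 500)   # synthetic max-release-rate sample
    n, alpha = release.size, 0.05

    # Chebyshev bound: P(|X - mean| >= k*sd) <= 1/k^2; pick k for 95% coverage.
    k = np.sqrt(1.0 / alpha)
    m, s = release.mean(), release.std(ddof=1)
    print(f"Chebyshev 95% interval: [{max(0.0, m - k * s):.4f}, {m + k * s:.4f}]")

    # Kolmogorov (DKW) band on the empirical CDF: eps = sqrt(ln(2/alpha)/(2n)).
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))
    print(f"DKW band half-width on the CDF: {eps:.3f}")
    ```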

  2. Monte Carlo analysis of uncertainties in the Netherlands greenhouse gas emission inventory for 1990-2004

    NASA Astrophysics Data System (ADS)

    Ramírez, Andrea; de Keizer, Corry; Van der Sluijs, Jeroen P.; Olivier, Jos; Brandes, Laurens

    This paper presents an assessment of the value added of a Monte Carlo analysis of the uncertainties in the Netherlands inventory of greenhouse gases over a Tier 1 analysis. It also examines which parameters contributed the most to the total emission uncertainty and identifies areas of high priority for the further improvement of the accuracy and quality of the inventory. The Monte Carlo analysis resulted in an uncertainty range in total GHG emissions of 4.1% in 2004 and 5.4% in 1990 (with LUCF), and of 3.9% in 2004 and 5.3% in 1990 for GHG emissions without LUCF. Uncertainty in the trend was estimated at 4.5%. The values are of the same order of magnitude as those estimated in the Tier 1 analysis. The results show that accounting for correlation among parameters is important; for the Netherlands inventory it has a larger impact on the uncertainty in the trend than on the uncertainty in the total GHG emissions. The main contributors to overall uncertainty are found to be related to N2O emissions from agricultural soils, the N2O implied emission factors of nitric acid production, CH4 from managed solid waste disposal on land, and the implied emission factor of CH4 from manure management from cattle.

  3. Monte Carlo analysis of energy dependent anisotropy of bremsstrahlung x-ray spectra

    SciTech Connect

    Kakonyi, Robert; Erdelyi, Miklos; Szabo, Gabor

    2009-09-15

    The energy-resolved emission-angle dependence of x-ray spectra was analyzed with the MCNPX (Monte Carlo N-Particle eXtended) simulator. It was shown that the spectral photon flux had a maximum at a well-defined emission angle due to the anisotropy of the bremsstrahlung process. The higher the relative photon energy, the smaller the emission angle at which the maximum occurred. The trends predicted by the Monte Carlo simulations were experimentally verified. The Monte Carlo results were compared to both the Institute of Physics and Engineering in Medicine spectra table and the SPEKCALC V1.0 code.

  4. Modeling intermittent generation (IG) in a Monte-Carlo regional system analysis model

    SciTech Connect

    Yamayee, Z.A.

    1984-01-01

    A simulation model capable of simulating the operation of a given load/resource scenario was developed under the umbrella of PNUCC's System Analysis Committee. This model, called the System Analysis Model (SAM), employs the Monte-Carlo technique to incorporate quantifiable uncertainties. Explicit uncertainties in SAM include: hydro conditions, load forecast errors, construction duration, availability of thermal units, renewable resources (wind, solar, geothermal, and biomass), cogeneration, and conservation. This paper presents an approach to modeling renewable resources, especially wind energy availability. Because wind velocity is random at any given site, and varies randomly from one site to another, it is important to have a model of uncertain wind energy availability. The model starts with historical hourly wind data at each site in the area covered by the Pacific Northwest Power Act (7). Using the wind data and machine and site characteristics, along with the time-series model of Justus et al. for simulating hourly wind power, hourly energy for each site is calculated. Assuming independence between different sites, a probability density function for each month is computed. These density functions, along with a uniformly distributed random number generator, are used to draw observed seasonal and/or monthly energy for each of the Monte-Carlo games. The monthly observed energy, along with a typical hourly shape for the month, is used to calculate hourly observed wind energy for the hourly portion of SAM. A sample case study is presented to illustrate the approach.
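
    The drawing step can be sketched with an empirical inverse CDF: build a monthly-energy distribution from historical data, then map a uniform random number through its quantile function for each Monte-Carlo game. The historical sample below is synthetic, and the Weibull shape is only a common convention for wind data, not SAM's actual distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic "historical" monthly wind energy for one site, one month (GWh).
    hist = rng.weibull(2.0, 30) * 40.0

    def draw_monthly_energy(n_draws):
        """Empirical inverse CDF: uniform draws mapped to interpolated
        quantiles of the historical sample."""
        u = rng.random(n_draws)
        return np.quantile(hist, u)

    draws = draw_monthly_energy(1000)     # one value per Monte-Carlo game
    print(f"historical mean {hist.mean():.1f} GWh, "
          f"sampled mean {draws.mean():.1f} GWh")

    # Each monthly draw would then be shaped into hourly energy using a
    # typical hourly profile for that month (profile omitted here).
    ```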

  5. Enhancing backbone sampling in Monte Carlo simulations using internal coordinates normal mode analysis.

    PubMed

    Gil, Victor A; Lecina, Daniel; Grebner, Christoph; Guallar, Victor

    2016-10-15

    Normal mode methods are becoming a popular alternative for sampling the conformational landscape of proteins. In this study, we describe the implementation of an internal coordinate normal mode analysis method and its application in exploring protein flexibility with the Monte Carlo method PELE. The new method alternates between two stages: a perturbation of the backbone through the application of torsional normal modes, and a resampling of the side chains. We evaluated the new approach on two test systems, ubiquitin and c-Src kinase, and assessed the differences from the original anisotropic network model (ANM) method by comparing both sets of results to reference molecular dynamics simulations. The results suggest that the sampled phase space in the internal coordinate approach is closer to the molecular dynamics phase space than that of a Cartesian coordinate anisotropic network model. In addition, the new method shows a great speedup (∼5-7×), making it a good candidate for future normal mode implementations in Monte Carlo methods.

  6. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    SciTech Connect

    Pratama, Cecep; Meilano, Irwan; Nugraha, Andri Dian

    2015-04-24

    Slip rate is used to estimate the earthquake recurrence relationship, which most strongly influences the hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA were investigated for Sukabumi using Probabilistic Seismic Hazard Analysis (PSHA). We observe that the crustal fault has the largest influence on the hazard estimate. A Monte Carlo approach was developed to assess the sensitivity, and the properties of the Monte Carlo simulations were then assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area were calculated. We observe that seismic hazard estimates are sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
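
    The sensitivity calculation can be caricatured in a few lines: draw slip rates from an assumed distribution, push each draw through the hazard chain, and summarize the spread of the resulting PGA. The hazard function below is a deliberately crude, hypothetical stand-in for a full PSHA, so only the structure of the calculation carries over.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical slip-rate distribution for a crustal fault (mm/yr).
slip = rng.normal(4.0, 1.0, 10_000)
slip = slip[slip > 0]  # enforce a physical, positive slip rate

def toy_hazard_pga(slip_mm_yr):
    """Stand-in for the PSHA chain: earthquake rate scales with slip rate,
    and the 500-year PGA grows with the log of that rate (all invented)."""
    rate = 0.05 * slip_mm_yr
    return 0.35 + 0.25 * np.log10(rate / 0.05)

pga = toy_hazard_pga(slip)
print(f"mean {pga.mean():.3f} g, uncertainty {pga.std():.3f} g, "
      f"COV {100 * pga.std() / pga.mean():.1f}%")
```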

  7. Monte Carlo models and analysis of galactic disk gamma-ray burst distributions

    NASA Technical Reports Server (NTRS)

    Hakkila, Jon

    1989-01-01

    Gamma-ray bursts are transient astronomical phenomena which have no quiescent counterparts in any region of the electromagnetic spectrum. Although temporal and spectral properties indicate that these events are likely energetic, their unknown spatial distribution complicates astrophysical interpretation. Monte Carlo samples of gamma-ray burst sources are created which belong to Galactic disk populations. Spatial analysis techniques are used to compare these samples to the observed distribution. From this, both quantitative and qualitative conclusions are drawn concerning the allowed luminosity and spatial distributions of the actual sample. Although the Burst and Transient Source Experiment (BATSE) on the Gamma Ray Observatory (GRO) will significantly improve knowledge of the gamma-ray burst source spatial characteristics within only a few months of launch, the analysis techniques described herein will not be superseded. Rather, they may be used with BATSE results to obtain detailed information about both the luminosity and spatial distributions of the sources.

  8. A bottom collider vertex detector design, Monte-Carlo simulation and analysis package

    SciTech Connect

    Lebrun, P.

    1990-10-01

    A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design, based on double-sided strip detectors, is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the "golden" CP-violating mode B_d → π+π− is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques, and related software rather than physics potential. 20 refs., 46 figs.

  9. Is anoxic depolarisation associated with an ADC threshold? A Markov chain Monte Carlo analysis.

    PubMed

    King, Martin D; Crowder, Martin J; Hand, David J; Harris, Neil G; Williams, Stephen R; Obrenovitch, Tihomir P; Gadian, David G

    2005-12-01

    A Bayesian nonlinear hierarchical random coefficients model was used in a reanalysis of a previously published longitudinal study of the extracellular direct current (DC)-potential and apparent diffusion coefficient (ADC) responses to focal ischaemia. The main purpose was to examine the data for evidence of an ADC threshold for anoxic depolarisation. A Markov chain Monte Carlo simulation approach was adopted. The Metropolis algorithm was used to generate three parallel Markov chains and thus obtain a sampled posterior probability distribution for each of the DC-potential and ADC model parameters, together with a number of derived parameters. The latter were used in a subsequent threshold analysis. The analysis provided no evidence indicating a consistent and reproducible ADC threshold for anoxic depolarisation.

  10. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, and in particular for lattice kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the corresponding variance functional. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB
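
    For readers unfamiliar with lattice KMC, the event-selection kernel underneath all of this is short. The sketch below implements the standard rate-weighted event selection and exponential clock of a continuous-time Markov chain; the rates are invented, and the per-class bookkeeping of the Bortz-Kalos-Lebowitz scheme is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmc_step(rates):
    """One CTMC step: pick event k with probability rates[k] / rates.sum(),
    then advance the clock by an exponential waiting time."""
    total = rates.sum()
    k = np.searchsorted(np.cumsum(rates), rng.uniform(0.0, total))
    dt = rng.exponential(1.0 / total)
    return k, dt

# Hypothetical rates for adsorption, desorption, and diffusion events.
rates = np.array([1.2, 0.4, 3.1])
t, counts = 0.0, np.zeros(3, dtype=int)
for _ in range(10_000):
    k, dt = kmc_step(rates)
    counts[k] += 1
    t += dt
print(counts / counts.sum())   # event frequencies track rates / rates.sum()
```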

  11. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    SciTech Connect

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, and in particular for lattice kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the corresponding variance functional. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary

  12. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    SciTech Connect

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of the fission source distribution during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via the MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Results on a continuous-energy problem are then presented.

  13. Converting Boundary Representation Solid Models to Half-Space Representation Models for Monte Carlo Analysis

    SciTech Connect

    Davis JE, Eddy MJ, Sutton TM, Altomari TJ

    2007-03-01

    Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state of the art in solid modeling representation uses a boundary representation format, in which geometry and topology are used to form the three-dimensional boundaries of the solid. The geometry representation used in these systems consists of cubic B-spline curves and surfaces: a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.
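
    To make the target representation concrete, the sketch below shows the half-space convention that many Monte Carlo geometry packages share: a cell is an intersection of signed half-spaces, and point membership is a conjunction of sign tests. The class and names are illustrative, not the paper's conversion algorithm.

```python
import numpy as np

# A half-space is {x : n.x - d <= 0}; a cell is an intersection of
# half-spaces, and bodies are unions of such cells (CSG-style).
class HalfSpace:
    def __init__(self, normal, d):
        self.n = np.asarray(normal, dtype=float)
        self.d = float(d)

    def contains(self, p):
        return float(np.dot(self.n, p)) - self.d <= 0.0

def in_cell(point, half_spaces):
    """A point lies in the cell iff it satisfies every half-space."""
    return all(h.contains(point) for h in half_spaces)

# The unit cube expressed as the intersection of six half-spaces.
cube = [HalfSpace([ 1, 0, 0], 1), HalfSpace([-1, 0, 0], 0),
        HalfSpace([ 0, 1, 0], 1), HalfSpace([ 0, -1, 0], 0),
        HalfSpace([ 0, 0, 1], 1), HalfSpace([ 0, 0, -1], 0)]
print(in_cell(np.array([0.5, 0.5, 0.5]), cube))  # True
print(in_cell(np.array([1.5, 0.5, 0.5]), cube))  # False
```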

  14. Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount

    SciTech Connect

    Supriyadi; Srigutomo, Wahyu; Munandar, Arif

    2014-03-24

    Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic formula to take into account the uncertainties in the input parameters. The method yields a probability distribution of the potential power stored beneath the Lawu Mount geothermal area. For 10,000 iterations, the capacity of the geothermal resource is in the range of 139.30-218.24 MWe, with a most likely value of 177.77 MWe. The risk of the resource capacity exceeding 196.19 MWe is less than 10%. The power density of the prospect area, covering 17 km², is 9.41 MWe/km² at 80% probability.
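
    A stochastic volumetric estimate of this kind reduces to sampling the input parameters and reading percentiles off the resulting capacity distribution. The sketch below compresses the full SNI volumetric chain into an area times power-density product with invented triangular distributions, so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000  # iterations, matching the paper's setup

# Hypothetical triangular (min, mode, max) input distributions.
area = rng.triangular(14.0, 17.0, 20.0, N)         # km^2
power_density = rng.triangular(7.0, 9.5, 12.0, N)  # MWe/km^2, a stand-in
                                                   # for the thermal chain
capacity = area * power_density                    # MWe

p10, p50, p90 = np.percentile(capacity, [10, 50, 90])
print(f"P10 {p10:.1f}  P50 {p50:.1f}  P90 {p90:.1f} MWe")
print("P(capacity > 196.19 MWe) =", (capacity > 196.19).mean())
```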

  15. STS-1 operational flight profile. Volume 5: Descent, cycle 3. Appendix C: Monte Carlo dispersion analysis

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of three nonlinear Monte Carlo dispersion analyses for the Space Transportation System 1 Flight (STS-1) Orbiter Descent Operational Flight Profile, Cycle 3, are presented. Fifty randomly selected simulations each are analyzed for the end-of-mission (EOM) descent, the abort-once-around (AOA) descent targeted to the steep target line, and the AOA descent targeted to the shallow target line. These analyses compare the flight environment with system and operational constraints on the flight environment and, in some cases, use simplified system models as an aid in assessing the STS-1 descent flight profile. In addition, descent flight envelopes are provided as a database for use by system specialists to determine the flight readiness for STS-1. The results of these dispersion analyses supersede the results of the dispersion analysis previously documented.

  16. Techno-economic and Monte Carlo probabilistic analysis of microalgae biofuel production system.

    PubMed

    Batan, Liaw Y; Graff, Gregory D; Bradley, Thomas H

    2016-11-01

    This study characterizes the technical and economic feasibility of an enclosed photobioreactor microalgae system with an annual production of 37.85 million liters (10 million gallons) of biofuel. The analysis characterizes and breaks down the capital investment, the operating costs, and the production cost per unit of algal diesel. The economic modelling shows a total cost of production of algal raw oil and diesel of $3.46 and $3.69 per liter, respectively. Additionally, the effects of co-product credits and their impact on the economic performance of the algae-to-biofuel system are discussed. The Monte Carlo methodology is used to address price and cost projections and to simulate scenarios with probabilities of financial performance and profits of the analyzed model. Different markets for the allocation of co-products have been shown to shift the economic viability of the algal biofuel system significantly.

  17. Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI

    SciTech Connect

    Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A.; Cordero, Raul R.

    2008-04-15

    A method for simultaneously measuring whole-field in-plane displacements by using optical fiber, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single-mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers is used to illuminate the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. The fibers in each pair differ in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (the Takeda method), we obtain quantitative data on the whole-field displacements. We estimated the uncertainty associated with the phases by means of a Monte Carlo-based technique.
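
    Monte Carlo uncertainty evaluation of a phase measurement follows the pattern of GUM Supplement 1: sample the inputs of the measurement equation and take the standard deviation of the computed phase. The sketch below uses a generic arctangent measurement model with invented nominal values and uncertainties, not the paper's actual ESPI data reduction.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

# Hypothetical measurement model: the wrapped phase is recovered from two
# derived quantities (e.g., the imaginary and real parts of the filtered
# Fourier spectrum), each carrying its own standard uncertainty.
S_mean, C_mean = 0.42, 0.31   # nominal values (arbitrary)
u_S, u_C = 0.01, 0.01         # standard uncertainties (arbitrary)

S = rng.normal(S_mean, u_S, N)
C = rng.normal(C_mean, u_C, N)
phase = np.arctan2(S, C)      # measurement equation evaluated per draw

print(f"phase {phase.mean():.4f} rad, u(phase) {phase.std(ddof=1):.4f} rad")
```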

  18. Contrast to Noise Ratio and Contrast Detail Analysis in Mammography: A Monte Carlo Study

    NASA Astrophysics Data System (ADS)

    Metaxas, V.; Delis, H.; Kalogeropoulou, C.; Zampakis, P.; Panayiotakis, G.

    2015-09-01

    The mammographic spectrum is one of the major factors affecting image quality in mammography. In this study, a Monte Carlo (MC) simulation model was used to evaluate the image quality characteristics of various mammographic spectra. The anode/filter combinations evaluated were those traditionally used in mammography, for tube voltages between 26 and 30 kVp. The imaging performance was investigated in terms of Contrast to Noise Ratio (CNR) and Contrast Detail (CD) analysis involving human observers and a mathematical CD phantom. Soft spectra provided the best characteristics in terms of both CNR and CD scores, while tube voltage had a limited effect. W-anode spectra filtered with k-edge filters demonstrated improved performance, sometimes better than the softer x-ray spectra produced by Mo or Rh anodes. Regarding the filter material, k-edge filters showed superior performance compared to Al filters.

  19. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    SciTech Connect

    Slattery, S. R.; Wilson, P. P. H.; Evans, T. M.

    2013-07-01

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)

  20. Monte Carlo Analysis of the Commissioning Phase Maneuvers of the Soil Moisture Active Passive (SMAP) Mission

    NASA Technical Reports Server (NTRS)

    Williams, Jessica L.; Bhat, Ramachandra S.; You, Tung-Han

    2012-01-01

    The Soil Moisture Active Passive (SMAP) mission will perform soil moisture content and freeze/thaw state observations from a low-Earth orbit. The observatory is scheduled to launch in October 2014 and will perform observations from a near-polar, frozen, sun-synchronous Science Orbit during a 3-year data collection mission. At launch, the observatory is delivered to an Injection Orbit that is biased below the Science Orbit; the spacecraft will maneuver to the Science Orbit during the mission Commissioning Phase. The delta V needed to maneuver from the Injection Orbit to the Science Orbit is computed statistically via a Monte Carlo simulation; the 99th percentile delta V (delta V99) is carried as a line item in the mission delta V budget. This paper details the simulation and analysis performed to compute this figure and reports the delta V99 computed for the current mission parameters.
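
    The statistical delta V budget line can be reproduced in miniature: sample the injection dispersions, cost the correction maneuvers with first-order astrodynamics, and take the 99th percentile. Every number in the sketch below (dispersion sizes, nominal semi-major axis) is invented for illustration rather than taken from SMAP documentation.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 10_000

# Hypothetical injection dispersions.
da = rng.normal(0.0, 5.0, N)    # semi-major-axis error, km
di = rng.normal(0.0, 0.02, N)   # inclination error, deg

mu_earth = 398_600.4418         # km^3/s^2
a = 7057.0                      # illustrative science-orbit SMA, km
v = np.sqrt(mu_earth / a)       # circular orbital speed, km/s

# First-order maneuver costs: Hohmann-like SMA change plus plane change.
dv_sma = 0.5 * v * np.abs(da) / a
dv_plane = 2.0 * v * np.sin(np.radians(np.abs(di)) / 2.0)
dv_total = (dv_sma + dv_plane) * 1000.0   # m/s

print(f"deltaV99 = {np.percentile(dv_total, 99):.2f} m/s")
```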

  1. Monte Carlo Example Programs

    SciTech Connect

    Kalos, M.

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground-state energy of the hydrogen atom.
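
    The variational flavor of this calculation is compact enough to sketch in full. For the trial wavefunction psi = exp(-alpha*r), the local energy in atomic units is E_L(r) = -alpha^2/2 + (alpha-1)/r, and a Metropolis walk over |psi|^2 averages it; at alpha = 1 the estimate collapses to the exact -0.5 hartree. The Python sketch below mirrors what a program like VARHATOM presumably does, without claiming to reproduce it.

```python
import numpy as np

rng = np.random.default_rng(2024)

def local_energy(r, alpha):
    # E_L(r) = -alpha^2/2 + (alpha - 1)/r for psi = exp(-alpha*r).
    return -0.5 * alpha**2 + (alpha - 1.0) / r

def vmc_energy(alpha, n_steps=50_000, step=0.5):
    """Metropolis sampling of |psi|^2 with averaging of the local energy."""
    pos = np.array([1.0, 0.0, 0.0])
    energies = []
    for i in range(n_steps):
        trial = pos + rng.uniform(-step, step, 3)
        # |psi(trial)/psi(pos)|^2 = exp(-2*alpha*(|trial| - |pos|))
        ratio = np.exp(-2.0 * alpha *
                       (np.linalg.norm(trial) - np.linalg.norm(pos)))
        if rng.random() < ratio:
            pos = trial
        if i > 1000:  # discard burn-in
            energies.append(local_energy(np.linalg.norm(pos), alpha))
    return np.mean(energies)

for alpha in (0.8, 1.0, 1.2):
    print(alpha, vmc_energy(alpha))   # alpha = 1 gives exactly -0.5 hartree
```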

  2. 3D numerical modelling of the steady-state thermal regime constrained by surface heat flow data: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Mather, B.; Moresi, L. N.; Cruden, A. R.

    2014-12-01

    Uncertainty of the lithospheric thermal regime greatly increases with depth. Measurements of temperature gradient and crustal rheology are concentrated in the upper crust, whereas the majority of the lithospheric measurements are approximated using empirical depth-dependent functions. We have applied a Monte Carlo approach to test the variation of crustal heat flow with temperature-dependent conductivity and the redistribution of heat-producing elements. The dense population of precision heat flow data in Victoria, Southeast Australia offers the ideal environment to test the variation of heat flow. A stochastically consistent anomalous zone of impossibly high Moho temperatures in the 3D model (> 900°C) correlates well with a zone of low teleseismic velocity and high electrical conductivity. This indicates that transient heat transfer has perturbed the thermal gradient and therefore a steady-state approach to 3D modelling is inappropriate in this zone. A spatial correlation between recent intraplate volcanic eruption points (< 5 Ma) and elevated Moho temperatures is a potential origin for additional latent heat in the crust.

  3. Monte Carlo - Metropolis Investigations of Shape and Matrix Effects in 2D and 3D Spin-Crossover Nanoparticles

    NASA Astrophysics Data System (ADS)

    Guerroudj, Salim; Caballero, Rafael; De Zela, Francisco; Jureschi, Catalin; Linares, Jorge; Boukheddaden, Kamel

    2016-08-01

    The Ising-like model, taking into account short- and long-range interactions as well as surface effects, is used to investigate size and shape effects on the thermal behaviour of 2D and 3D spin crossover (SCO) nanoparticles embedded in a matrix. We analyze the role of the parameter t, representing the ratio between the number of surface and volume molecules, in the unusual thermal hysteresis behaviour (appearance of the hysteresis and a re-entrant phase transition) at small scales.

  4. Generation of SFR few-group constants using the Monte Carlo code Serpent

    SciTech Connect

    Fridman, E.; Rachamin, R.; Shwageraus, E.

    2013-07-01

    In this study, the Serpent Monte Carlo code was used as a tool for the preparation of homogenized few-group cross sections for the nodal diffusion analysis of Sodium-cooled Fast Reactor (SFR) cores. Few-group constants for two reference SFR cores were generated by Serpent and then employed by the nodal diffusion code DYN3D in 2D full-core calculations. The DYN3D results were verified against the reference full-core Serpent Monte Carlo solutions. Good agreement between the reference Monte Carlo and nodal diffusion results was observed, demonstrating the feasibility of using Serpent to generate few-group constants for deterministic SFR analysis. (authors)

  5. Evaluation of a 3D point spread function (PSF) model derived from Monte Carlo simulation for a small animal PET scanner

    NASA Astrophysics Data System (ADS)

    Yao, Rutao; Ramachandra, Ranjith M.; Panse, Ashish; Balla, Deepika; Yan, Jianhua; Carson, Richard E.

    2010-04-01

    We previously designed a component-based 3-D PSF model to obtain a compact yet accurate system matrix for a dedicated human brain PET scanner. In this work, we adapted the model to a small animal PET scanner. Based on the model, we derived the system matrix by Monte Carlo simulation for a back-to-back gamma source in air and for fluorine-18 and iodine-124 sources in water. The characteristics of the PSF model were evaluated, and the performance of the newly derived system matrix was assessed by comparing its reconstructed images with those of the established reconstruction program provided with the animal PET scanner. The new system matrix showed strong PSF dependency on the line-of-response (LOR) incident angle and LOR depth, confirming the validity of the two components selected for the model. The effect of positron range on the system matrix was observed by comparing the PSFs of different isotopes. Simulated and experimental hot-rod phantom studies showed that reconstruction with the proposed system matrix achieved better resolution recovery than the algorithm provided by the manufacturer. Quantitative evaluation also showed better convergence to the expected contrast value at a similar noise level. In conclusion, the system matrix derivation method is applicable to the animal PET system studied, suggesting that it may be used for other PET systems and different isotope applications.

  6. 3D visualisation of the stochastic patterns of the radial dose in nano-volumes by a Monte Carlo simulation of HZE ion track structure.

    PubMed

    Plante, Ianik; Ponomarev, Artem; Cucinotta, Francis A

    2011-02-01

    The description of energy deposition by high charge and energy (HZE) nuclei is of importance for space radiation risk assessment and for hadrontherapy. Such ions deposit a large fraction of their energy within the so-called core of the track and a smaller proportion in the penumbra (or track periphery). We study the stochastic patterns of the radial dependence of energy deposition using the Monte Carlo track structure codes RITRACKS and RETRACKS, which were used to simulate HZE tracks and calculate energy deposition in voxels of 40 nm. The simulation of a ⁵⁶Fe²⁶⁺ ion of 1 GeV u⁻¹ revealed zones of high-energy deposition which may be found as far as a few millimetres away from the track core in some simulations. The calculation also showed that ∼43% of the energy was deposited in the penumbra. These 3D stochastic simulations, combined with a visualisation interface, are a powerful tool for biophysicists which may be used to study radiation-induced biological effects such as double strand breaks and oxidative damage and the subsequent cellular and tissue damage processing and signalling.

  7. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.

  8. Analysis of polytype stability in PVT grown silicon carbide single crystal using competitive lattice model Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Guo, Hui-Jun; Huang, Wei; Liu, Xi; Gao, Pan; Zhuo, Shi-Yi; Xin, Jun; Yan, Cheng-Feng; Zheng, Yan-Qing; Yang, Jian-Hua; Shi, Er-Wei

    2014-09-01

    Polytype stability is very important for high-quality SiC single crystal growth. However, the growth conditions for the 4H, 6H, and 15R polytypes are similar, and the mechanism of polytype stability is not clear. Kinetic aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool for studying surface kinetics in crystal growth. However, present lattice models for kinetic Monte Carlo simulations cannot handle the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competitive growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment. Surface steps on seeds and large diffusion/deposition ratios have positive effects on 4H polytype stability. The 3D polytype distribution in a SiC ingot grown by the physical vapor transport method showed that the facet preserved the 4H polytype even when the 6H polytype dominated the growth surface. The theoretical and experimental results on polytype growth in SiC suggest that retaining the step growth mode is an important factor in maintaining a stable single 4H polytype during SiC growth.

  9. Derivation of landslide-triggering thresholds by Monte Carlo simulation and ROC analysis

    NASA Astrophysics Data System (ADS)

    Peres, David Johnny; Cancelliere, Antonino

    2015-04-01

    Rainfall thresholds for landslide triggering are useful in early warning systems for prone areas. Direct statistical analysis of historical records of rainfall and landslide data suffers from several shortcomings, typically due to the incompleteness of landslide historical archives, imprecise knowledge of the triggering instants, the unavailability of a rain gauge located near the landslides, etc. In this work, a Monte Carlo approach to derive and evaluate landslide-triggering thresholds is presented, which helps overcome some of the above-mentioned shortcomings of direct empirical analysis of observed data. The proposed Monte Carlo framework combines a stochastic rainfall model with a hydrological and slope-stability model. Specifically, 1000-year-long hourly synthetic rainfall and related slope-stability factor-of-safety data are generated by coupling the Neyman-Scott rectangular pulses model with the TRIGRS unsaturated model (Baum et al., 2008) and a linear-reservoir water table recession model. Triggering and non-triggering rainfall events are then distinguished and analyzed to derive stochastic-input, physically based thresholds that optimize the trade-off between correct and wrong predictions. For this purpose, receiver operating characteristic (ROC) indices are used. An application of the method to the highly landslide-prone area of the Peloritani mountains in north-eastern Sicily (Italy) is carried out. A threshold for the area is derived and successfully validated by comparison with thresholds proposed by other researchers. Moreover, the uncertainty in threshold derivation due to the variability of rainfall intensity within events and to antecedent rainfall is investigated. Results indicate that the variability of intensity during rainfall events significantly influences the rainfall intensity and duration associated with landslide triggering. A representation of rainfall as constant-intensity hyetographs globally leads to
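
    Once synthetic triggering and non-triggering events are available, the ROC-based selection step looks roughly like the sketch below, which sweeps an intensity threshold over two invented event populations and maximizes the true skill statistic; the actual method works in intensity-duration space rather than on a single variable.

```python
import numpy as np

rng = np.random.default_rng(9)

# Invented event-intensity populations (mm/h): triggering vs. non-triggering.
i_trig = rng.lognormal(1.5, 0.4, 500)
i_non = rng.lognormal(0.8, 0.4, 2000)

def roc_point(threshold):
    """True and false positive rates for a simple intensity threshold."""
    tpr = (i_trig >= threshold).mean()
    fpr = (i_non >= threshold).mean()
    return tpr, fpr

# Sweep thresholds; pick the one maximizing the true skill statistic
# TSS = TPR - FPR, a common ROC-based optimality criterion.
thresholds = np.linspace(1.0, 12.0, 200)
tss = np.array([np.subtract(*roc_point(t)) for t in thresholds])
best = thresholds[np.argmax(tss)]
print(f"optimal threshold ~ {best:.2f} mm/h (TSS = {tss.max():.2f})")
```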

  10. Empirical Markov Chain Monte Carlo Bayesian analysis of fMRI data.

    PubMed

    de Pasquale, F; Del Gratta, C; Romani, G L

    2008-08-01

    In this work, an Empirical Markov Chain Monte Carlo Bayesian approach to analysing fMRI data is proposed. The Bayesian framework is appealing since complex models can be adopted in the analysis for both the image and the noise model. Here, the noise autocorrelation is taken into account by adopting an autoregressive model of order one, and a versatile non-linear model is assumed for the task-related activation. Model parameters include the noise variance and autocorrelation, activation amplitudes, and the hemodynamic response function parameters. These are estimated at each voxel from samples of the posterior distribution. Prior information is included by means of a 4D spatio-temporal model for the interaction between neighbouring voxels in space and time. The results show that this model can provide smooth estimates from low-SNR data while preserving important spatial structures in the data. A simulation study is presented in which the accuracy and bias of the estimates are addressed. Furthermore, some results on convergence diagnostics of the adopted algorithm are presented. To validate the proposed approach, a comparison of the results with those from a standard GLM analysis, spatial filtering techniques, and a Variational Bayes approach is provided. This comparison shows that our approach outperforms the classical analysis and is consistent with other Bayesian techniques. This is investigated further by means of Bayes factors and the analysis of the residuals. The proposed approach applied to blocked-design and event-related datasets produced reliable maps of activation.

  11. Statistical modification analysis of helical planetary gears based on response surface method and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Guo, Fan

    2015-11-01

    The tooth modification technique is widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. To investigate the effect of uncertain tooth modification amounts on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed for a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for the uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.

  12. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    PubMed

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio, described jointly by probability and possibility, has deficiencies in characterizing the influence of fuzzy factors and representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event, considering both the fuzziness and the randomness of the failure criterion, design parameters, and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of the influence factors of gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to the risk calculation of both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for the analysis of stability failure risk for gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty and is suitable as an index value.

  13. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    PubMed

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The applicator geometry exported from SolidWorks was imported into Attila for the calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between the Attila and MCNPX dose calculations at the ICRU 38 points was within ±3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than ±1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.

  14. A Monte Carlo error analysis program for near-Mars, finite-burn, orbital transfer maneuvers

    NASA Technical Reports Server (NTRS)

    Green, R. N.; Hoffman, L. H.; Young, G. R.

    1972-01-01

    A computer program was developed which performs an error analysis of a minimum-fuel, finite-thrust, transfer maneuver between two Keplerian orbits in the vicinity of Mars. The method of analysis is the Monte Carlo approach where each off-nominal initial orbit is targeted to the desired final orbit. The errors in the initial orbit are described by two covariance matrices of state deviations and tracking errors. The function of the program is to relate these errors to the resulting errors in the final orbit. The equations of motion for the transfer trajectory are those of a spacecraft maneuvering with constant thrust and mass-flow rate in the neighborhood of a single body. The thrust vector is allowed to rotate in a plane with a constant pitch rate. The transfer trajectory is characterized by six control parameters and the final orbit is defined, or partially defined, by the desired target parameters. The program is applicable to the deboost maneuver (hyperbola to ellipse), orbital trim maneuver (ellipse to ellipse), fly-by maneuver (hyperbola to hyperbola), escape maneuvers (ellipse to hyperbola), and deorbit maneuver.

  15. Ligand-receptor binding kinetics in surface plasmon resonance cells: a Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Carroll, Jacob; Raum, Matthew; Forsten-Williams, Kimberly; Täuber, Uwe C.

    2016-12-01

    Surface plasmon resonance (SPR) chips are widely used to measure association and dissociation rates for the binding kinetics between two species of chemicals, e.g., cell receptors and ligands. It is commonly assumed that ligands are spatially well mixed in the SPR region, and hence a mean-field rate equation description is appropriate. This approximation, however, ignores the spatial fluctuations as well as the temporal correlations induced by multiple local rebinding events, which become prominent for slow diffusion rates and high binding affinities. We report detailed Monte Carlo simulations of ligand binding kinetics in an SPR cell subject to laminar flow. We extract the binding and dissociation rates by means of the techniques frequently employed in experimental analysis that are motivated by the mean-field approximation. We find major discrepancies in a wide parameter regime between the thus extracted rates and the known input simulation values. These results underscore the crucial quantitative importance of spatio-temporal correlations in binary reaction kinetics in SPR cell geometries, and demonstrate the failure of a mean-field analysis of SPR cells in the regime of high Damköhler number Da > 0.1, where the spatio-temporal correlations due to diffusive transport and ligand-receptor rebinding events dominate the dynamics of SPR systems.

  16. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program, and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods with samples containing very few failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The previously documented methods were supplemented by computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, along with the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
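
    The core of such a study is a maximum-likelihood fit of a Weibull model to randomly censored samples, repeated over many simulated data sets. The sketch below performs one replication, drawing censoring times from a uniform distribution as in the study; the likelihood-ratio confidence-interval machinery is omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(17)

# One simulated sample: Weibull failure times with uniform random censoring.
shape_true, scale_true, n = 1.5, 100.0, 20
t_fail = scale_true * rng.weibull(shape_true, n)
t_cens = rng.uniform(0.0, 150.0, n)
time = np.minimum(t_fail, t_cens)
failed = t_fail <= t_cens            # True = failure, False = censored

def negloglik(params):
    """Right-censored Weibull negative log-likelihood."""
    beta, eta = np.exp(params)       # log-parameters keep shape, scale > 0
    z = (time / eta) ** beta
    logpdf = np.log(beta / eta) + (beta - 1.0) * np.log(time / eta) - z
    return -(logpdf[failed].sum() + (-z[~failed]).sum())

fit = minimize(negloglik, x0=np.log([1.0, np.median(time)]), method="Nelder-Mead")
print("shape, scale =", np.exp(fit.x), "| failures:", failed.sum())
```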

  17. Ascertainment correction for Markov chain Monte Carlo segregation and linkage analysis of a quantitative trait.

    PubMed

    Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E

    2007-09-01

    Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait, in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was also found in the estimation of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used to analyze the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced the biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci.

  18. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    SciTech Connect

    Hall, Howard L

    2012-01-01

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation, or thermochromatography, has been used in the past for rapid separations in the study of newly created elements and as a basis for chemical classification of those elements. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, although this assessment is necessarily limited due to the lack of available experimental data for validation.

  19. Propensity score applied to survival data analysis through proportional hazards models: a Monte Carlo study.

    PubMed

    Gayat, Etienne; Resche-Rigon, Matthieu; Mary, Jean-Yves; Porcher, Raphaël

    2012-01-01

    Propensity score methods are increasingly used in medical literature to estimate treatment effect using data from observational studies. Despite many papers on propensity score analysis, few have focused on the analysis of survival data. Even within the framework of the popular proportional hazard model, the choice among marginal, stratified or adjusted models remains unclear. A Monte Carlo simulation study was used to compare the performance of several survival models to estimate both marginal and conditional treatment effects. The impact of accounting or not for pairing when analysing propensity-score-matched survival data was assessed. In addition, the influence of unmeasured confounders was investigated. After matching on the propensity score, both marginal and conditional treatment effects could be reliably estimated. Ignoring the paired structure of the data led to an increased test size due to an overestimated variance of the treatment effect. Among the various survival models considered, stratified models systematically showed poorer performance. Omitting a covariate in the propensity score model led to a biased estimation of treatment effect, but replacement of the unmeasured confounder by a correlated one allowed a marked decrease in this bias. Our study showed that propensity scores applied to survival data can lead to unbiased estimation of both marginal and conditional treatment effect, when marginal and adjusted Cox models are used. In all cases, it is necessary to account for pairing when analysing propensity-score-matched data, using a robust estimator of the variance.

  20. Cluster Monte Carlo and numerical mean field analysis for the water liquid-liquid phase transition

    NASA Astrophysics Data System (ADS)

    Mazza, Marco G.; Stokely, Kevin; Strekalova, Elena G.; Stanley, H. Eugene; Franzese, Giancarlo

    2009-04-01

    Using Wolff's cluster Monte Carlo simulations and numerical minimization within a mean field approach, we study the low-temperature phase diagram of water, adopting a cell model that reproduces the known properties of water in its fluid phases. Both methods allow us to study the thermodynamic behavior of water at temperatures where other numerical approaches, both Monte Carlo and molecular dynamics, are seriously hampered by the large increase of the correlation times. The cluster algorithm also allows us to emphasize that the liquid-liquid phase transition corresponds to the percolation transition of tetrahedrally ordered water molecules.
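
    Wolff's algorithm is easiest to see on the standard Ising model, where a cluster of aligned spins is grown with bond probability p = 1 - exp(-2J/kT) and flipped in a single move, sidestepping critical slowing down. The sketch below is that textbook version, not the water cell model of the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
L, T = 16, 2.5                       # lattice size, temperature (J = k = 1)
spins = rng.choice([-1, 1], size=(L, L))
p_add = 1.0 - np.exp(-2.0 / T)       # Wolff bond-activation probability

def wolff_step(s):
    """Grow one cluster from a random seed spin and flip it as a whole."""
    i, j = rng.integers(L, size=2)
    seed = s[i, j]
    stack, cluster = [(i, j)], {(i, j)}
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nx % L, ny % L  # periodic boundaries
            if (nx, ny) not in cluster and s[nx, ny] == seed \
               and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:
        s[x, y] = -seed

for _ in range(1000):
    wolff_step(spins)
print("magnetization per spin:", spins.mean())
```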

  1. Improving Bayesian analysis for LISA Pathfinder using an efficient Markov Chain Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ferraioli, Luigi; Porter, Edward K.; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Gibert, Ferran; Hewitson, Martin; Hueller, Mauro; Karnesis, Nikolaos; Korsakova, Natalia; Nofrarias, Miquel; Plagnol, Eric; Vitale, Stefano

    2014-02-01

    We present a parameter estimation procedure based on a Bayesian framework, applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of the LISA Pathfinder satellite. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm in an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher information matrix. The algorithm proposing jumps in the eigen-space of the Fisher information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we recover parameter values that are all within ~1σ of the injected values. When we analyse the accuracy of our parameter estimation in terms of its effect on the force-per-unit-mass noise, we find that the induced errors are three orders of magnitude smaller than the expected experimental uncertainty in the power spectral density.
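
    The benefit of Fisher-eigen-space proposals over plain coordinate moves can be demonstrated on a toy correlated target. In the sketch below both samplers use the same step scale, but the eigen-space proposal is shaped by the target's covariance and therefore accepts far more moves; everything here is a made-up illustration, not the LISA Pathfinder pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 2-parameter Gaussian log-likelihood with strongly correlated parameters.
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
icov = np.linalg.inv(cov)
loglike = lambda x: -0.5 * x @ icov @ x

# Eigen-decomposition of the covariance (the inverse Fisher matrix here).
evals, evecs = np.linalg.eigh(cov)

def mh(n_steps, eigen=False, scale=0.7):
    """Metropolis-Hastings with coordinate or eigen-space Gaussian jumps."""
    x, accepted = np.zeros(2), 0
    for _ in range(n_steps):
        if eigen:
            step = evecs @ (np.sqrt(evals) * rng.normal(0.0, scale, 2))
        else:
            step = rng.normal(0.0, scale, 2)
        y = x + step
        if np.log(rng.random()) < loglike(y) - loglike(x):
            x, accepted = y, accepted + 1
    return accepted / n_steps

print("coordinate proposals, acceptance:", mh(20_000))
print("eigen-space proposals, acceptance:", mh(20_000, eigen=True))
```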

  2. Monte Carlo analysis of dissociation and recombination behind strong shock waves in nitrogen

    NASA Technical Reports Server (NTRS)

    Boyd, I. D.

    1991-01-01

    Computations are presented for the relaxation zone behind strong, one-dimensional shock waves in nitrogen. The analysis is performed with the direct simulation Monte Carlo (DSMC) method, with the code vectorized for efficient use on a supercomputer. The code simulates translational, rotational, and vibrational energy exchange and dissociative and recombinative chemical reactions. A model is proposed for the treatment of three-body recombination collisions in the DSMC technique, which usually simulates binary collision events. The model improves on previous models because it can be employed with a large range of chemical-rate data, does not introduce into the flow field troublesome pairs of atoms which may recombine upon further collision (pseudoparticles), and is compatible with the vectorized code. The computational results are compared with existing experimental data. It is shown that the derivation of chemical-rate coefficients must account for the degree of vibrational nonequilibrium in the flow. A nonequilibrium-chemistry model is employed together with equilibrium-rate data to compute the flow in several different nitrogen shock waves.

  3. Personalized Analysis by Validation of Monte Carlo for Application of Pathways in Cardioembolic Stroke.

    PubMed

    Xing, Zhangmin; Luan, Bin; Zhao, Ruiying; Li, Zhanbiao; Sun, Guojian

    2017-02-24

    BACKGROUND Cardioembolic stroke (CES), which accounts for 20% of all ischemic strokes, is associated with high mortality. Previous studies suggest that pathways play a critical role in the identification and pathogenesis of diseases. We aimed to develop an integrated approach that constructs individual networks of pathway cross-talk to quantify differences between patients with CES and controls. MATERIAL AND METHODS One biological data set, E-GEOD-58294, was used, including 23 normal controls and 59 CES samples. We used the individualized pathway aberrance score (iPAS) to assess pathway statistics for 589 Ingenuity Pathway Analysis (IPA) pathways. Random Forest (RF) classification was implemented to calculate the AUC of every network. These procedures were tested by Monte Carlo cross-validation over 50 bootstraps. RESULTS A total of 28 networks with AUC > 0.9 were found between CES and controls. Among them, 3 networks with AUC = 1.0 had the best classification performance over the 50 bootstraps. The 3 pathway networks were able to significantly distinguish CES from controls, serving as biomarkers in the regulation and development of CES. CONCLUSIONS This novel approach identified 3 networks able to accurately classify CES and normal samples in individuals. This integrated application needs to be validated in other diseases.

  4. Markov chain Monte Carlo linkage analysis: effect of bin width on the probability of linkage.

    PubMed

    Slager, S L; Juo, S H; Durner, M; Hodge, S E

    2001-01-01

    We analyzed part of the Genetic Analysis Workshop (GAW) 12 simulated data using Monte Carlo Markov chain (MCMC) methods that are implemented in the computer program Loki. The MCMC method reports the "probability of linkage" (PL) across the chromosomal regions of interest. The point of maximum PL can then be taken as a "location estimate" for the location of the quantitative trait locus (QTL). However, Loki does not provide a formal statistical test of linkage. In this paper, we explore how the bin width used in the calculations affects the max PL and the location estimate. We analyzed age at onset (AO) and quantitative trait number 5, Q5, from 26 replicates of the general simulated data in one region where we knew a major gene, MG5, is located. For each trait, we found the max PL and the corresponding location estimate, using four different bin widths. We found that bin width, as expected, does affect the max PL and the location estimate, and we recommend that users of Loki explore how their results vary with different bin widths.

  5. A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data

    SciTech Connect

    Garner, James R; Whitaker, J Michael

    2013-01-01

    As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring of the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is: what is the necessary frequency of recording data from the process F/W stations? Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.

  6. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    SciTech Connect

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.

  7. Monte Carlo analysis of the enhanced transcranial penetration using distributed near-infrared emitter array.

    PubMed

    Yue, Lan; Humayun, Mark S

    2015-08-01

    Transcranial near-infrared (NIR) treatment of neurological diseases has gained recent momentum. However, the low NIR dose available to the brain, owing to severe scattering and absorption of the photons by human tissues, largely limits its effectiveness in clinical use. Here, we propose to take advantage of the strong scattering effect of the cranial tissues by applying an evenly distributed multiunit emitter array on the scalp to enhance the cerebral photon density while keeping each single emitter operating under the safe thermal limit. Employing the Monte Carlo method, we simulated the transcranial propagation of the array-emitted light and demonstrated markedly enhanced intracranial photon flux as well as improved uniformity of the photon distribution. These enhancements are correlated with the source location, density, and wavelength of the light. To the best of our knowledge, we present the first systematic analysis of the intracranial light field established by a scalp-applied multisource array, and we reveal a strategy for optimizing the therapeutic effects of NIR radiation.

  8. Markov chain Monte Carlo analysis to constrain dark matter properties with directional detection

    SciTech Connect

    Billard, J.; Mayet, F.; Santos, D.

    2011-04-01

    Directional detection is a promising dark matter search strategy. Indeed, weakly interacting massive particle (WIMP)-induced recoils would present a direction dependence toward the Cygnus constellation, while background-induced recoils exhibit an isotropic distribution in the Galactic rest frame. Taking advantage of these characteristic features, and even in the presence of a sizeable background, it has recently been shown that data from forthcoming directional detectors could lead either to a competitive exclusion or to a conclusive discovery, depending on the value of the WIMP-nucleon cross section. However, it is possible to further exploit these upcoming data by using the strong dependence of the WIMP signal on the WIMP mass and the local WIMP velocity distribution. Using a Markov chain Monte Carlo analysis of recoil events, we show for the first time the possibility of constraining the unknown WIMP parameters, both from particle physics (mass and cross section) and the Galactic halo (velocity dispersion along the three axes), leading to an identification of non-baryonic dark matter.
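
    The engine behind such an analysis is an MCMC sampler over the WIMP parameters. The sketch below shows the random-walk Metropolis-Hastings core on a toy three-parameter target (a Gaussian stand-in for the real recoil-event likelihood; all numbers are illustrative):

      import numpy as np

      rng = np.random.default_rng(1)

      def log_posterior(theta):
          # Toy stand-in for a WIMP posterior over (log10 mass, log10 cross
          # section, velocity dispersion): a Gaussian, purely to exercise the
          # sampler. A real analysis would evaluate the recoil-event likelihood.
          mu = np.array([2.0, -4.0, 220.0])
          prec = np.array([8.0, 4.0, 1e-3])        # diagonal precisions
          return -0.5 * np.sum(prec * (theta - mu) ** 2)

      def metropolis(logp, theta0, step, n_samples=50000):
          # Random-walk Metropolis-Hastings: propose a Gaussian step, accept
          # with probability min(1, posterior ratio).
          theta = np.asarray(theta0, dtype=float)
          lp = logp(theta)
          chain = np.empty((n_samples, theta.size))
          accepted = 0
          for i in range(n_samples):
              prop = theta + step * rng.standard_normal(theta.size)
              lp_prop = logp(prop)
              if np.log(rng.random()) < lp_prop - lp:
                  theta, lp = prop, lp_prop
                  accepted += 1
              chain[i] = theta
          return chain, accepted / n_samples

      chain, acc = metropolis(log_posterior, [1.0, -3.0, 200.0],
                              np.array([0.3, 0.4, 25.0]))
      burned = chain[10000:]                        # discard burn-in
      print(f"acceptance rate: {acc:.2f}")
      print("posterior means:", burned.mean(axis=0).round(2))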

  9. Error analysis and tolerance allocation for confocal scanning microscopy using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Yoo, Hongki; Kang, Dong-Kyun; Lee, SeungWoo; Lee, Junhee; Gweon, Dae-Gab

    2004-07-01

    Errors can cause serious loss of performance in a precision machine system. In this paper, we propose a method of allocating the alignment tolerances of the components and apply it to confocal scanning microscopy (CSM) to obtain the optimal tolerances. CSM uses a confocal aperture, which blocks out-of-focus information; it therefore provides images with superior resolution and has the unique property of optical sectioning. Owing to these properties, it has recently been widely used for measurement in biology, medical science, materials science, and the semiconductor industry. In general, tight tolerances are required to maintain the performance of a system, but a high cost of manufacturing and assembly is required to preserve them. The purpose of allocating optimal tolerances is to minimize cost while keeping the performance of the system. In the optimization problem, we set the performance requirements as constraints and maximized the tolerances. The Monte Carlo method, a statistical simulation method, is used in the tolerance analysis. Alignment tolerances of the optical components of the confocal scanning microscope are optimized to minimize cost while maintaining the observation performance of the microscope. The method can also be applied to other precision machine systems.
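
    In its simplest form, such a Monte Carlo tolerance analysis draws each component's misalignment from within a candidate tolerance band, evaluates the system performance, and records the fraction of virtual assemblies meeting the requirement. The sketch below uses a hypothetical quadratic performance penalty in place of the paper's real optical model; all tolerances and scale factors are invented:

      import numpy as np

      rng = np.random.default_rng(2)

      def strehl_proxy(tilts_rad, decenters_m):
          # Hypothetical performance metric: a Strehl-like score degrading
          # quadratically with component tilt and decenter (stand-in for the
          # ray-trace or diffraction results an optics code would provide).
          penalty = np.sum((tilts_rad / 1e-3) ** 2) + np.sum((decenters_m / 50e-6) ** 2)
          return np.exp(-penalty)

      def yield_for_tolerance(tilt_tol, decenter_tol, n_components=4,
                              n_trials=20000, requirement=0.8):
          # Monte Carlo tolerance analysis: sample each component's
          # misalignment uniformly within the candidate tolerance and
          # estimate the fraction of assemblies meeting the requirement.
          ok = 0
          for _ in range(n_trials):
              tilts = rng.uniform(-tilt_tol, tilt_tol, n_components)
              dec = rng.uniform(-decenter_tol, decenter_tol, n_components)
              ok += strehl_proxy(tilts, dec) >= requirement
          return ok / n_trials

      for tol_scale in (0.5, 1.0, 2.0):
          y = yield_for_tolerance(0.5e-3 * tol_scale, 25e-6 * tol_scale)
          print(f"tolerance x{tol_scale}: assembly yield = {y:.3f}")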

  10. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.

  11. Parametric analysis of intercellular ice propagation during cryosurgery, simulated using Monte Carlo techniques.

    PubMed

    Stott, Shannon L; Irimia, Daniel; Karlsson, Jens O M

    2004-04-01

    A microscale theoretical model of intracellular ice formation (IIF) in a heterogeneous tissue volume comprising a tumor mass and surrounding normal tissue is presented. Intracellular ice was assumed to form either by intercellular ice propagation or by processes that are not affected by the presence of ice in neighboring cells (e.g., nucleation or mechanical rupture). The effects of cryosurgery on a 2D tissue consisting of 10⁴ cells were simulated using a lattice Monte Carlo technique. A parametric analysis was performed to assess the specificity of IIF-related cell damage and to identify criteria for minimization of collateral damage to the healthy tissue peripheral to the tumor. Among the parameters investigated were the rates of interaction-independent IIF and intercellular ice propagation in the tumor and in the normal tissue, as well as the characteristic length scale of thermal gradients in the vicinity of the cryosurgical probe. Model predictions suggest gap junctional intercellular communication as a potential new target for adjuvant therapies complementing the cryosurgical procedure.
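
    The two freezing routes combine naturally on a lattice. Below is a minimal sketch of that kind of model on a 10⁴-cell grid, where a cell freezes spontaneously with probability p_spont or by propagation with probability p_prop per frozen neighbor; both rates and the periodic boundaries are illustrative assumptions, not the paper's parameters:

      import numpy as np

      rng = np.random.default_rng(3)

      def simulate_iif(n=100, p_spont=0.002, p_prop=0.05, n_steps=200):
          # Lattice Monte Carlo of intracellular ice formation: per step, a
          # cell freezes spontaneously with p_spont, or via each frozen
          # 4-neighbor with p_prop (np.roll gives periodic boundaries).
          frozen = np.zeros((n, n), dtype=bool)
          for _ in range(n_steps):
              f = frozen.astype(int)
              nb = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                    + np.roll(f, 1, 1) + np.roll(f, -1, 1))
              # independent chances: stay unfrozen only if every route fails
              p_freeze = 1 - (1 - p_spont) * (1 - p_prop) ** nb
              frozen |= rng.random((n, n)) < p_freeze
          return frozen

      frozen = simulate_iif()
      print(f"frozen fraction after 200 steps: {frozen.mean():.3f}")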

  12. Personalized Analysis by Validation of Monte Carlo for Application of Pathways in Cardioembolic Stroke

    PubMed Central

    Xing, Zhangmin; Luan, Bin; Zhao, Ruiying; Li, Zhanbiao; Sun, Guojian

    2017-01-01

    Background: Cardioembolic stroke (CES), which accounts for 20% of all ischemic strokes, is associated with high mortality. Previous studies suggest that pathways play a critical role in the identification and pathogenesis of diseases. We aimed to develop an integrated approach able to construct individual networks of pathway cross-talk to quantify differences between patients with CES and controls. Material/Methods: One biological data set (E-GEOD-58294) was used, including 23 normal controls and 59 CES samples. We used the individualized pathway aberrance score (iPAS) to assess pathway statistics for 589 Ingenuity Pathway Analysis (IPA) pathways. Random Forest (RF) classification was implemented to calculate the AUC of every network. These procedures were tested by Monte Carlo cross-validation over 50 bootstraps. Results: A total of 28 networks with AUC >0.9 were found between CES and controls. Among them, 3 networks with AUC=1.0 had the best classification performance across the 50 bootstraps. These 3 pathway networks significantly discriminated CES from controls, suggesting them as biomarkers in the regulation and development of CES. Conclusions: This novel approach identified 3 networks able to accurately classify CES and normal samples in individuals. This integrated application needs to be validated in other diseases. PMID:28232661

  13. Melanin and blood concentration in a human skin model studied by multiple regression analysis: assessment by Monte Carlo simulation.

    PubMed

    Shimada, M; Yamada, Y; Itoh, M; Yatagai, T

    2001-09-01

    Measurement of melanin and blood concentration in human skin is needed in the medical and the cosmetic fields because human skin colour is mainly determined by the colours of melanin and blood. It is difficult to measure these concentrations in human skin because skin has a multi-layered structure and scatters light strongly throughout the visible spectrum. The Monte Carlo simulation currently used for the analysis of skin colour requires long calculation times and knowledge of the specific optical properties of each skin layer. A regression analysis based on the modified Beer-Lambert law is presented as a method of measuring melanin and blood concentration in human skin in a shorter period of time and with fewer calculations. The accuracy of this method is assessed using Monte Carlo simulations.

  14. Melanin and blood concentration in a human skin model studied by multiple regression analysis: assessment by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Shimada, M.; Yamada, Y.; Itoh, M.; Yatagai, T.

    2001-09-01

    Measurement of melanin and blood concentration in human skin is needed in the medical and the cosmetic fields because human skin colour is mainly determined by the colours of melanin and blood. It is difficult to measure these concentrations in human skin because skin has a multi-layered structure and scatters light strongly throughout the visible spectrum. The Monte Carlo simulation currently used for the analysis of skin colour requires long calculation times and knowledge of the specific optical properties of each skin layer. A regression analysis based on the modified Beer-Lambert law is presented as a method of measuring melanin and blood concentration in human skin in a shorter period of time and with fewer calculations. The accuracy of this method is assessed using Monte Carlo simulations.
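
    The regression step can be illustrated in a few lines: under the modified Beer-Lambert law, absorbance at each wavelength is linear in the chromophore concentrations once the mean photon path lengths (the quantity a Monte Carlo simulation supplies) are known. All coefficients below are made-up placeholders, not skin-optics values:

      import numpy as np

      rng = np.random.default_rng(4)

      # Hypothetical extinction coefficients and mean photon path lengths at
      # four wavelengths; real values would come from the literature and from
      # Monte Carlo simulation of the layered skin model.
      wavelengths = np.array([500.0, 570.0, 650.0, 760.0])    # nm
      eps_melanin = np.array([1.8, 1.2, 0.7, 0.4])            # arbitrary units
      eps_blood   = np.array([2.5, 2.9, 0.5, 0.3])
      mean_path   = np.array([0.6, 0.7, 1.1, 1.4])            # mm, from MC

      def absorbance(c_mel, c_blood, scatter_offset=0.3):
          # Modified Beer-Lambert model: A = (eps_mel*c_mel + eps_blood*c_blood)
          # * <L(lambda)> + G, with G a wavelength-flat scattering loss term.
          return (eps_melanin * c_mel + eps_blood * c_blood) * mean_path + scatter_offset

      # Synthetic "measurement" with noise, then linear least-squares inversion.
      A_meas = absorbance(0.8, 0.4) + rng.normal(0, 0.01, wavelengths.size)
      X = np.column_stack([eps_melanin * mean_path,           # melanin column
                           eps_blood * mean_path,             # blood column
                           np.ones_like(mean_path)])          # offset column
      coef, *_ = np.linalg.lstsq(X, A_meas, rcond=None)
      print("estimated melanin, blood, offset:", coef.round(3))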

  15. A Refinement of Risk Analysis Procedures for Trichloroethylene Through the Use of Monte Carlo Method in Conjunction with Physiologically Based Pharmacokinetic Modeling

    DTIC Science & Technology

    1993-09-01

    This study refines risk analysis procedures for trichloroethylene (TCE) using a physiologically based pharmacokinetic (PBPK) model in conjunction... promulgate, and better present, more realistic standards.... Keywords: risk analysis, physiologically based pharmacokinetics (PBPK), trichloroethylene, Monte Carlo method.

  16. Monte Carlo simulations of GeoPET experiments: 3D images of tracer distributions (18F, 124I and 58Co) in Opalinus clay, anhydrite and quartz

    NASA Astrophysics Data System (ADS)

    Zakhnini, Abdelhamid; Kulenkampff, Johannes; Sauerzapf, Sophie; Pietrzyk, Uwe; Lippmann-Pipke, Johanna

    2013-08-01

    Understanding conservative fluid flow and reactive tracer transport in soils and rock formations requires quantitative transport visualization methods in 3D+t. After a decade of research and development we established GeoPET as a non-destructive method with unrivalled sensitivity and selectivity and adequate spatial and temporal resolution, by applying Positron Emission Tomography (PET), a nuclear medicine imaging method, to dense rock material. Requirements for reaching the physical limit of image resolution of nearly 1 mm are (a) a high-resolution PET camera, like our ClearPET scanner (Raytest), and (b) appropriate correction methods for scatter and attenuation of 511 keV photons, which are far more significant in dense geological material than in human and small-animal body tissue (water). Here we present data from Monte Carlo simulations (MCS) reflecting selected GeoPET experiments. The MCS consider all nuclear physical processes involved in measurement with the ClearPET system and allow us to quantify the sensitivity of the method and the scatter fractions in geological media as a function of material (quartz, Opalinus clay and anhydrite compared to water), PET isotope (18F, 58Co and 124I), and geometric system parameters. The synthetic data sets obtained by MCS are the basis for detailed performance assessment studies allowing for image quality improvements. A scatter correction method is applied exemplarily by subtracting projections of simulated scattered coincidences from experimental data sets prior to image reconstruction with an iterative reconstruction process.

  17. TH-C-12A-08: New Compact 10 MV S-Band Linear Accelerator: 3D Finite-Element Design and Monte Carlo Dose Simulations

    SciTech Connect

    Baillie, D; St Aubin, J; Fallone, B; Steciw, S

    2014-06-15

    Purpose: To design a new compact S-band linac waveguide capable of producing a 10 MV x-ray beam while maintaining the length (27.5 cm) of current 6 MV waveguides. This will allow higher x-ray energies to be used in our linac-MRI systems with the same footprint. Methods: The finite element software COMSOL Multiphysics was used to design an accelerator cavity matching one published in an experimental breakdown study, to ensure that our modeled cavities do not exceed the published threshold electric fields. This cavity was used as the basis for designing an accelerator waveguide, where each cavity of the full waveguide was tuned to resonate at 2.997 GHz by adjusting the cavity diameter. The RF field solution within the waveguide was calculated and, together with an electron-gun phase space generated using Opera3D/SCALA, was input into the electron tracking software PARMELA to compute the electron phase space striking the x-ray target. This target phase space was then used in BEAM Monte Carlo simulations to generate percent depth dose curves for the new linac, which were then used to re-optimize the waveguide geometry. Results: The shunt impedance, Q-factor, and peak-to-mean electric field ratio were matched to those published for the breakdown study to within 0.1% error. After tuning the full waveguide, the peak surface fields are calculated to be 207 MV/m, 13% below the breakdown threshold. The simulated beam has a d-max depth of 2.42 cm and a D10/20 value of 1.59, compared with 2.45 cm and 1.59, respectively, for a simulated Varian 10 MV linac, with a bremsstrahlung production efficiency 20% lower than the Varian linac. Conclusion: This work demonstrates the design of a functional 27.5 cm waveguide producing 10 MV photons with characteristics similar to a Varian 10 MV linac.

  18. A comparison of Bayesian and Monte Carlo sensitivity analysis for unmeasured confounding.

    PubMed

    McCandless, Lawrence C; Gustafson, Paul

    2017-04-06

    Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes' theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results; both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g., 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes' theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis.
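
    The mechanical difference is easy to see in code. An MCSA run simply draws the bias parameter from its prior and corrects the observed estimate, so the data never update the bias distribution; BSA would instead sample the joint posterior. The sketch below uses invented numbers for the observed odds ratio, its standard error, and the prior:

      import numpy as np

      rng = np.random.default_rng(5)

      or_observed = 2.0          # hypothetical crude odds ratio from the data
      se_log_or = 0.15           # its standard error (also hypothetical)

      def mcsa(n_draws=50000):
          # Monte Carlo sensitivity analysis: sample the confounding bias
          # straight from the prior (here on the log-odds-ratio scale),
          # subtract it from the observed estimate, add sampling error, and
          # summarize. Nothing here conditions the bias prior on the data.
          bias = rng.normal(loc=0.3, scale=0.2, size=n_draws)       # assumed prior
          sampling = rng.normal(0.0, se_log_or, size=n_draws)
          log_or_adj = np.log(or_observed) + sampling - bias
          return np.exp(np.percentile(log_or_adj, [2.5, 50, 97.5]))

      lo, med, hi = mcsa()
      print(f"MCSA-adjusted OR: {med:.2f} (95% interval {lo:.2f}-{hi:.2f})")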

  19. Modeling the impact of prostate edema on LDR brachytherapy: a Monte Carlo dosimetry study based on a 3D biphasic finite element biomechanical model

    NASA Astrophysics Data System (ADS)

    Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.

    2017-03-01

    Prostate volume changes due to edema occurring during transperineal permanent brachytherapy should be taken into consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations; patient-specific models therefore need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic-area tissues in the evolution of prostate edema. The model's computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model to improve patient-specific dosimetry by accounting for the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated against previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. Introducing the edema dynamics into Day30 dosimetry reveals a significant global dose overestimation in the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy by accounting for post-implant dose alterations during the planning procedure.

  20. Shielding analysis of proton therapy accelerators: a demonstration using Monte Carlo-generated source terms and attenuation lengths.

    PubMed

    Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng

    2015-05-01

    Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on the point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead), was generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and proper use of the data set were demonstrated through a practical case study, in which a shielding analysis of a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates of the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to quick shielding design and dose evaluation for proton therapy accelerators.
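
    The point-source line-of-sight model itself reduces to one formula, H = H0(theta) * exp(-rho*d/lambda(theta)) / r^2, interpolating Monte Carlo-generated source terms H0 and attenuation lengths lambda over emission angle. The sketch below applies it with made-up concrete data; none of the tabulated numbers are from the paper's data set:

      import numpy as np

      # Hypothetical per-proton source terms H0 (Sv*m^2) and attenuation
      # lengths lambda (g/cm^2) for concrete at a few emission angles; the
      # real values would be read from the Monte Carlo-generated tables.
      angles_deg = np.array([0.0, 30.0, 60.0, 90.0])
      H0 = np.array([2.0e-15, 8.0e-16, 2.5e-16, 9.0e-17])
      lam = np.array([120.0, 105.0, 90.0, 80.0])

      def transmitted_dose(angle_deg, r_m, d_cm, rho=2.3):
          # Point-source line-of-sight model: interpolate the tabulated
          # source term and attenuation length at this angle, attenuate
          # exponentially through the shield, and apply 1/r^2 spreading.
          h0 = np.interp(angle_deg, angles_deg, H0)
          lam_t = np.interp(angle_deg, angles_deg, lam)
          return h0 * np.exp(-rho * d_cm / lam_t) / r_m**2

      # Dose rate behind a 2 m concrete wall, 5 m from the target, at 30
      # degrees, for an assumed 1e12 protons/s (all numbers illustrative).
      print(f"{1e12 * transmitted_dose(30.0, 5.0, 200.0):.2e} Sv/s")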

  1. Identification of Thyroid Receptor Ant/Agonists in Water Sources Using Mass Balance Analysis and Monte Carlo Simulation

    PubMed Central

    Shi, Wei; Wei, Si; Hu, Xin-xin; Hu, Guan-jiu; Chen, Cu-lan; Wang, Xin-ru; Giesy, John P.; Yu, Hong-xia

    2013-01-01

    Some synthetic chemicals that have been shown to disrupt thyroid hormone (TH) function have been detected in surface waters, and people have the potential to be exposed through drinking water. Here, the presence of thyroid-active chemicals and their toxic potential in drinking water sources in the Yangtze River Delta were investigated by instrumental analysis combined with cell-based reporter gene assays. A novel approach using Monte Carlo simulation was developed to evaluate the potential risks of measured concentrations of TH agonists and antagonists and to determine the major contributors to observed thyroid receptor (TR) antagonist potency. None of the extracts exhibited TR agonist potency, while 12 of 14 water samples exhibited TR antagonist potency. The most probable observed antagonist equivalents ranged from 1.4 to 5.6 µg di-n-butyl phthalate (DNBP)/L, which posed a potential risk in water sources. Based on mass balance analysis with Monte Carlo simulation, DNBP accounted for 64.4% of the entire observed antagonist toxic unit in water sources, while diisobutyl phthalate (DIBP), di-n-octyl phthalate (DNOP) and di-2-ethylhexyl phthalate (DEHP) also contributed. The most probable observed equivalent and most probable relative potency (REP) derived from Monte Carlo simulation are useful for potency comparison and for screening of responsible chemicals. PMID:24204563

  2. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic-level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as the degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice-based models. This allows for an efficient evaluation even in critical regions near a second-order phase transition, which are otherwise difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  3. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis.

    PubMed

    Hoffmann, Max J; Engelmann, Felix; Matera, Sebastian

    2017-01-28

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic-level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as the degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice-based models. This allows for an efficient evaluation even in critical regions near a second-order phase transition, which are otherwise difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  4. A Monte Carlo approach to Beryllium-7 solar neutrino analysis with KamLAND

    NASA Astrophysics Data System (ADS)

    Grant, Christopher Peter

    Terrestrial measurements of neutrinos produced by the Sun have been of great interest for over half a century because of their ability to test the accuracy of solar models. The first solar neutrinos detected with KamLAND provided a measurement of the 8B solar neutrino interaction rate above an analysis threshold of 5.5 MeV. This work describes efforts to extend KamLAND's detection sensitivity to solar neutrinos below 1 MeV, more specifically, those produced with an energy of 0.862 MeV from the 7Be electron-capture decay. Many of the difficulties in measuring solar neutrinos below 1 MeV arise from backgrounds caused abundantly by both naturally occurring and man-made radioactive nuclides. The primary nuclides of concern were 210Bi, 85Kr, and 39Ar. Since May of 2007, the KamLAND experiment has undergone two separate purification campaigns. During both campaigns a total of 5.4 ktons (about 6440 m³) of scintillator was circulated through a purification system, which utilized fractional distillation and nitrogen purging. After the purification campaign, reduction factors of 1.5 × 10³ for 210Bi and 6.5 × 10⁴ for 85Kr were observed. The reduction of the backgrounds provided a unique opportunity to observe the 7Be solar neutrino rate in KamLAND. An observation required detailed knowledge of the detector response at low energies; to accomplish this, a full detector Monte Carlo simulation, called KLG4sim, was utilized. The optical model of the simulation was tuned to match the detector response observed in data after purification, and the software was optimized for the simulation of internal backgrounds used in the 7Be solar neutrino analysis. The results of this tuning and estimates from simulations of the internal backgrounds and external backgrounds caused by radioactivity on the detector components are presented. The first KamLAND analysis based on Monte Carlo simulations in the energy region below 2 MeV is shown here. The comparison of the χ² between the null

  5. Investing in a robotic milking system: a Monte Carlo simulation analysis.

    PubMed

    Hyde, J; Engel, P

    2002-09-01

    This paper uses Monte Carlo simulation methods to estimate the breakeven value for a robotic milking system (RMS) on a dairy farm in the United States. The breakeven value indicates the maximum amount that could be paid for the robots given the costs of alternative milking equipment and other important factors (e.g., milk yields, prices, length of useful life of technologies). The analysis simulates several scenarios under three herd sizes, 60, 120, and 180 cows. The base-case results indicate that the mean breakeven values are $192,056, $374,538, and $553,671 for each of the three progressively larger herd sizes. These must be compared to the per-unit RMS cost (about $125,000 to $150,000) and the cost of any construction or installation of other equipment that accompanies the RMS. Sensitivity analysis shows that each additional dollar spent on milking labor in the parlor increases the breakeven value by $4.10 to $4.30. Each dollar increase in parlor costs increases the breakeven value by $0.45 to $0.56. Also, each additional kilogram of initial milk production (under a 2x system in the parlor) decreases the breakeven by $9.91 to $10.64. Finally, each additional year of useful life for the RMS increases the per-unit breakeven by about $16,000 while increasing the life of the parlor by 1 yr decreases the breakeven value by between $5,000 and $6,000.
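
    The structure of such a breakeven simulation is simple: draw the uncertain inputs, convert the annual cost difference the robots would avoid into a present value over the system's useful life, and read the breakeven distribution off the trials. All distributions and dollar figures below are invented placeholders, not the paper's inputs:

      import numpy as np

      rng = np.random.default_rng(6)

      def breakeven_rms(n_trials=100000):
          # Monte Carlo sketch of an RMS breakeven calculation: the breakeven
          # robot price equals the present value of the annual costs the
          # robots avoid (parlor equipment and milking labor) plus any milk
          # revenue change. Every input distribution here is illustrative.
          years = rng.integers(8, 13, n_trials)              # useful life, yr
          rate = 0.06                                        # discount rate
          labor = rng.normal(45000, 8000, n_trials)          # annual labor saved, $
          parlor = rng.normal(12000, 2000, n_trials)         # annualized parlor cost avoided, $
          milk = rng.normal(0, 5000, n_trials)               # annual milk revenue change, $
          annuity = (1 - (1 + rate) ** -years) / rate        # PV factor per annual $
          return (labor + parlor + milk) * annuity

      be = breakeven_rms()
      print(f"mean breakeven ${be.mean():,.0f}, 5th-95th percentile "
            f"${np.percentile(be, 5):,.0f}-${np.percentile(be, 95):,.0f}")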

  6. Hierarchical Monte Carlo modeling with S-distributions: Concepts and illustrative analysis of mercury contamination in King Mackerel

    SciTech Connect

    Voit, E.O.; Balthis, W.L.; Holser, R.A.

    1995-12-31

    The quantitative assessment of environmental contaminants is a complex process. It involves nonlinear models and the characterization of variables, factors, and parameters that are distributed and dependent on each other. Assessments based on point estimates are easy to perform, but since they are unreliable, Monte Carlo simulations have become a standard procedure. Simulations pose two challenges: They require the numerical characterization of parameter distributions and they do not account for dependencies between parameters. This paper offers strategies for dealing with both challenges. The first part discusses the characterization of data with the S-distribution. This distribution offers several advantages, which include simplicity of numerical analysis, flexibility in shape, and easy computation of quantiles. The second part outlines how the S-distribution can be used for hierarchical Monte Carlo simulations. In these simulations the selection of parameter values occurs sequentially, and each choice depends on the parameter values selected before. The method is illustrated with preliminary simulation analyses that are concerned with mercury contamination in king mackerel (Scomberomorus cavalla). It is demonstrated that the results of such hierarchical simulations are generally different from those of traditional Monte Carlo simulations.

  7. Spray cooling simulation implementing time scale analysis and the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Kreitzer, Paul Joseph

    Spray cooling research is advancing the field of heat transfer and heat rejection in high power electronics. Smaller and more capable electronics packages are producing higher amounts of waste heat, along with smaller external surface areas, and the use of active cooling is becoming a necessity. Spray cooling has shown extremely high levels of heat rejection, of up to 1000 W/cm² using water. Simulations of spray cooling are becoming more realistic, but this comes at a price. A previous researcher used CFD to successfully model a single 3D droplet impact into a liquid film using the level set method; however, the complicated multiphysics occurring during spray impingement and surface interactions increases computation time to more than 30 days. Parallel processing on a 32-processor system has reduced this time tremendously, but still requires more than a day. The present work uses experimental and computational results in addition to numerical correlations representing the physics occurring on a heated impingement surface. The current model represents the spray behavior of a Spraying Systems FullJet 1/8-g spray nozzle. Typical spray characteristics are as follows: flow rate of 1.05 × 10⁻⁵ m³/s, normal droplet velocity of 12 m/s, droplet Sauter mean diameter of 48 μm, and heat flux values ranging from approximately 50-100 W/cm². This produces non-dimensional numbers of We 300-1350, Re 750-3500, and Oh 0.01-0.025. Numerical and experimental correlations have been identified representing crater formation, splashing, film thickness, droplet size, and spatial flux distributions. A combination of these methods has resulted in a Monte Carlo spray impingement simulation model capable of simulating hundreds of thousands of droplet impingements, or approximately one millisecond. A random sequence of droplet impingement locations and diameters is generated, with the proper radial spatial distribution and diameter distribution. Hence the impingement, lifetime

  8. Methods for modeling non-equilibrium degenerate statistics and quantum-confined scattering in 3D ensemble Monte Carlo transport simulations

    NASA Astrophysics Data System (ADS)

    Crum, Dax M.; Valsaraj, Amithraj; David, John K.; Register, Leonard F.; Banerjee, Sanjay K.

    2016-12-01

    Particle-based ensemble semi-classical Monte Carlo (MC) methods employ quantum corrections (QCs) to address quantum confinement and degenerate carrier populations in order to model tomorrow's ultra-scaled metal-oxide-semiconductor field-effect transistors. Here, we present the most complete treatment of quantum confinement and carrier degeneracy effects in a three-dimensional (3D) MC device simulator to date, and illustrate their significance through simulation of n-channel Si and III-V FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as consideration of quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as functions of energy and energy valley. This process is aided by the use of fractional carriers or sub-carriers, which minimizes the classical carrier-carrier scattering intrinsically incompatible with degenerate statistics. Quantum-confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as commonly done. However, we use these valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. FinFET simulations are used to illustrate the contributions of each of these QCs. Collectively, these quantum effects can substantially reduce and even eliminate otherwise expected benefits of the considered In0.53Ga0.47As FinFETs over otherwise identical

  9. PROMSAR: A backward Monte Carlo spherical RTM for the analysis of DOAS remote sensing measurements

    NASA Astrophysics Data System (ADS)

    Palazzi, E.; Petritoli, A.; Giovanelli, G.; Kostadinov, I.; Bortoli, D.; Ravegnani, F.; Sackey, S. S.

    A correct interpretation of diffuse solar radiation measurements made by Differential Optical Absorption Spectroscopy (DOAS) remote sensors requires the use of radiative transfer models of the atmosphere. The simplest models treat radiation scattering in the atmosphere as a single-scattering process. More realistic atmospheric models consider multiple scattering; their application is useful and essential for the analysis of zenith and off-axis measurements of the lowest layers of the atmosphere, such as the boundary layer, which are characterized by the highest values of air density and quantities of particles and aerosols acting as scattering nuclei. A new atmospheric model, PROcessing of Multi-Scattered Atmospheric Radiation (PROMSAR), which includes multiple Rayleigh and Mie scattering, has recently been developed at ISAC-CNR. It is based on a backward Monte Carlo technique, which is well suited to studying the various interactions taking place in a complex and non-homogeneous system like the terrestrial atmosphere. The PROMSAR code calculates the mean path of the radiation within each layer into which the atmosphere is subdivided, taking into account the large variety of processes that solar radiation undergoes during propagation through the atmosphere. This quantity is then employed to work out the Air Mass Factor (AMF) of several trace gases, to simulate their slant column amounts in zenith and off-axis configurations, and to calculate the weighting functions from which information about the vertical distribution of a gas is obtained using inversion methods. Results from the model, simulations, and comparisons with actual slant column measurements are presented and discussed.

  10. Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model

    NASA Astrophysics Data System (ADS)

    Prakash, Shashi; Kumar, Nitish; Kumar, Subrata

    2016-09-01

    CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly(methyl methacrylate)). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and acceptable surface finish on the microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. When fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. A few analytical models are available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. A number of variants of transparent PMMA are available on the market with different values of these thermophysical properties, so for applying such analytical models the values of these properties must be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the values of the thermophysical properties of PMMA. The unavailability of exact values of these property parameters restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty at different powers and scanning speeds has been predicted, and the relative impact of each thermophysical property has been determined using sensitivity analysis.
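
    A sketch of that style of propagation is shown below. The depth expression is a generic energy-balance stand-in (not the specific analytical model used in the paper), and the nominal property values and coefficients of variation are illustrative:

      import numpy as np

      rng = np.random.default_rng(8)

      def channel_depth(P, v, w, rho, cp, Lv, dT):
          # Assumed energy-balance depth model: laser power P removes a
          # channel of width w at scan speed v by heating the material
          # through dT and supplying the vaporization enthalpy Lv.
          return P / (rho * v * w * (cp * dT + Lv))

      def depth_uncertainty(P=20.0, v=0.1, w=200e-6, n=100000):
          # Monte Carlo propagation: sample the PMMA thermophysical
          # properties with assumed coefficients of variation and report
          # the spread of the predicted maximum depth.
          rho = rng.normal(1180.0, 0.02 * 1180.0, n)    # kg/m^3, CoV 2%
          cp = rng.normal(1466.0, 0.05 * 1466.0, n)     # J/(kg K), CoV 5%
          Lv = rng.normal(1.0e6, 0.10 * 1.0e6, n)       # J/kg, CoV 10%
          d = channel_depth(P, v, w, rho, cp, Lv, dT=360.0)
          return d.mean(), d.std()

      mean_d, sd_d = depth_uncertainty()
      print(f"depth = {1e6 * mean_d:.0f} um +/- {1e6 * sd_d:.0f} um (1 sigma)")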

  11. Monte Carlo analysis of two-photon fluorescence imaging through a scattering medium.

    PubMed

    Blanca, C M; Saloma, C

    1998-12-01

    The behavior of two-photon fluorescence imaging through a scattering medium is analyzed by use of the Monte Carlo technique. The axial and transverse distributions of the excitation photons in the focused Gaussian beam are derived for both isotropic and anisotropic scatterers at different numerical apertures and at various ratios of the scattering depth with the mean free path. The two-photon fluorescence profiles of the sample are determined from the square of the normalized excitation intensity distributions. For the same lens aperture and scattering medium, two-photon fluorescence imaging offers a sharper and less aberrated axial response than that of single-photon confocal fluorescence imaging. The contrast in the corresponding transverse fluorescence profile is also significantly higher. Also presented are results comparing the effects of isotropic and anisotropic scattering media in confocal reflection imaging. The convergence properties of the Monte Carlo simulation are also discussed.

  12. Final Technical Report - Large Deviation Methods for the Analysis and Design of Monte Carlo Schemes in Physics and Chemistry - DE-SC0002413

    SciTech Connect

    Dupuis, Paul

    2014-03-14

    This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.

  13. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    SciTech Connect

    Ivanov, A.; Sanchez, V.; Hoogenboom, J. E.

    2012-07-01

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  14. Monte Carlo investigation of the increased radiation deposition due to gold nanoparticles using kilovoltage and megavoltage photons in a 3D randomized cell model

    SciTech Connect

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2013-07-15

    Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4: a single 400 nm diameter gold cluster randomly positioned in the cytoplasm, and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNPs on the dose deposition in five modeled regions of the cell, including cytoplasm, membrane, and nucleus, was simulated. Two Geant4 physics lists were tested: the default Livermore and a custom-built Livermore/DNA hybrid physics list. 10⁶ particles were simulated in a model of 840 cells. Each cell was placed randomly, with random orientation and a diameter varying between 9 and 13 μm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra, 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNPs was calculated using the Monte Carlo model. The model demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced statistically significant (p < 0.05) changes in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum. Based on simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p

  15. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    USGS Publications Warehouse

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of

  16. Reassessing benzene risks using internal doses and Monte-Carlo uncertainty analysis.

    PubMed Central

    Cox, L A

    1996-01-01

    Human cancer risks from benzene have been estimated from epidemiological data, with supporting evidence from animal bioassay data. This article reexamines the animal-based risk assessments using physiologically based pharmacokinetic (PBPK) models of benzene metabolism in animals and humans. Internal doses (total benzene metabolites) from oral gavage experiments in mice are well predicted by the PBPK model. Both the data and the PBPK model outputs are also well described by a simple nonlinear (Michaelis-Menten) regression model, as previously used by Bailer and Hoel [Metabolite-based internal doses used in risk assessment of benzene. Environ Health Perspect 82:177-184 (1989)]. Refitting the multistage model family to internal doses changes the maximum-likelihood estimate (MLE) dose-response curve for mice from linear-quadratic to purely cubic, so that low-dose risk estimates are smaller than in previous risk assessments. In contrast to Bailer and Hoel's findings using interspecies dose conversion, the use of internal dose estimates for humans from a PBPK model reduces estimated human risks at low doses. Sensitivity analyses suggest that the finding of a nonlinear MLE dose-response curve at low doses is robust to changes in internal dose definitions and more consistent with epidemiological data than earlier risk models. A Monte-Carlo uncertainty analysis based on maximum-entropy probabilities and Bayesian conditioning is used to develop an entire probability distribution for the true but unknown dose-response function. This allows the probability of a positive low-dose slope to be quantified: It is about 10%. An upper 95% confidence limit on the low-dose slope of excess risk is also obtained directly from the posterior distribution and is similar to previous q1* values. This approach suggests that the excess risk due to benzene exposure may be nonexistent (or even negative) at sufficiently low doses. Two types of biological information about benzene effects

  17. Comparison of Bootstrapping and Markov Chain Monte Carlo for Copula Analysis of Hydrological Droughts

    NASA Astrophysics Data System (ADS)

    Yang, P.; Ng, T. L.; Yang, W.

    2015-12-01

    Effective water resources management depends on reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, comprehensive assessment of the parameter uncertainties of drought copulas has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought-related variables; (2) the resulting joint functions are fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC
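
    The bootstrapping side of such a comparison takes only a few lines. The sketch below builds a nonparametric bootstrap CI for the parameters of a lognormal marginal fitted to synthetic drought severities with known truth (the sample size and parameter values are arbitrary); the MCMC side would replace the resampling loop with a posterior sampler such as the Metropolis routine sketched earlier in these records:

      import numpy as np

      rng = np.random.default_rng(9)

      # Synthetic "drought severity" sample with known true parameters,
      # mirroring the study's use of simulated events.
      true_mu, true_sigma, n_events = 1.0, 0.5, 50
      data = rng.lognormal(true_mu, true_sigma, n_events)

      def bootstrap_ci(data, n_boot=5000, alpha=0.05):
          # Nonparametric bootstrap: resample the events with replacement,
          # refit by maximum likelihood (sample mean/sd of the logs), and
          # take percentiles of the refitted parameters as the CI.
          logs = np.log(data)
          est = np.empty((n_boot, 2))
          for b in range(n_boot):
              s = rng.choice(logs, size=logs.size, replace=True)
              est[b] = s.mean(), s.std(ddof=1)
          lo, hi = np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
          return lo, hi

      lo, hi = bootstrap_ci(data)
      print(f"mu    95% CI: [{lo[0]:.3f}, {hi[0]:.3f}]  (true {true_mu})")
      print(f"sigma 95% CI: [{lo[1]:.3f}, {hi[1]:.3f}]  (true {true_sigma})")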

  18. Image guided radiation therapy applications for head and neck, prostate, and breast cancers using 3D ultrasound imaging and Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Fraser, Danielle

    In radiation therapy an uncertainty in the delivered dose always exists because anatomic changes are unpredictable and patient specific. Image guided radiation therapy (IGRT) relies on imaging in the treatment room to monitor the tumour and surrounding tissue to ensure their prescribed position in the radiation beam. The goal of this thesis was to determine the dosimetric impact of a misaligned radiation therapy target for three cancer sites due to common setup errors: organ motion, tumour tissue deformation, changes in body habitus, and treatment planning errors. For this purpose, a novel 3D ultrasound system (Restitu, Resonant Medical, Inc.) was used to acquire a reference image of the target in the computed tomography simulation room at the time of treatment planning, to acquire daily images in the treatment room at the time of treatment delivery, and to compare the daily images to the reference image. The measured differences in position and volume between daily and reference geometries were incorporated into Monte Carlo (MC) dose calculations. The EGSnrc (National Research Council, Canada) family of codes was used to model Varian linear accelerators and patient-specific beam parameters, as well as to estimate the dose to the target and organs at risk under several different scenarios. After validating the necessity of MC dose calculations in the pelvic region, the impact of interfraction prostate motion, and subsequent patient realignment under the treatment beams, on the delivered dose was investigated. For 32 patients it is demonstrated that, using 3D conformal radiation therapy techniques and a 7 mm margin, the prescribed dose to the prostate, rectum, and bladder is recovered within 0.5% of that planned when patient setup is corrected for prostate motion, despite the beams interacting with a new external surface and internal tissue boundaries. In collaboration with the manufacturer, the ultrasound system was adapted from transabdominal imaging to neck

  19. A Monte Carlo analysis of health risks from PCB-contaminated mineral oil transformer fires.

    PubMed

    Eschenroeder, A Q; Faeder, E J

    1988-06-01

    The objective of this study is the estimation of health hazards due to the inhalation of combustion products from accidental mineral oil transformer fires. Calculations of production, dispersion, and subsequent human intake of polychlorinated dibenzofurans (PCDFs) provide us with exposure estimates. PCDFs are believed to be the principal toxic products of the pyrolysis of polychlorinated biphenyls (PCBs) sometimes found as contaminants in transformer mineral oil. Cancer burdens and birth defect hazard indices are estimated from population data and exposure statistics. Monte Carlo-derived variational factors emphasize the statistics of uncertainty in the estimates of risk parameters. Community health issues are addressed and risks are found to be insignificant.

  20. Monte Carlo analysis of lobular gas-surface scattering in tubes applied to thermal transpiration

    NASA Technical Reports Server (NTRS)

    Smith, J. D.; Raquet, C. A.

    1972-01-01

    A model of rarefied gas flow in tubes was developed which combines a lobular distribution with diffuse reflection at the wall. The model with Monte Carlo techniques was used to explain previously observed deviations in the free molecular thermal transpiration ratio which suggest molecules can have a greater tube transmission probability in a hot-to-cold direction than in a cold-to-hot direction. The model yields correct magnitudes of transmission probability ratios for helium in Pyrex tubing (1.09 to 1.14), and some effects of wall-temperature distribution, tube surface roughness, tube dimensions, gas temperature, and gas molecular mass.

  1. Comparison of marker types and map assumptions using Markov chain Monte Carlo-based linkage analysis of COGA data.

    PubMed

    Sieh, Weiva; Basu, Saonli; Fu, Audrey Q; Rothstein, Joseph H; Scheet, Paul A; Stewart, William C L; Sung, Yun J; Thompson, Elizabeth A; Wijsman, Ellen M

    2005-12-30

    We performed multipoint linkage analysis of the electrophysiological trait ECB21 on chromosome 4 in the full pedigrees provided by the Collaborative Study on the Genetics of Alcoholism (COGA). Three Markov chain Monte Carlo (MCMC)-based approaches were applied to the provided and re-estimated genetic maps and to five different marker panels consisting of microsatellite (STRP) and/or SNP markers at various densities. We found evidence of linkage near the GABRB1 STRP using all methods, maps, and marker panels. Difficulties encountered with SNP panels included convergence problems and demanding computations.

  2. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    SciTech Connect

    McGraw, David; Hershey, Ronald L.

    2016-06-01

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry, along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
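
    The pattern described (assign distributions, sample, evaluate the ensemble, then perturb one constituent at a time) can be sketched generically. The travel-time function below is a made-up linear stand-in for the NETPATH calculation, and the nominal values and coefficients of variation are invented:

      import numpy as np

      rng = np.random.default_rng(10)

      def travel_time(params):
          # Hypothetical stand-in for a NETPATH-style calculation mapping
          # geochemical inputs (mixing fraction, calcite dissolved,
          # fractionation factor) to a carbon-14 travel time; the real
          # model is far more involved.
          mix, calcite, frac = params
          return 8000.0 * mix + 1500.0 * calcite - 400.0 * frac

      nominal = np.array([0.6, 1.2, 0.8])
      cov = np.array([0.10, 0.25, 0.05])       # assumed coefficients of variation

      # Monte Carlo uncertainty: perturb all inputs at once and evaluate.
      draws = nominal * (1 + cov * rng.standard_normal((20000, 3)))
      tt = np.apply_along_axis(travel_time, 1, draws)
      print(f"travel time: {tt.mean():.0f} +/- {tt.std():.0f} yr")

      # One-at-a-time sensitivity, standardized so inputs are comparable:
      # shift each input by one standard deviation, holding the rest nominal.
      for i, name in enumerate(["mixing", "calcite", "fractionation"]):
          p = nominal.copy()
          p[i] *= 1 + cov[i]
          print(f"sensitivity to {name}: {travel_time(p) - travel_time(nominal):+.0f} yr")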

  3. GUINEVERE experiment: Kinetic analysis of some reactivity measurement methods by deterministic and Monte Carlo codes

    SciTech Connect

    Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.

    2012-07-01

    The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment, a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation, by deterministic (the French code ERANOS) and Monte Carlo (the US code MCNPX) calculations, of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The reactivity inferred by the Area-ratio method, in dollar units, shows overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)

  4. Analysis of Correlated Coupling of Monte Carlo Forward and Adjoint Histories

    SciTech Connect

    Ueki, Taro; Hoogenboom, J.E.; Kloosterman, J. L.

    2001-02-15

    In Monte Carlo correlated coupling, forward and adjoint particle histories are initiated in exactly opposite directions at an arbitrarily placed surface between a physical source and a physical detector. It is shown that this coupling calculation can become more efficient than standard forward calculations. In many cases, the basic form of correlated coupling is less efficient than standard forward calculations. This inherent inefficiency can be overcome by applying a black absorber perturbation to either the forward or the adjoint problem and by processing the product of batch averages as one statistical entity. The usage of the black absorber is based on the invariance of the response flow integral with a material perturbation in either the physical detector side volume in the forward problem or the physical source side volume in the adjoint problem. The batch-average product processing makes use of a quadratic increase of the nonzero coupled-score probability. All the developments have been done in such a way that improved efficiency schemes available in widely distributed Monte Carlo codes can be applied to both the forward and adjoint simulations. Also, the physical meaning of the black absorber perturbation is interpreted based on surface crossing and is numerically validated. In addition, the immediate reflection at the intermediate surface with a controlled direction change is investigated within the invariance framework. This approach can be advantageous for a void streaming problem.

  5. Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, S.K.; Blattnig, S.R.; Norbury, J.W.; Singleterry, R.C.

    2009-01-01

    Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and on the planned manned missions to the Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR), composed of protons and heavier nuclei, have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.

  6. Analysis and modeling of localized heat generation by tumor-targeted nanoparticles (Monte Carlo methods)

    NASA Astrophysics Data System (ADS)

    Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan

    2016-04-01

    We investigated tumor-targeted nanoparticles that influence heat generation. We assume that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (mean cosine of the scattering angle), the heat generated in the tumor area reaches a level critical enough to burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of photons incident on the targeted tissue are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical approach by which photon trajectories can be accurately tracked through the simulation area.
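
    A minimal sketch of the photon random-walk ingredient of such a simulation, with illustrative (not the paper's) optical coefficients and Henyey-Greenstein sampling of the asymmetry factor g:

      import numpy as np

      rng = np.random.default_rng(7)

      # Illustrative tissue optical properties (per mm); not taken from the paper.
      mu_a, mu_s, g = 0.5, 10.0, 0.9
      mu_t, albedo = mu_a + mu_s, mu_s / (mu_a + mu_s)

      def hg_cosine(g):
          """Sample a scattering-angle cosine from the Henyey-Greenstein phase function."""
          s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
          return (1.0 + g * g - s * s) / (2.0 * g)

      absorbed = np.zeros(100)                      # absorbed energy in 0.1 mm depth bins
      for _ in range(2000):
          z, uz, w = 0.0, 1.0, 1.0                  # depth (mm), direction cosine, weight
          while w > 1e-3:
              z += uz * (-np.log(rng.random()) / mu_t)   # free path to next interaction
              if not 0.0 <= z < 10.0:
                  break                              # photon escaped the slab
              absorbed[int(z * 10)] += w * (1.0 - albedo)
              w *= albedo                            # implicit capture (no Russian roulette)
              ct = hg_cosine(g)                      # deflection; update the depth direction cosine
              uz = uz * ct - np.sqrt(max(0.0, (1 - uz * uz) * (1 - ct * ct))) * np.cos(2 * np.pi * rng.random())
      print(absorbed[:5] / absorbed.sum())           # near-surface absorption dominates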

  7. Effect of the T-gate on the performance of recessed HEMTs. A Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Mateos, Javier; González, Tomás; Pardo, Daniel; Hoel, Virginie; Cappy, Alain

    1999-09-01

    A microscopic study of 0.1 µm recessed-gate δ-doped AlInAs/GaInAs HEMTs has been performed by using a semiclassical Monte Carlo device simulation. The geometry and layer structure of the simulated HEMT are completely realistic, including the recessed gate and the δ-doping configuration. The usual T-gate technology is used to improve the device characteristics by reducing the gate resistance. For the first time we take into account in the Monte Carlo simulations the effect of the T-gate and of the dielectric used to passivate the device surface, which considerably affects the electric field distribution inside the device. The measured Id-Vds characteristics of a real device compare favourably with the simulation results. When comparing the complete simulation with the case in which the Poisson equation is solved only inside the semiconductor, we find that even if the static I-V characteristics remain practically unchanged, important differences appear in the dynamic and noise behaviour, reflecting the influence of an additional capacitance.

  8. Excited Rotational States in Doped ⁴He Clusters: a Diffusion Monte Carlo Analysis

    NASA Astrophysics Data System (ADS)

    Coccia, Emanuele

    2017-03-01

    We report an extension of diffusion Monte Carlo (DMC) to the calculation of the molecular rotational energies by means of the generalized, symmetry-adapted, imaginary-time correlation functions (SAITCFs) originally introduced in the reptation quantum Monte Carlo (RQMC) framework (Škrbić in J Phys Chem A 111:12749, 2007). We studied the a-type and b-type rotational lines of the CO(⁴He)N clusters with N = 1-8 that correlate, in the dimer limit, with the end-over-end and free-rotor transitions. We compare the SAITCF-DMC results with accurate DVR (for the dimer case), RQMC and other DMC data, and with reference experimental findings (Surin in Phys Rev Lett 101:233401, 2008). A good agreement is generally found, but a systematic underestimation of the SAITCF-DMC rotational energies of the b-type series is observed. Sources of inaccuracy in our theoretical approach and in the computational protocol are discussed and analyzed in detail.

  9. Sensitivity analysis of an asymmetric Monte Carlo beam model of a Siemens Primus accelerator.

    PubMed

    Schreiber, Eric C; Sawkey, Daren L; Faddegon, Bruce A

    2012-03-08

    The assumption of cylindrical symmetry in radiotherapy accelerator models can pose a challenge for precise Monte Carlo modeling. This assumption makes it difficult to account for measured asymmetries in clinical dose distributions. We have performed a sensitivity study examining the effect of varying symmetric and asymmetric beam and geometric parameters in a Monte Carlo model of a Siemens PRIMUS accelerator. The accelerator and dose output were simulated using modified versions of BEAMnrc and DOSXYZnrc that allow lateral offsets of accelerator components and lateral and angular offsets of the incident electron beam. Dose distributions were studied for 40 × 40 cm² fields. The resulting dose distributions were analyzed for changes in flatness, symmetry, and off-axis ratio (OAR). The electron beam parameters having the greatest effect on the resulting dose distributions were found to be electron energy and angle of incidence, with changes as high as 5% for a 0.25° deflection. Electron spot size and lateral offset of the electron beam were found to have a smaller impact. Variations in photon target thickness were found to have a small effect. Small lateral offsets of the flattening filter caused significant variation in the OAR. In general, the greatest sensitivity to accelerator parameters was observed for higher energies and off-axis ratios closer to the central axis. Lateral and angular offsets of beam and accelerator components have strong effects on dose distributions, and should be included in any high-accuracy beam model.
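
    For illustration, one common convention for the profile metrics mentioned here, computed on a synthetic, slightly asymmetric profile (definitions of flatness and symmetry vary between protocols, so this is an assumption, not the paper's exact recipe):

      import numpy as np

      def profile_metrics(x_cm, dose):
          """Flatness, symmetry and off-axis ratio over the central 80% of a 40 cm field
          (one common convention among several in use)."""
          d0 = dose[np.argmin(np.abs(x_cm))]          # central-axis dose
          core = np.abs(x_cm) <= 16.0                 # central 80% of the field
          dmax, dmin = dose[core].max(), dose[core].min()
          flatness = 100.0 * (dmax - dmin) / (dmax + dmin)
          left  = dose[core & (x_cm < 0)][::-1]       # mirror the left half outward
          right = dose[core & (x_cm > 0)]
          n = min(len(left), len(right))
          symmetry = 100.0 * np.max(np.abs(left[:n] - right[:n])) / d0
          oar = dose / d0                             # off-axis ratio curve
          return flatness, symmetry, oar

      x = np.linspace(-20, 20, 401)
      d = 100 * (1 - 0.0005 * x**2) * (1 + 0.002 * x)   # synthetic, slightly asymmetric profile
      print(profile_metrics(x, d)[:2])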

  10. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. the coefficient of determination (R²), bias and the Nash-Sutcliffe model efficiency (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape
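
    A minimal sketch of the GLUE pattern described here, with a toy one-output model and assumed prior ranges and behavioural threshold standing in for the coupled CMF-PMF model:

      import numpy as np

      rng = np.random.default_rng(3)

      def nse(obs, sim):
          """Nash-Sutcliffe efficiency."""
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def model(theta, t):
          """Toy stand-in for the coupled CMF-PMF model, with 2 of the 19 parameters."""
          k, s0 = theta
          return s0 * np.exp(-k * t)                 # placeholder soil-moisture recession

      t = np.linspace(0.0, 60.0, 61)                 # days
      obs = model((0.05, 0.35), t) + rng.normal(0.0, 0.01, t.size)

      behavioural = []
      for _ in range(20000):                         # the study used 2 x 10^6 runs
          theta = (rng.uniform(0.0, 0.2), rng.uniform(0.1, 0.5))   # uniform prior ranges (assumed)
          score = nse(obs, model(theta, t))
          if score > 0.5:                            # behavioural threshold (assumed)
              behavioural.append((score, theta))

      print(len(behavioural), "behavioural parameter sets retained")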

  11. Monte Carlo based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2013-12-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. the coefficient of determination (R²), bias and the Nash-Sutcliffe model efficiency (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The

  12. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    SciTech Connect

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-09-15

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO₂)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to model oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO₂), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO₂ were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO₂ distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the

  13. Förster resonance energy transfer and trapping in selected systems: analysis by Monte-Carlo simulation.

    PubMed

    Bojarski, P; Synak, A; Kułak, L; Rangelowa-Jankowska, S; Kubicki, A; Grobelna, B

    2012-01-01

    The Monte-Carlo simulation method is described and applied as an efficient tool to analyze experimental data in the presence of energy transfer in selected systems where the use of analytical approaches is limited or even impossible. Several numerical and physical problems accompanying Monte-Carlo simulation are addressed. It is shown that Monte-Carlo simulation makes it possible to obtain the orientation factor in partly ordered systems, as well as other important energy transfer parameters unavailable directly from experiments. It is also shown how Monte-Carlo simulation can predict important features of energy transport, such as its directional character in ordered media.

  14. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    PubMed

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
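
    A minimal sketch of the two-level idea, with a toy patient-level model: the variance of the run means is decomposed into an input-uncertainty part and a patient-level Monte Carlo part (the paper's ANOVA machinery is more general than this):

      import numpy as np

      rng = np.random.default_rng(11)

      sigma_patient = 5.0                       # patient-level noise sd of the toy model

      def patient_level_model(theta, n_patients):
          """Toy stand-in: one run simulates n_patients and returns their mean outcome."""
          return rng.normal(theta, sigma_patient, n_patients).mean()

      M, N = 200, 500                           # parameter sets x patients per run
      thetas = rng.normal(100.0, 10.0, M)       # sampled input (parameter) uncertainty
      y = np.array([patient_level_model(th, N) for th in thetas])

      # ANOVA-style decomposition: Var(y) = variance due to inputs + (patient noise)/N.
      var_total = y.var(ddof=1)
      var_mc = sigma_patient ** 2 / N           # known here; estimated from within-run data in practice
      print(f"input-uncertainty variance ~ {var_total - var_mc:.1f} (true value 100)")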

  15. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    SciTech Connect

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and could help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  16. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and could help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  17. A Monte Carlo Power Analysis of Traditional Repeated Measures and Hierarchical Multivariate Linear Models in Longitudinal Data Analysis.

    PubMed

    Fang, Hua; Brooks, Gordon P; Rizzo, Maria L; Espy, Kimberly A; Barcikowski, Robert S

    2008-01-01

    The power properties of traditional repeated measures and hierarchical linear models have not been clearly determined in the balanced design for longitudinal studies in the current literature. A Monte Carlo power analysis of traditional repeated measures and hierarchical multivariate linear models is presented under three variance-covariance structures. Results suggest that traditional repeated measures have higher power than hierarchical linear models for main effects, but lower power for interaction effects. Significant power differences are also exhibited when power is compared across different covariance structures. Results also supplement more comprehensive empirical indexes for estimating model precision via bootstrap estimates and the approximate power for both main effects and interaction tests under standard model assumptions.
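
    The generic Monte Carlo power recipe used in such studies, sketched for a simple paired design (the paper's repeated-measures and hierarchical models are more involved):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)

      def power_paired_t(n, effect, sd, reps=2000, alpha=0.05):
          """Monte Carlo power: fraction of simulated datasets whose test rejects H0."""
          hits = 0
          for _ in range(reps):
              pre  = rng.normal(0.0, sd, n)
              post = pre + rng.normal(effect, sd, n)   # within-subject change
              _, p = stats.ttest_rel(post, pre)
              hits += p < alpha
          return hits / reps

      print(power_paired_t(n=30, effect=0.5, sd=1.0))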

  18. Magnetic force imaging of a chain of biogenic magnetite and Monte Carlo analysis of tip-particle interaction

    NASA Astrophysics Data System (ADS)

    Körnig, André; Hartmann, Markus A.; Teichert, Christian; Fratzl, Peter; Faivre, Damien

    2014-06-01

    Magnetotactic bacteria form chains of magnetite nanoparticles that serve the organism as navigation tools. The magnetic anisotropy of the superstructure makes the chain an ideal model for studying the magnetic properties of such an organization. Magnetic force microscopy (MFM) is currently the technique of choice for the visualization of magnetic nanostructures; however, it does not enable quantitative measurement of magnetic properties, since the interactions between the MFM probe and the magnetic sample are complex and not yet fully understood. Here we present an MFM study of such a chain of biological magnetite nanoparticles. We combined experimental and theoretical (Monte Carlo simulation) analyses of the sample, and investigated the size and orientation of the magnetic moments of the single magnetic particles in the chain. Monte Carlo simulations were used to calculate the influence of the magnetic tip on the configuration of the sample. The advantage of this procedure is that the analysis does not require any a priori knowledge of the properties of the sample. The magnetic properties of the tip and of the magnetosomes are indeed varied in the calculations until the phase profiles of the simulated MFM images achieve a best match with the experimental ones. We hope our results will open the door towards a better quantification of MFM images, and possibly a better understanding of the biological process in situ.

  19. Qualitative analysis of irregular fields delivered with dual electron multileaf collimator: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Inyang, Samuel Okon; Chamberlain, Alan

    2016-03-01

    The use of a dual electron multileaf collimator (eMLC) to collimate therapeutic electron beams without the use of cutouts has previously been shown to be feasible. Further Monte Carlo simulations were performed in this study to verify the nature and appearance of the isodose distributions produced in a water phantom by irregular electron beams delivered by the eMLC. Electron fields used in this study were selected to reflect those used in electron beam therapy. Results of this study show that the isodose distribution in a water phantom obtained from the simulation of irregular electron beams through the eMLC conforms to the pattern of the eMLC used in the delivery of the beam. It is therefore concluded that the dual eMLC can deliver isodose distributions reflecting the pattern of the eMLC field used in the delivery of the beam.

  20. Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Boyd, Iain D.

    1991-01-01

    A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator, and the transition rate is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
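
    A minimal sketch of the underlying Landau-Teller relaxation of mean vibrational energy toward equilibrium in a constant-temperature heat bath, using the N2 characteristic vibrational temperature and an illustrative relaxation time:

      import numpy as np

      # Landau-Teller relaxation, dE_v/dt = (E_v^eq(T) - E_v) / tau, integrated
      # with forward Euler for a constant-temperature heat bath.
      theta_v = 3371.0            # characteristic vibrational temperature of N2, K
      T, tau = 8000.0, 1.0e-6     # bath temperature (K) and relaxation time (s), illustrative

      def e_vib(T):
          """Mean harmonic-oscillator vibrational energy per molecule, in units of k*theta_v."""
          return 1.0 / (np.exp(theta_v / T) - 1.0)

      dt, ev = 1.0e-8, 0.0
      for step in range(1000):
          ev += dt / tau * (e_vib(T) - ev)
      print(ev, e_vib(T))         # ev relaxes toward the equilibrium value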

  1. Photoelectric Franck-Hertz experiment and its kinetic analysis by Monte Carlo simulation.

    PubMed

    Magyar, Péter; Korolov, Ihor; Donkó, Zoltán

    2012-05-01

    The electrical characteristics of a photoelectric Franck-Hertz cell are measured in argon gas over a wide range of pressure, covering conditions where elastic collisions play an important role, as well as conditions where ionization becomes significant. Photoelectron pulses are induced by the fourth harmonic UV light of a diode-pumped Nd:YAG laser. The electron kinetics, which is far more complex compared to the naive picture of the Franck-Hertz experiment, is analyzed via Monte Carlo simulation. The computations provide the electrical characteristics of the cell, the energy and velocity distribution functions, and the transport parameters of the electrons, as well as the rate coefficients of different elementary processes. A good agreement is obtained between the cell's measured and calculated electrical characteristics, the peculiarities of which are understood by the simulation studies.

  2. A Monte Carlo template based analysis for air-Cherenkov arrays

    NASA Astrophysics Data System (ADS)

    Parsons, R. D.; Hinton, J. A.

    2014-04-01

    We present a high-performance event reconstruction algorithm: an Image Pixel-wise fit for Atmospheric Cherenkov Telescopes (ImPACT). The reconstruction algorithm is based around the likelihood fitting of camera pixel amplitudes to an expected image template. A maximum likelihood fit is performed to find the best-fit shower parameters. A related reconstruction algorithm has already been shown to provide significant improvements over traditional reconstruction for both the CAT and H.E.S.S. experiments. We demonstrate a significant improvement to the template generation step of the procedure, by the use of a full Monte Carlo air shower simulation in combination with a ray-tracing optics simulation to more accurately model the expected camera images. This reconstruction step is combined with an MVA-based background rejection.

  3. Reaction cross sections for two direct simulation Monte Carlo models: Accuracy and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Wysong, Ingrid; Gimelshein, Sergey; Gimelshein, Natalia; McKeon, William; Esposito, Fabrizio

    2012-04-01

    The quantum kinetic chemical reaction model proposed by Bird for the direct simulation Monte Carlo method is based on collision kinetics with no assumed Arrhenius-related parameters. It demonstrates excellent agreement with the best estimates for thermal reaction rate coefficients and with two-temperature nonequilibrium rate coefficients for high-temperature air reactions. This paper investigates this model further, concentrating on the non-thermal reaction cross sections as a function of collision energy, and compares its predictions with those of the earlier total collision energy model, also by Bird, as well as with available quasi-classical trajectory cross section predictions (this paper also publishes for the first time a table of these computed reaction cross sections). A rarefied hypersonic flow over a cylinder is used to examine the sensitivity of the number of exchange reactions to the differences in the two models under a strongly nonequilibrium velocity distribution.

  4. Analysis of quantum Monte Carlo dynamics for quantum adiabatic evolution in infinite-range spin systems

    NASA Astrophysics Data System (ADS)

    Inoue, Jun-Ichi

    2011-03-01

    We analytically derive deterministic equations for order parameters, such as the spontaneous magnetization, in infinite-range quantum spin systems obeying quantum Monte Carlo dynamics. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. We discuss several possible applications of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we discuss ground-state searching for infinite-range random spin systems via quantum adiabatic evolution. We were financially supported by a Grant-in-Aid for Scientific Research (C) of the Japan Society for the Promotion of Science, No. 22500195.
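
    For orientation, the classical limit of this type of derivation is a standard mean-field result: for an infinite-range ferromagnet with coupling J under Glauber dynamics, the magnetization obeys the deterministic flow

      dm/dt = -m + tanh(β J m),

    and the equations derived in the paper are, in effect, the Trotter-decomposed quantum generalization of order-parameter flows of this type under the static approximation.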

  5. Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation

    PubMed Central

    Ding, Chizhu; Shi, Shuning; Chen, Jianjun; Wei, Wei; Tan, Zuojun

    2015-01-01

    The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be separately considered with different parameters to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity were discussed. Accurate MC simulations may result in better insight into light propagation in stone fruit and aid in achieving the optimal fruit quality inspection without extensive experimental measurements. PMID:26469695

  6. A Monte Carlo analysis of the liquid xenon TPC as gamma ray telescope

    NASA Technical Reports Server (NTRS)

    Aprile, E.; Bolotnikov, A.; Chen, D.; Mukherjee, R.

    1992-01-01

    Extensive Monte Carlo modeling of a coded aperture x ray telescope based on a high resolution liquid xenon TPC has been performed. Results on efficiency, background reduction capability and source flux sensitivity are presented. We discuss in particular the development of a reconstruction algorithm for events with multiple interaction points. From the energy and spatial information, the kinematics of Compton scattering is used to identify and reduce background events, as well as to improve the detector response in the few MeV region. Assuming a spatial resolution of 1 mm RMS and an energy resolution of 4.5 percent FWHM at 1 MeV, the algorithm is capable of reducing by an order of magnitude the background rate expected at balloon altitude, thus significantly improving the telescope sensitivity.

  7. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    PubMed

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and estimation methodology based on forward-projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a common and highly multi-typed sexually transmitted infection, with more than 100 types currently known. The two types studied in this paper, types 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing-matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension of the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward-projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
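
    A minimal sketch of the adaptive Metropolis ingredient (Haario-style covariance adaptation with the usual 2.38²/d scaling), with a placeholder Gaussian log-posterior standing in for the ODE epidemic model:

      import numpy as np

      rng = np.random.default_rng(8)

      def log_post(theta):
          """Placeholder log-posterior; the paper's target involves an ODE epidemic model."""
          return -0.5 * np.sum((theta - np.array([1.0, -2.0])) ** 2)

      d, n_iter = 2, 20000
      theta = np.zeros(d)
      chain = np.empty((n_iter, d))
      cov = 0.1 * np.eye(d)
      for i in range(n_iter):
          if i > 1000 and i % 100 == 0:            # adapt the proposal to the chain history
              cov = 2.38**2 / d * np.cov(chain[:i].T) + 1e-6 * np.eye(d)
          prop = rng.multivariate_normal(theta, cov)
          if np.log(rng.random()) < log_post(prop) - log_post(theta):
              theta = prop                          # Metropolis accept
          chain[i] = theta
      print(chain[5000:].mean(axis=0))              # should approach [1, -2]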

  8. The D0 Monte Carlo

    SciTech Connect

    Womersley, J.

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data-taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  9. Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    SciTech Connect

    Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V

    2006-12-31

    Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (Special issue devoted to multiple radiation scattering in random media.)

  10. MED-3DMC: a new tool to generate 3D conformation ensembles of small molecules with a Monte Carlo sampling of the conformational space.

    PubMed

    Sperandio, Olivier; Souaille, Marc; Delfaud, François; Miteva, Maria A; Villoutreix, Bruno O

    2009-04-01

    Obtaining an efficient sampling of the low to medium energy regions of a ligand conformational space is of primary importance for getting insight into relevant binding modes of drug candidates, for the screening of rigid molecular entities on the basis of a predefined pharmacophore, or for rigid-body docking. Here, we report the development of a new computer tool that samples the conformational space by using the Metropolis Monte Carlo algorithm combined with the MMFF94 van der Waals energy term. The performance of the program was assessed on 86 drug-like molecules that resulted from ADME/tox profiling of cocrystallized small molecules, and was compared with the program Omega on the same dataset. Our program was also assessed on the 85 molecules of the Astex diverse set. Both test sets show convincing performance of our program at sampling the conformational space.
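
    A minimal sketch of Metropolis Monte Carlo sampling over dihedral angles, with a toy torsional energy standing in for the MMFF94 van der Waals term used by the program:

      import numpy as np

      rng = np.random.default_rng(2)

      def energy(dihedrals):
          """Placeholder for the MMFF94 van der Waals term; a toy torsional surface."""
          return np.sum(1.0 + np.cos(3.0 * dihedrals))

      kT, n_steps = 0.6, 5000                            # kcal/mol near 300 K, illustrative
      x = rng.uniform(-np.pi, np.pi, 4)                  # four rotatable bonds
      e = energy(x)
      ensemble = []
      for _ in range(n_steps):
          trial = x.copy()
          trial[rng.integers(4)] += rng.normal(0.0, 0.3)  # perturb one dihedral
          e_t = energy(trial)
          if e_t < e or rng.random() < np.exp((e - e_t) / kT):   # Metropolis criterion
              x, e = trial, e_t
          ensemble.append(x.copy())
      print(len(ensemble), "sampled conformations")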

  11. Combining the diffusion approximation and Monte Carlo modeling in analysis of diffuse reflectance spectra from human skin

    NASA Astrophysics Data System (ADS)

    Naglič, Peter; Vidovič, Luka; Milanič, Matija; Randeberg, Lise L.; Majaron, Boris

    2014-03-01

    Light propagation in highly scattering biological tissues is often treated in the so-called diffusion approximation (DA). Although the analytical solutions derived within the DA are known to be inaccurate near tissue boundaries and absorbing layers, their use in quantitative analysis of diffuse reflectance spectra (DRS) is quite common. We analyze the artifacts in assessed tissue properties which occur in fitting of numerically simulated DRS with the DA solutions for a three-layer skin model. In addition, we introduce an original procedure which significantly improves the accuracy of such an inverse analysis of DRS. This procedure involves a single comparison run of a Monte Carlo (MC) numerical model, yet avoids the need to implement and run an inverse MC. This approach is tested also in analysis of experimental DRS from human skin.

  12. Assessment of bioethanol yield by S. cerevisiae grown on oil palm residues: Monte Carlo simulation and sensitivity analysis.

    PubMed

    Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah

    2015-01-01

    Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with palm oil mill effluent (POME) added as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was then assessed using Monte Carlo simulation (stochastic variables) to determine probability distributions due to fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. Sensitivity analysis was also done to evaluate the impact of each kinetic parameter on the fermentation performance. It is found that bioethanol fermentation depends strongly on the growth of the tested yeast.

  13. Metabolic flux distribution analysis by 13C-tracer experiments using the Markov chain-Monte Carlo method.

    PubMed

    Yang, J; Wongsa, S; Kadirkamanathan, V; Billings, S A; Wright, P C

    2005-12-01

    Metabolic flux analysis using 13C-tracer experiments is an important tool in metabolic engineering since intracellular fluxes are non-measurable quantities in vivo. Current metabolic flux analysis approaches are fully based on stoichiometric constraints and carbon atom balances, where the over-determined system is iteratively solved by a parameter estimation approach. However, the unavoidable measurement noises involved in the fractional enrichment data obtained by 13C-enrichment experiment and the possible existence of unknown pathways prevent a simple parameter estimation method for intracellular flux quantification. The MCMC (Markov chain-Monte Carlo) method, which obtains intracellular flux distributions through delicately constructed Markov chains, is shown to be an effective approach for deep understanding of the intracellular metabolic network. Its application is illustrated through the simulation of an example metabolic network.

  14. Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut

    2017-01-01

    We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N while a purely classical over-the-barrier activation rate scales exponentially with N .

  15. Comparative Analysis of Nuclear Cross Sections in Monte Carlo Methods for Medical Physics Applications

    SciTech Connect

    Myers, Chris; Kirk, Bernadette Lugue; Leal, Luiz C

    2007-01-01

    The data used in two Monte Carlo (MC) codes, EGSnrc and MCNPX, were compared, and a majority of the data used in MCNPX was imported into EGSnrc. The effects of merging the data of the two codes were then examined. MCNPX was run using the ITS electron step algorithm and the default data libraries mcplib04 and el03. Two runs were made with EGSnrc. The first simulation uses the default PEGS cross-section library. The second simulation utilizes the data imported from MCNPX. All energy threshold values and physics options were made identical. A simple case was created in both EGSnrc and MCNPX that calculates the radial depth dose from an isotropically radiating disc in water for various incident, monoenergetic photon and electron energies. Initial results show that much less central processing unit (CPU) time is required by the EGSnrc code for simulations involving large numbers of particles, primarily electrons, when compared to MCNPX. The detailed particle history files - ptrac and iwatch - are investigated to compare the number and types of events being simulated in order to determine the reasons for the run-time differences.

  16. White light Fourier spectrometer: Monte Carlo noise analysis and test measurements

    NASA Astrophysics Data System (ADS)

    Stoykova, Elena; Ivanov, Branimir

    2007-06-01

    This work reports an investigation of the sensitivity of a Fourier-transform spectrometer to noise sources, based on Monte-Carlo simulation of the measurement of a single spectrum. The flexibility of this approach makes it easy to imitate various noise contaminations of the interferograms and to obtain statistically reliable results for widely varying noise characteristics. More specifically, we evaluate the accuracy of restoration of a single absorption peak for the cases of additive detection noise and of noise which adds a fluctuating component to the carrier frequency in the source and the measurement channel of the interferometer. Comparison of spectra of an etalon He-Ne source calculated from more than 200 measured interferograms with the true spectrum supports the hypothesis that the latter fluctuations have the characteristics of a coloured noise. Taking into account that the signal-to-noise ratio in Fourier spectroscopy is constantly increasing, we focus on the limitations on the achievable accuracy of spectrum restoration that are set by this type of noise, which modifies the shape of the recorded interferograms. We also present results of test measurements of the spectrum of a laser diode, chosen as a test source, using a three-channel Fourier spectroscopic system based on a white-light Michelson interferometer realized with the Twyman-Green scheme. The results show that fluctuations in the current displacement of the movable mirror of the interferometer should remain below 20 nm to restore the absorption spectrum with acceptable accuracy, especially at higher frequency bandwidths of the fluctuations.
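
    A minimal sketch of this kind of noise study: a single-peak spectrum is converted to an interferogram, corrupted by additive detection noise and by mirror-position (carrier-frequency) jitter, and restored by FFT; all numbers are illustrative:

      import numpy as np

      rng = np.random.default_rng(4)

      n = 512
      nu = np.arange(n)                                     # wavenumber channels
      true_spec = np.exp(-0.5 * ((nu - 100) / 4.0) ** 2)    # single absorption peak
      x = np.arange(n, dtype=float)                         # ideal OPD sampling grid

      restored = []
      for _ in range(100):
          xj = x + rng.normal(0.0, 0.05, n)                 # mirror-position jitter (samples)
          igram = np.cos(2 * np.pi * np.outer(xj, nu) / n) @ true_spec / n
          igram += rng.normal(0.0, 1e-3, n)                 # additive detection noise
          restored.append(np.abs(np.fft.rfft(igram)))       # restore the spectrum
      restored = np.array(restored)
      print("relative std of restored peak:", restored[:, 100].std() / restored[:, 100].mean())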

  17. Applications of Monte Carlo methods for the analysis of MHTGR case of the VHTRC benchmark

    SciTech Connect

    Difilippo, F.C.

    1994-03-01

    Monte Carlo methods, as implemented in the MCNP code, have been used to analyze the neutronics characteristics of benchmarks related to Modular High Temperature Gas-Cooled Reactors. The benchmarks are idealized versions of the Japanese (VHTRC) and Swiss (PROTEUS) facilities and an actual configuration of the PROTEUS Configuration 1 experiment. The purpose of the unit cell benchmarks is to compare multiplication constants, critical bucklings, migration lengths, reaction rates and spectral indices. The purpose of the full reactor benchmarks is to compare multiplication constants, reaction rates, spectral indices, neutron balances, reaction rate profiles, temperature coefficients of reactivity and effective delayed neutron fractions. All of these parameters can be calculated by MCNP, which can provide a very detailed model of the geometry of the configurations, from fuel particles to entire fuel assemblies, while using a continuous energy model. These characteristics make MCNP a very useful tool for analyzing these MHTGR benchmarks. The author used the latest version of MCNP at the time, 4.x (dated 01/12/93), with an ENDF/B-V cross-section library. This library does not yet contain temperature-dependent resonance materials, so all calculations correspond to room temperature, T = 300 K. Two separate reports were made, one for the VHTRC and the other for the PROTEUS benchmark.

  18. Monte Carlo analysis on probe performance for endoscopic diffuse optical spectroscopy of tubular organ

    NASA Astrophysics Data System (ADS)

    Zhang, Yunyao; Zhu, Jingping; Cui, Weiwen; Nie, Wei; Li, Jie; Xu, Zhenghong

    2015-03-01

    We investigated the performance of endoscopic diffuse optical spectroscopy probes with circular or linear fiber arrangements for tubular organ cancer detection. Probe performance was measured by penetration depth. A Monte Carlo model was employed to simulate light transport in a hollow cylinder that both emits and receives light from the inner boundary of the sample. The influence of fiber configurations and tissue optical properties on penetration depth was simulated. The results show that, under the same conditions, probes with a circular fiber arrangement penetrate deeper than probes with a linear fiber arrangement, and the difference between the two probes' penetration depths decreases with an increase in the source-detector (SD) distance and the radius of the probe. Other results show that the penetration depths and their differences both decrease with an increase in the absorption coefficient and the reduced scattering coefficient, but remain constant with changes in the anisotropy factor. Moreover, the penetration depth was affected more by the absorption coefficient than by the reduced scattering coefficient. It turns out that in the NIR band, probes with linear fiber arrangements are more appropriate for diagnosing superficial cancers, whereas probes with circular fiber arrangements should be chosen for diagnosing adenocarcinoma, while in the UV-VIS band the two probe configurations perform nearly identically. These results are useful in guiding endoscopic diffuse optical spectroscopy-based diagnosis of esophageal, cervical, colorectal and other cancers.

  19. Beam steering uncertainty analysis for Risley prisms based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen

    2017-01-01

    The Risley-prism system is applied in imaging LADAR to achieve precise directing of laser beams. The image quality of the LADAR is strongly affected by the beam steering quality of the Risley prisms. The ray-tracing method was used to predict the pointing error. The beam steering uncertainty of Risley prisms was investigated through Monte Carlo simulation under the effects of rotation axis jitter and prism rotation error. Case examples are given to elucidate the probability distribution of the pointing error. Furthermore, the effect of the scan pattern on the beam steering uncertainty was also studied. It is found that the demand for the bearing rotational accuracy of the second prism is much more stringent than that for the first prism. Under the effect of rotation axis jitter, the pointing uncertainty in the field of regard is related to the altitude angle of the emerging beam, but has no relationship with the azimuth angle. The beam steering uncertainty is also affected by the original phase if the scan pattern is a circle. The proposed method can be used to estimate the beam steering uncertainty of Risley prisms, and the conclusions will be helpful in the design and manufacture of such systems.
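
    A first-order "two-vector" sketch of the pointing-error Monte Carlo (each prism contributes a fixed small deviation in the direction of its rotation angle; the paper uses full ray tracing), with invented jitter values:

      import numpy as np

      rng = np.random.default_rng(9)

      # Each prism deflects the beam by a fixed small angle in the direction set
      # by its rotation angle; bearing jitter perturbs the rotation angles.
      delta1 = delta2 = np.radians(5.0)                      # per-prism deviation
      jit1, jit2 = np.radians(0.002), np.radians(0.002)      # rms rotation-angle jitter

      def pointing_error(phi1, phi2, n=100000):
          nominal_x = delta1 * np.cos(phi1) + delta2 * np.cos(phi2)
          nominal_y = delta1 * np.sin(phi1) + delta2 * np.sin(phi2)
          p1 = phi1 + rng.normal(0.0, jit1, n)
          p2 = phi2 + rng.normal(0.0, jit2, n)
          vx = delta1 * np.cos(p1) + delta2 * np.cos(p2)
          vy = delta1 * np.sin(p1) + delta2 * np.sin(p2)
          return np.hypot(vx - nominal_x, vy - nominal_y)    # angular pointing error

      err = pointing_error(np.radians(30.0), np.radians(150.0))
      print(np.degrees(err.mean()) * 3600.0, "arcsec mean pointing error")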

  20. Time series analysis and Monte Carlo methods for eigenvalue separation in neutron multiplication problems

    SciTech Connect

    Nease, Brian R.; Ueki, Taro

    2009-12-10

    A time series approach has been applied to the nuclear fission source distribution generated by Monte Carlo (MC) particle transport in order to calculate the non-fundamental mode eigenvalues of the system. The novel aspect is the combination of the general technical principle of projection pursuit for multivariate data with the neutron multiplication eigenvalue problem in the nuclear engineering discipline. Proof is thoroughly provided that the stationary MC process is linear to first-order approximation and that it transforms into one-dimensional autoregressive processes of order one (AR(1)) via the automated choice of projection vectors. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern MC codes for nuclear criticality calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. This time series approach was tested for a variety of problems, including multi-dimensional ones. Numerical results show that the time series approach has strong potential for three-dimensional whole-reactor-core problems. The eigenvalue ratio can be updated in an on-the-fly manner, without storing the nuclear fission source distributions of all previous iteration cycles for the mean subtraction. Lastly, the effects of degenerate eigenvalues are investigated and solutions are provided.
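
    A minimal sketch of the estimation step: the lag-1 autocorrelation of a (here synthetic) AR(1) series standing in for the projected fission-source sequence recovers the eigenvalue ratio:

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic AR(1) series whose coefficient plays the role of k1/k0.
      k1_over_k0 = 0.85
      x = np.zeros(5000)
      for i in range(1, x.size):
          x[i] = k1_over_k0 * x[i - 1] + rng.normal()

      xc = x - x.mean()
      rho1 = np.dot(xc[1:], xc[:-1]) / np.dot(xc, xc)   # lag-1 autocorrelation
      print("estimated k1/k0:", rho1)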

  1. Monte Carlo analysis of thermal transpiration effects in capacitance diaphragm gauges with helicoidal baffle system

    NASA Astrophysics Data System (ADS)

    Vargas, M.; Wüest, M.; Stefanov, S.

    2012-05-01

    The Capacitance Diaphragm Gauge (CDG) is one of the most widely used vacuum gauges in the low and middle vacuum ranges. This device consists basically of a very thin ceramic or metal diaphragm which forms one of the electrodes of a capacitor. The pressure is determined by measuring the variation in the capacitance due to the deflection of the diaphragm caused by the pressure difference established across the membrane. In order to minimize zero drift, some CDGs are operated with the sensor kept at a higher temperature. This difference in temperature between the sensor and the vacuum chamber makes the behaviour of the gauge non-linear due to thermal transpiration effects. This effect becomes more significant when moving from the transitional flow to the free molecular regime. Besides, CDGs may incorporate different baffle systems to avoid condensation on the membrane or its contamination. In this work, the thermal transpiration effect on the behaviour of a rarefied gas and on the measurements in a CDG with a helicoidal baffle system is investigated by using the Direct Simulation Monte Carlo (DSMC) method. The study covers the behaviour of the system over the whole range of rarefaction, from the continuum up to the free molecular limit, and the results are compared with empirical results. Moreover, the influence of the boundary conditions on the thermal transpiration effects is investigated by using Maxwell boundary conditions.

  2. Mapping-Linked Quantitative Trait Loci Using Bayesian Analysis and Markov Chain Monte Carlo Algorithms

    PubMed Central

    Uimari, P.; Hoeschele, I.

    1997-01-01

    A Bayesian method for mapping linked quantitative trait loci (QTL) using multiple linked genetic markers is presented. Parameter estimation and hypothesis testing were implemented via Markov chain Monte Carlo (MCMC) algorithms. Parameters included were allele frequencies and substitution effects for two biallelic QTL, map positions of the QTL and markers, allele frequencies of the markers, and polygenic and residual variances. Missing data were polygenic effects and multi-locus marker-QTL genotypes. Three different MCMC schemes for testing the presence of a single or two linked QTL on the chromosome were compared. The first approach includes a model indicator variable representing two unlinked QTL affecting the trait, one linked and one unlinked QTL, or both QTL linked with the markers. The second approach incorporates an indicator variable for each QTL into the model for the phenotype, allowing or not allowing for a substitution effect of a QTL on the phenotype, and the third approach is based on model determination by reversible jump MCMC. Methods were evaluated empirically by analyzing simulated granddaughter designs. All methods correctly identified a second, linked QTL and did not reject the one-QTL model when there was only a single QTL, whether or not an unlinked QTL was also present. PMID:9178021

  3. Analysis of large solid propellant rocket engine exhaust plumes using the direct simulation Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Hueser, J. E.; Brock, F. J.; Melfi, L. T., Jr.; Bird, G. A.

    1984-01-01

    A new solution procedure has been developed to analyze the flowfield properties in the vicinity of the Inertial Upper Stage/Spacecraft during the first-stage (SRM1) burn. Continuum methods are used to compute the nozzle flow and the exhaust plume flowfield as far as the boundary where the breakdown of translational equilibrium renders these methods invalid. The Direct Simulation Monte Carlo (DSMC) method is applied everywhere beyond this breakdown boundary. The flowfield distributions of density, velocity, temperature, relative abundance, surface flux density, and pressure are discussed for each species for two sets of boundary conditions: vacuum and freestream. The interaction of the exhaust plume and the freestream with the spacecraft and the two-stream direct interaction are discussed. The results show that the low-density, high-velocity, counter-flowing freestream substantially modifies the flowfield properties and the flux density incident on the spacecraft. A freestream bow shock is observed in the data, located forward of the high-density region of the exhaust plume, into which the freestream gas does not penetrate. The total flux density incident on the spacecraft, integrated over the SRM1 burn interval, is estimated to be of the order of 10^22 m^-2 (about 1000 atomic layers).

  4. Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods

    DOE PAGES

    Hehr, Brian Douglas

    2014-11-25

    The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
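
    The residence-time (Gillespie-type) loop below is a minimal sketch of the kinetic Monte Carlo idea behind such codes: mass-action propensities set an exponential waiting time, and the next event is chosen in proportion to its rate. The defect species, reaction channels, and rate constants are invented for illustration and are not those of the study.

```python
import math
import random

def kmc_anneal(counts, channels, t_end):
    """Minimal residence-time kinetic Monte Carlo loop. `counts` maps
    species to populations; `channels` is a list of
    (rate_constant, reactant_names, product_names) with mass-action
    propensities."""
    t = 0.0
    while t < t_end:
        props = [k * math.prod(counts[s] for s in rx) for k, rx, _ in channels]
        total = sum(props)
        if total == 0.0:
            break  # nothing left to react
        t += -math.log(random.random()) / total  # exponential waiting time
        r = random.random() * total              # pick an event by propensity
        for p, (_, rx, pr) in zip(props, channels):
            if r < p:
                for s in rx:
                    counts[s] -= 1
                for s in pr:
                    counts[s] += 1
                break
            r -= p
    return t, counts

# Illustrative (not fitted) channels: vacancy-interstitial recombination and
# vacancy capture by boron dopants to form V-B pairs.
random.seed(0)
state = {"V": 500, "I": 500, "B": 200, "VB": 0}
channels = [(1e-4, ("V", "I"), ()),        # V + I -> annihilation
            (2e-5, ("V", "B"), ("VB",))]   # V + B -> VB pair
print(kmc_anneal(state, channels, t_end=60.0))
```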

  5. Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics

    NASA Astrophysics Data System (ADS)

    Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou

    Ebola virus infection is a severe infectious disease with a very high case fatality rate, and it has become a global public health threat. What makes the disease particularly dangerous is that no specific effective treatment is available, and its dynamics are not well researched or understood. In this article a new mathematical model incorporating both vaccination and quarantine has been developed and comprehensively analyzed to study the dynamics of Ebola epidemics. The existence and uniqueness of the solution to the model are verified and the basic reproduction number is calculated. Stability conditions are also checked, and the model is then simulated using both the Euler method and the Markov Chain Monte Carlo (MCMC) method. Different rates of vaccination and of quarantine are examined to predict their effects on the infected population over time. The results show that quarantine and vaccination are very effective ways to control an Ebola epidemic. The study also indicates that an individual who survives a first Ebola infection is less likely to be infected a second time. Last but not least, real data have been fitted to the model, showing that it can be used to predict the dynamics of an Ebola epidemic.
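
    The sketch below integrates a toy SIQR-type compartmental model with vaccination and quarantine by the Euler method, in the spirit of the simulations described; the equations, rate constants, and the R0 expression noted in the comment are illustrative assumptions rather than the authors' fitted model.

```python
import numpy as np

def ebola_euler(beta=0.3, gamma=0.1, nu=0.05, q=0.08, days=200, dt=0.1):
    """Euler integration of a toy SIQR model with vaccination (nu) and
    quarantine (q), in population fractions:
      S' = -beta*S*I - nu*S
      I' =  beta*S*I - (gamma + q)*I
      Q' =  q*I - gamma*Q
      R' =  gamma*(I + Q) + nu*S
    All rates are illustrative, not fitted values."""
    s, i, qq, r = 0.99, 0.01, 0.0, 0.0
    traj = []
    for _ in np.arange(0.0, days, dt):
        ds = -beta * s * i - nu * s
        di = beta * s * i - (gamma + q) * i
        dq = q * i - gamma * qq
        dr = gamma * (i + qq) + nu * s
        s, i, qq, r = s + dt * ds, i + dt * di, qq + dt * dq, r + dt * dr
        traj.append((s, i, qq, r))
    return np.array(traj)

# For this toy model the basic reproduction number is R0 = beta / (gamma + q),
# so raising the quarantine rate q directly lowers R0.
print(f"peak infectious fraction: {ebola_euler()[:, 1].max():.3f}")
```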

  6. Monte Carlo analysis for finite-temperature magnetism of Nd2Fe14B permanent magnet

    NASA Astrophysics Data System (ADS)

    Toga, Yuta; Matsumoto, Munehisa; Miyashita, Seiji; Akai, Hisazumi; Doi, Shotaro; Miyake, Takashi; Sakuma, Akimasa

    2016-11-01

    We investigate the effects of magnetic inhomogeneities and thermal fluctuations on the magnetic properties of a rare-earth intermetallic compound, Nd2Fe14B. The constrained Monte Carlo method is applied to a Nd2Fe14B bulk system to realize the experimentally observed spin reorientation and magnetic anisotropy constants K_m^A (m = 1, 2, 4) at finite temperatures. Subsequently, it is found that the temperature dependence of K_1^A deviates from the Callen-Callen law, K_1^A(T) ∝ M(T)^3, even above room temperature, T_R ≈ 300 K, when the Fe (Nd) anisotropy terms are removed to leave only the Nd (Fe) anisotropy terms. This is because the exchange couplings between Nd moments and Fe spins are much smaller than those between Fe spins. It is also found that the exponent n in the response of the barrier height F_B = F_B^0 (1 - H_ext/H_0)^n to the external magnetic field H_ext is less than 2 in the low-temperature region below T_R, whereas n approaches 2 when T > T_R, indicating the presence of Stoner-Wohlfarth-type magnetization rotation. This reflects the fact that the magnetic anisotropy is mainly governed by the K_1^A term in the T > T_R region.

  7. Size and composition of membrane protein clusters predicted by Monte Carlo analysis.

    PubMed

    Goldman, Jacki; Andrews, Steven; Bray, Dennis

    2004-10-01

    Biological membranes contain a high density of protein molecules, many of which associate into two-dimensional microdomains with important physiological functions. We have used Monte Carlo simulations to examine the self-association of idealized protein species in two dimensions. The proteins have defined bond strengths and bond angles, allowing us to estimate the size and composition of the aggregates they produce at equilibrium. With a single species of protein, the extent of cluster formation and the sizes of individual clusters both increase in non-linear fashion, showing a "phase change" with protein concentration and bond strength. With multiple co-aggregating proteins, we find that the extent of cluster formation also depends on the relative proportions of participating species. For some lattice geometries, a stoichiometric excess of particular species depresses cluster formation and moreover distorts the composition of clusters that do form. Our results suggest that the self-assembly of microdomains might require a critical level of subunits and that for optimal co-aggregation, proteins should be present in the membrane in the correct stoichiometric ratios.
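
    A minimal lattice-gas Metropolis sketch of this kind of simulation is given below: particles on a periodic grid gain energy -J (in units of kT) per occupied nearest-neighbor pair, so increasing J drives aggregation. The lattice size, particle count, and bond strength are invented, and the published model additionally encodes bond angles and multiple co-aggregating species.

```python
import math
import random

L = 40  # periodic lattice size

def neighbors(site):
    x, y = site
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def metropolis(n_particles=240, J=1.2, sweeps=500, seed=2):
    """Toy 2D lattice gas: each occupied nearest-neighbor pair contributes
    energy -J (in kT), and single-particle hops are accepted with the
    Metropolis rule min(1, exp(-dE))."""
    rng = random.Random(seed)
    occ = set()
    while len(occ) < n_particles:
        occ.add((rng.randrange(L), rng.randrange(L)))
    for _ in range(sweeps * n_particles):
        src = rng.choice(tuple(occ))                # particle to move
        dst = (rng.randrange(L), rng.randrange(L))  # proposed new site
        if dst in occ:
            continue
        rest = occ - {src}
        dE = -J * (sum(s in rest for s in neighbors(dst))
                   - sum(s in rest for s in neighbors(src)))
        if dE <= 0 or rng.random() < math.exp(-dE):
            occ.remove(src)
            occ.add(dst)
    return occ

config = metropolis()
coord = sum(s in config for p in config for s in neighbors(p)) / len(config)
print(f"mean occupied-neighbor bonds per particle: {coord:.2f}")
```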

  8. Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods

    SciTech Connect

    Hehr, Brian Douglas

    2014-11-25

    The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.

  9. First passage time Markov chain analysis of rare events for kinetic Monte Carlo: double kink nucleation during dislocation glide

    NASA Astrophysics Data System (ADS)

    Deo, C. S.; Srolovitz, D. J.

    2002-09-01

    We describe a first passage time Markov chain analysis of rare events in kinetic Monte Carlo (kMC) simulations and demonstrate how this analysis may be used to enhance kMC simulations of dislocation glide. Dislocation glide is described by the kink mechanism, which involves double kink nucleation, kink migration and kink-kink annihilation. Double kinks that nucleate on straight dislocations are unstable at small kink separations and tend to recombine immediately following nucleation. A very small fraction (<0.001) of nucleating double kinks survive to grow to a stable kink separation. The present approach replaces all of the events that lead up to the formation of a stable kink with a simple numerical calculation of the time required for stable kink formation. In this paper, we treat the double kink nucleation process as a temporally homogeneous birth-death Markov process and present a first passage time analysis of the Markov process in order to calculate the nucleation rate of a double kink with a stable kink separation. We discuss two methods to calculate the first passage time; one computes the distribution and the average of the first passage time, while the other uses a recursive relation to calculate the average first passage time. The average first passage times calculated by both approaches are shown to be in excellent agreement with direct Monte Carlo simulations for four idealized cases of double kink nucleation. Finally, we apply this approach to double kink nucleation on a screw dislocation in molybdenum and obtain the rates for formation of stable double kinks as a function of applied stress and temperature. Equivalent kMC simulations are too inefficient to be performed using commonly available computational resources.
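
    The recursion for the average first passage time has a standard closed form for birth-death chains, sketched below with invented forward (separation growth) and backward (recombination) rates; it shows how strong recombination at small separations inflates the mean nucleation time, which is exactly the regime where direct kMC becomes inefficient.

```python
def mfpt_birth_death(birth, death):
    """Mean first passage time from state 0 to state N of a birth-death
    Markov process via the standard recursion
        tau_0 = 1 / b_0,
        tau_i = 1 / b_i + (d_i / b_i) * tau_{i-1},
    where tau_i is the mean time to first step from i to i+1 and the total
    is sum(tau_i). `birth[i]` and `death[i]` are the rates out of state i
    (`death[0]` is unused)."""
    tau, total = 0.0, 0.0
    for i, b in enumerate(birth):
        tau = 1.0 / b + (death[i] / b if i else 0.0) * tau
        total += tau
    return total

# Illustrative double-kink picture: the state is the kink separation, and
# small separations strongly favor recombination (death >> birth).
birth = [1.0] * 6
death = [0.0, 50.0, 20.0, 5.0, 1.0, 0.5]
print(f"mean time to reach a stable separation: {mfpt_birth_death(birth, death):.1f}")
```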

  10. Propagating Mixed Uncertainties in Cyber Attacker Payoffs: Exploration of Two-Phase Monte Carlo Sampling and Probability Bounds Analysis

    SciTech Connect

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh

    2016-09-16

    Securing cyber systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches that model the actions of strategic decision-makers are increasingly being applied to address cybersecurity resource-allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber attacker's payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework for a notional cyber system through: 1) representation of uncertain attacker- and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
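
    A bare-bones version of two-phase (nested) Monte Carlo sampling is sketched below: the outer loop samples epistemic quantities known only to an interval, the inner loop samples aleatory variability, and the envelope of the resulting empirical quantile curves approximates probability bounds. The payoff distributions and the interval are invented for illustration.

```python
import numpy as np

def two_phase_mc(n_outer=200, n_inner=1000, seed=3):
    """Two-phase Monte Carlo for an uncertain attacker payoff: the outer
    loop draws an epistemic mean payoff from an interval, the inner loop
    draws aleatory per-attack variation about it, and each outer draw
    yields one empirical quantile curve."""
    rng = np.random.default_rng(seed)
    curves = []
    for _ in range(n_outer):
        mu = rng.uniform(5.0, 9.0)                 # epistemic: interval-valued mean
        payoff = rng.normal(mu, 1.5, n_inner)      # aleatory: per-attack variation
        curves.append(np.sort(payoff))
    curves = np.array(curves)
    return curves.min(axis=0), curves.max(axis=0)  # envelope ~ probability bounds

lo, hi = two_phase_mc()
p90 = int(0.9 * lo.size)
print(f"the 90th-percentile payoff lies between {lo[p90]:.2f} and {hi[p90]:.2f}")
```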

  11. A Monte Carlo study comparing PIV, ULS and DWLS in the estimation of dichotomous confirmatory factor analysis.

    PubMed

    Nestler, Steffen

    2013-02-01

    We conducted a Monte Carlo study to investigate the performance of the polychoric instrumental variable estimator (PIV) in comparison to unweighted least squares (ULS) and diagonally weighted least squares (DWLS) in the estimation of a confirmatory factor analysis model with dichotomous indicators. The simulation involved 144 conditions (1,000 replications per condition) that were defined by a combination of (a) two types of latent factor models, (b) four sample sizes (100, 250, 500, 1,000), (c) three factor loadings (low, moderate, strong), (d) three levels of non-normality (normal, moderately, and extremely non-normal), and (e) whether the factor model was correctly specified or misspecified. The results showed that when the model was correctly specified, PIV produced estimates that were as accurate as ULS and DWLS. Furthermore, the simulation showed that PIV was more robust to structural misspecifications than ULS and DWLS.

  12. Development of synthetic velocity - depth damage curves using a Weighted Monte Carlo method and Logistic Regression analysis

    NASA Astrophysics Data System (ADS)

    Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.

    2014-05-01

    Damage curves are the most significant component of flood loss estimation models, and their development is quite complex. Two types of damage curves exist: historical and synthetic. Historical curves are developed from loss data recorded during actual flood events; however, due to the scarcity of such data, synthetic damage curves can be developed instead, based on the analysis of expected damage under hypothetical flooding conditions. A synthetic approach is presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists in order to generate rural loss data based on the respondents' loss estimates for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution
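
    The sketch below, using invented questionnaire records and expert weights, mimics the two stages described above: weighted Monte Carlo resampling of expert loss estimates followed by a logistic regression of damage on floodwater depth and velocity. It is a schematic stand-in for the WMCLR code, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical questionnaire records: (depth m, velocity m/s, damage fraction),
# with an expert-confidence weight per record; all values are invented.
records = np.array([[0.2, 0.1, 0.05], [0.5, 0.5, 0.20], [1.0, 1.0, 0.55],
                    [1.5, 1.5, 0.80], [2.0, 2.0, 0.95]])
weights = np.array([1.0, 2.0, 2.0, 1.5, 1.0])

# Weighted Monte Carlo step: resample records in proportion to their weight,
# jittering the inputs to emulate the extra synthetic datasets.
idx = rng.choice(len(records), size=2000, p=weights / weights.sum())
X = records[idx, :2] + rng.normal(0.0, 0.05, (2000, 2))
y = (rng.random(2000) < records[idx, 2]).astype(float)  # Bernoulli damage draws

# Logistic regression of damage probability on depth and velocity, fitted by
# plain gradient ascent on the log-likelihood (a library fit would also do).
Xb = np.column_stack([np.ones(len(X)), X])
beta = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-Xb @ beta))
    beta += 0.05 * Xb.T @ (y - p) / len(y)
print("intercept, depth, velocity coefficients:", np.round(beta, 2))
```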

  13. PDF Weaving - Linking Inventory Data and Monte Carlo Uncertainty Analysis in the Study of how Disturbance Affects Forest Carbon Storage

    NASA Astrophysics Data System (ADS)

    Healey, S. P.; Patterson, P.; Garrard, C.

    2014-12-01

    Altered disturbance regimes are likely a primary mechanism by which a changing climate will affect storage of carbon in forested ecosystems. Accordingly, the National Forest System (NFS) has been mandated to assess the role of disturbance (harvests, fires, insects, etc.) on carbon storage in each of its planning units. We have developed a process which combines 1990-era maps of forest structure and composition with high-quality maps of subsequent disturbance type and magnitude to track the impact of disturbance on carbon storage. This process, called the Forest Carbon Management Framework (ForCaMF), uses the maps to apply empirically calibrated carbon dynamics built into a widely used management tool, the Forest Vegetation Simulator (FVS). While ForCaMF offers locally specific insights into the effect of historical or hypothetical disturbance trends on carbon storage, its dependence upon the interaction of several maps and a carbon model poses a complex challenge in terms of tracking uncertainty. Monte Carlo analysis is an attractive option for tracking the combined effects of error in several constituent inputs as they impact overall uncertainty. Monte Carlo methods iteratively simulate alternative values for each input and quantify how much outputs vary as a result. Variation of each input is controlled by a Probability Density Function (PDF). We introduce a technique called "PDF Weaving," which constructs PDFs that ensure that simulated uncertainty precisely aligns with uncertainty estimates that can be derived from inventory data. This hard link with inventory data (derived in this case from FIA - the US Forest Service Forest Inventory and Analysis program) both provides empirical calibration and establishes consistency with other types of assessments (e.g., habitat and water) for which NFS depends upon FIA data. Results from the NFS Northern Region will be used to illustrate PDF weaving and insights gained from ForCaMF about the role of disturbance in carbon

  14. A Monte Carlo model system for core analysis and epithermal neutron beam design at the Washington State University Radiation Center

    SciTech Connect

    Burns, T.D. Jr.

    1996-05-01

    The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power-density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment and is run as a criticality problem, yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of a given filter arrangement. Examples of the characteristics to be calculated are neutron fluxes, neutron currents, fast-neutron KERMAs, and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.

  15. Monte Carlo analysis of single fiber reflectance spectroscopy: photon path length and sampling depth.

    PubMed

    Kanick, S C; Robinson, D J; Sterenborg, H J C M; Amelink, A

    2009-11-21

    Single fiber reflectance spectroscopy is a method to noninvasively quantitate tissue absorption and scattering properties. This study utilizes a Monte Carlo (MC) model to investigate the effect that optical properties have on the propagation of photons that are collected during the single fiber reflectance measurement. MC model estimates of the single fiber photon path length (L(SF)) show excellent agreement with experimental measurements and predictions of a mathematical model over a wide range of optical properties and fiber diameters. Simulation results show that L(SF) is unaffected by changes in anisotropy (g ∈ {0.8, 0.9, 0.95}), but is sensitive to changes in phase function (Henyey-Greenstein versus modified Henyey-Greenstein). A 20% decrease in L(SF) was observed for the modified Henyey-Greenstein compared with the Henyey-Greenstein phase function; an effect that is independent of optical properties and fiber diameter and is approximated with a simple linear offset. The MC model also returns depth-resolved absorption profiles that are used to estimate the mean sampling depth (Z(SF)) of the single fiber reflectance measurement. Simulated data are used to define a novel mathematical expression for Z(SF) that is expressed in terms of optical properties, fiber diameter and L(SF). The model of sampling depth indicates that the single fiber reflectance measurement is dominated by shallow scattering events, even for large fibers; a result that suggests that the utility of single fiber reflectance measurements of tissue in vivo will be in the quantification of the optical properties of superficial tissues.

  16. Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.

    SciTech Connect

    PADOVANI, ENRICO

    2012-04-15

    Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated particles and their subsequent interactions as closely as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, which are sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however, subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0, which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.

  17. Fundamentals of Monte Carlo

    SciTech Connect

    Wollaber, Allan Benton

    2016-06-16

    This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection sampling), and An example from particle transport.
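
    Two of the outlined examples are easy to reproduce; the sketch below estimates π by counting uniform points that fall inside the quarter circle (a rejection-style count justified by the Law of Large Numbers) and draws exponential variates by inverse transform sampling.

```python
import math
import random

random.seed(5)
n = 100_000

# Estimating pi: the fraction of uniform points in the unit square that land
# inside the quarter circle converges to pi/4.
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n))
print(f"pi is approximately {4 * hits / n:.4f}")

# Inverse transform sampling: inverting the Exponential(lam) CDF
# 1 - exp(-lam*x) gives F^-1(u) = -ln(1 - u) / lam, so uniform draws map
# directly to exponential draws.
lam = 2.0
samples = [-math.log(1.0 - random.random()) / lam for _ in range(n)]
print(f"sample mean {sum(samples) / n:.4f} vs theoretical 1/lam = {1 / lam}")
```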

  18. Monte Carlo eikonal scattering

    NASA Astrophysics Data System (ADS)

    Gibbs, W. R.; Dedonder, J. P.

    2012-08-01

    Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei, including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range, using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation and with realistic densities, for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles, which vary with the nuclear pair and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm^-1.

  19. Clinical implementation of the Peregrine Monte Carlo dose calculations system for photon beam therapy

    SciTech Connect

    Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D

    1999-07-01

    PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures, and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons, and protons, for both teletherapy and brachytherapy. However, the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.

  20. Direct Simulation Monte Carlo Calculations in Support of the Columbia Shuttle Orbiter Accident Investigation

    NASA Technical Reports Server (NTRS)

    Gallis, Michael A.; LeBeau, Gerald J.; Boyles, Katie A.

    2003-01-01

    The Direct Simulation Monte Carlo method was used to provide 3-D simulations of the early entry phase of the Shuttle Orbiter. Undamaged and damaged scenarios were modeled to provide calibration points for engineering "bridging function" analyses. Currently, the simulation technology (software and hardware) is mature enough to allow realistic simulations of three-dimensional vehicles.

  1. Monte Carlo fluorescence microtomography

    NASA Astrophysics Data System (ADS)

    Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge

    2011-07-01

    Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light significantly degrades the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.

  2. SU-E-T-644: QuAArC: A 3D VMAT QA System Based On Radiochromic Film and Monte Carlo Simulation of Log Files

    SciTech Connect

    Barbeiro, A.R.; Ureba, A.; Baeza, J.A.; Jimenez-Ortega, E.; Plaza, A. Leal; Linares, R.; Mateos, J.C.; Velazquez, S.

    2015-06-15

    Purpose: VMAT involves two main sources of uncertainty: one related to the dose calculation accuracy, and the other linked to the continuous delivery of a discrete calculation. The purpose of this work is to present QuAArC, an alternative VMAT QA system to control and potentially reduce these uncertainties. Methods: An automated MC simulation of the log files recorded during VMAT treatment plan delivery was implemented in order to simulate the actual treatment parameters. The linac head models and the phase-space data of each Control Point (CP) were simulated using the EGSnrc/BEAMnrc MC code, and the corresponding dose calculation was carried out by means of BEAMDOSE, a DOSXYZnrc code modification. A cylindrical phantom was specifically designed to host films rolled up at different radial distances from the isocenter, for 3D and continuous dosimetric verification. It also allows axial and/or coronal films and point measurements with several types of ion chambers at different locations. Specific software was developed in MATLAB to process and evaluate the dosimetric measurements, incorporating the analysis of dose distributions, profiles, dose-difference maps, and the 2D/3D gamma index. It is also possible to obtain the experimental DVH reconstructed on the patient CT, by an optimization method that finds the individual contribution of each CP on the film, taking into account the total measured dose and the corresponding CP dose calculated by MC. Results: The QuAArC system showed high reproducibility of measurements, and consistency with the results obtained with the commercial system used in the verification of the evaluated treatment plans. Conclusion: A VMAT QA system based on MC simulation and high-resolution film dosimetry has been developed for treatment verification. It proves useful for studying real VMAT delivery capabilities, as well as for linac commissioning and the evaluation of other verification devices.

  3. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak add-ins of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of the Excel method was its inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine, and the further analysis of electrophysiological data from the compound action potential of the rodent optic nerve.
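
    The same parametric Monte Carlo idea, transplanted from Excel/SOLVER to Python (with NumPy and SciPy assumed) purely for illustration: fit the model, generate virtual datasets whose noise matches the residual scatter, refit each one, and read confidence intervals off the resulting parameter distribution. The exponential model and the data are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # A simple exponential growth curve standing in for a biological model.
    return a * np.exp(b * x)

rng = np.random.default_rng(6)
x = np.linspace(0.0, 5.0, 25)
y = model(x, 2.0, 0.4) + rng.normal(0.0, 0.3, x.size)  # synthetic "experiment"

# Initial fit, then the Monte Carlo step: perturb the fitted curve with noise
# matched to the residual scatter and refit each virtual dataset.
p0, _ = curve_fit(model, x, y, p0=(1.0, 0.1))
sigma = np.std(y - model(x, *p0), ddof=2)
draws = np.array([curve_fit(model, x,
                            model(x, *p0) + rng.normal(0.0, sigma, x.size),
                            p0=p0)[0]
                  for _ in range(200)])
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
print(f"a: 95% CI [{lo[0]:.3f}, {hi[0]:.3f}]")
print(f"b: 95% CI [{lo[1]:.3f}, {hi[1]:.3f}]")
```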

  4. Risk Assessment and Prediction of Flyrock Distance by Combined Multiple Regression Analysis and Monte Carlo Simulation of Quarry Blasting

    NASA Astrophysics Data System (ADS)

    Armaghani, Danial Jahed; Mahdiyar, Amir; Hasanipanah, Mahdi; Faradonbeh, Roohollah Shirani; Khandelwal, Manoj; Amnieh, Hassan Bakhshandeh

    2016-09-01

    Flyrock is considered one of the main causes of human injury, fatalities, and structural damage among all undesirable environmental impacts of blasting. Proper prediction/simulation of flyrock therefore seems essential, especially for determining the blast safety area. If proper control measures are taken, the flyrock distance can be controlled and, in return, the risk of damage can be reduced or eliminated. The first objective of this study was to develop a predictive model for flyrock estimation based on multiple regression (MR) analyses; using the developed MR model, the flyrock phenomenon was then simulated by the Monte Carlo (MC) approach. To achieve these objectives, 62 blasting operations were investigated in the Ulu Tiram quarry, Malaysia, and several controllable and uncontrollable factors were carefully recorded or calculated. The results of the MC modeling indicated that this approach is capable of simulating flyrock ranges with a good level of accuracy: the mean of the simulated flyrock distances was 236.3 m, while the measured mean was 238.6 m. Furthermore, a sensitivity analysis was conducted to investigate the effects of the model inputs on the output of the system. The analysis demonstrated that powder factor is the most influential parameter on flyrock among all model inputs. Note that the proposed MR and MC models should be utilized only in the studied area; their direct use under other conditions is not recommended.
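
    Because the fitted MR equation itself is not given in the abstract, the sketch below uses an invented linear model and invented input distributions purely to illustrate the MC step: sample the blasting inputs, propagate each sample through the regression equation, and rank the inputs by their correlation with the simulated flyrock distance.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Invented distributions for three blasting parameters.
burden   = rng.normal(3.0, 0.30, n)   # m
stemming = rng.normal(2.0, 0.25, n)   # m
powder   = rng.normal(0.6, 0.10, n)   # kg/m^3 (powder factor)

# Hypothetical multiple-regression equation evaluated per MC sample
# (coefficients invented, plus a residual noise term).
flyrock = 60 + 320 * powder - 18 * burden - 12 * stemming + rng.normal(0, 15, n)

print(f"mean simulated flyrock: {flyrock.mean():.1f} m")
# Crude sensitivity ranking: correlation of each input with the output.
for name, v in [("powder factor", powder), ("burden", burden),
                ("stemming", stemming)]:
    print(f"{name:13s} corr = {np.corrcoef(v, flyrock)[0, 1]:+.2f}")
```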

  5. Novel hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization estimation method for population pharmacokinetic data analysis.

    PubMed

    Ng, C M

    2013-10-01

    The development of a population PK/PD model, an essential component of model-based drug development, is both time- and labor-intensive. Graphics processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of a parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB on a single computer equipped with dual Xeon 6-core E5690 CPUs and an NVIDIA Tesla C2070 GPU parallel computing card containing 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data for assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimates and model computation times. A speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation times than the MCPEMCPU and can offer more than a 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core of the next generation of modeling software for population PK/PD analysis.

  6. An improved statistical analysis for predicting the critical temperature and critical density with Gibbs ensemble Monte Carlo simulation.

    PubMed

    Messerly, Richard A; Rowley, Richard L; Knotts, Thomas A; Wilding, W Vincent

    2015-09-14

    A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended due to the following results. First, the traditional propagation of error approach for estimating the standard deviations used in regression improperly weighs the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.
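
    As a concrete example of the rigorous nonlinear-regression route, the sketch below jointly fits the density scaling law and the law of rectilinear diameters to synthetic coexistence densities and reports covariance-based uncertainties on the critical constants; the data, noise level, and initial guesses are invented, and the paper's error model and weighting scheme are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

BETA = 0.326  # Ising-like critical exponent used in the customary scaling law

def coexistence(T, Tc, rhoc, A, B):
    """Stacked model: the first half of the output is the liquid density and
    the second half the vapor density, from
        rho_l - rho_v       = B * (1 - T/Tc)**BETA   (scaling law)
        (rho_l + rho_v) / 2 = rhoc + A * (Tc - T)    (rectilinear diameters)."""
    diam = rhoc + A * (Tc - T)
    diff = B * np.maximum(1.0 - T / Tc, 0.0) ** BETA
    return np.concatenate([diam + diff / 2.0, diam - diff / 2.0])

# Synthetic GEMC-like coexistence densities with noise (illustrative values).
rng = np.random.default_rng(8)
T = np.linspace(300.0, 360.0, 7)
y = coexistence(T, 380.0, 0.25, 1.2e-3, 0.9) + rng.normal(0.0, 0.004, 2 * T.size)

popt, pcov = curve_fit(coexistence, T, y, p0=(375.0, 0.3, 1e-3, 1.0))
err = np.sqrt(np.diag(pcov))
print(f"Tc    = {popt[0]:.1f} +/- {err[0]:.1f} K")
print(f"rho_c = {popt[1]:.3f} +/- {err[1]:.3f}")
```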

  7. An improved statistical analysis for predicting the critical temperature and critical density with Gibbs ensemble Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Messerly, Richard A.; Rowley, Richard L.; Knotts, Thomas A.; Wilding, W. Vincent

    2015-09-01

    A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended due to the following results. First, the traditional propagation of error approach for estimating the standard deviations used in regression improperly weighs the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.

  8. Using calibration constrained Monte Carlo analysis of alternative conceptual models in land use management of drained fens

    NASA Astrophysics Data System (ADS)

    Rossi, Pekka; Ala-aho, Pertti; Doherty, John; Kløve, Bjørn

    2013-04-01

    Quantification of groundwater model uncertainties is one of the key aspects of using models to direct land use or water management. An esker aquifer with an area of 90 km2 was studied to understand how the surrounding peatland forestry drainage, groundwater abstraction, and climate variability can affect the aquifer groundwater level and the water levels of the groundwater-dependent lakes of the area. The aquifer was studied with steady-state groundwater models using three alternative conceptual geological models of the esker, applying calibration-constrained Null Space Monte Carlo uncertainty analysis and linear analysis to each model. This kind of simulation approach has not previously been used in peatland management. The models and analyses were used to assess the effects of different land use scenarios, e.g. peatland drainage restoration or water abstraction for a nearby city, and of climate variability. Data from the models and analyses give decision makers insight into how different management practices in peatlands can affect the groundwater system, given the uncertainties arising from the geological understanding, the hydrological measurements, and the model conceptualization. Results from the models can be used, for example, to pinpoint for restoration or conservation the specific peatland drainage areas for which the models suggest the clearest connection to the aquifer water level.

  9. Electronic structure quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bajdich, Michal; Mitas, Lubos

    2009-04-01

    Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of many-body quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for the efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on electronic structure QMC, i.e., methods relevant for systems described by electron-ion Hamiltonians. Some of the key QMC achievements include the direct treatment of electron correlation, accuracy in predicting energy differences, and favorable scaling in the system size. Calculations of atoms, molecules, clusters, and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation, which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in the predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes the analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the high-dimensional nodal hypersurfaces into 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates, such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with

  10. Is adding more indicators to a latent class analysis beneficial or detrimental? Results of a Monte-Carlo study.

    PubMed

    Wurpts, Ingrid C; Geiser, Christian

    2014-01-01

    The purpose of this study was to examine how adding more indicators or a covariate influences the performance of latent class analysis (LCA). We varied the sample size (100 ≤ N ≤ 2000), the number and quality of binary indicators (between 4 and 12 indicators, with conditional response probabilities of [0.3, 0.7], [0.2, 0.8], or [0.1, 0.9]), and the strength of covariate effects (zero, small, medium, large) in a Monte Carlo simulation study of 2- and 3-class models. The results suggested that, in general, a larger sample size, more indicators, higher-quality indicators, and a larger covariate effect lead to more converged and proper replications, as well as fewer boundary parameter estimates and less parameter bias. Furthermore, interactions among these study factors demonstrated how using more or higher-quality indicators, as well as a larger covariate effect size, can sometimes compensate for a small sample size. Including a covariate appeared to be generally beneficial, although the covariate parameters themselves showed relatively large bias. Our results provide useful information for practitioners designing an LCA study in terms of highlighting the factors that lead to better or worse performance of LCA.

  11. Application of Markov chain Monte Carlo analysis to biomathematical modeling of respirable dust in US and UK coal miners.

    PubMed

    Sweeney, Lisa M; Parker, Ann; Haber, Lynne T; Tran, C Lang; Kuempel, Eileen D

    2013-06-01

    A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model.

  12. Quantum Monte Carlo analysis of a charge ordered insulating antiferromagnet: The Ti4O7 Magneli phase

    DOE PAGES

    Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T.; ...

    2016-06-07

    The Magneli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of its interplay between charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. We have examined three low-lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate Quantum Monte Carlo methods. We compare our results to those obtained from density functional theory-based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density-functional theory methods. However, the energy differences and spin distributions differ. Here, a detailed analysis suggests that non-local exchange-correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates for spin moments, distributions, energy differences and energy gaps.

  13. Quantum Monte Carlo analysis of a charge ordered insulating antiferromagnet: the Ti4O7 Magnéli phase

    DOE PAGES

    Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T.; ...

    2016-06-07

    The Magnéli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of its interplay between charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. In this paper, we have examined three low-lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate quantum Monte Carlo methods. We compare our results to those obtained from density functional theory-based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density-functional theory methods. However, the energy differences and spin distributions differ. Finally, a detailed analysis suggests that non-local exchange–correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates for spin moments, distributions, energy differences and energy gaps.

  14. Increased risk of orofacial clefts associated with maternal obesity: case–control study and Monte Carlo-based bias analysis

    PubMed Central

    Stott-Miller, Marni; Heike, Carrie L.; Kratz, Mario; Starr, Jacqueline R.

    2010-01-01

    Our objective was to evaluate whether infants born to obese or diabetic women are at higher risk of non-syndromic orofacial clefting. We conducted a population-based case–control study using Washington State birth certificate and hospitalisation data for the years 1987–2005. Cases were infants born with orofacial clefts (n = 2153) and controls were infants without orofacial clefts (n = 18 070). The primary exposures were maternal obesity (body mass index ≥30) and diabetes (either pre-existing or gestational). We estimated adjusted odds ratios (ORs) to compare, for mothers of cases and controls, the proportions of obese vs. normal-weight women and diabetic vs. non-diabetic women. We additionally performed Monte Carlo-based simulation analysis to explore possible influences of biases. Obese women had a small increased risk of isolated orofacial clefts in their offspring compared with women of normal body mass index [adjusted OR 1.26; 95% confidence interval 1.03, 1.55]. Results were similar regardless of the type of cleft. Bias analyses suggest that the estimates may represent underlying ORs of stronger magnitude. Results for diabetic women were highly imprecise and inconsistent. We and others have observed weak associations of similar magnitude between maternal obesity and risk of non-syndromic orofacial clefts. These results could be due to bias or residual confounding. However, it is also possible that they represent a stronger underlying association. More precise exposure measurement could help distinguish between these two possibilities. PMID:20670231

  15. A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF).

    PubMed

    Hansson, Marie; Isaksson, Mats

    2007-04-07

    X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in cases where the measurement situation differs greatly from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with PENELOPE 2005 to examine a procedure in which a parameter, independent of the iodine concentration, was used to estimate the expected detector signal if the thyroid had been measured outside the neck. To increase the simulation speed and reduce the variance, electrons were excluded from the simulations and interaction forcing was implemented. Special attention was given to the geometry features: the analysed volume, the source-sample-detector distances, and the thyroid lobe size and position in the neck. The implementation of interaction forcing and the exclusion of electrons had no obvious adverse effect on the calculated quotients, while the simulation time involved in an individual calibration was low enough to be clinically feasible.

  16. Application of Markov chain Monte Carlo analysis to biomathematical modeling of respirable dust in US and UK coal miners

    PubMed Central

    Sweeney, Lisa M.; Parker, Ann; Haber, Lynne T.; Tran, C. Lang; Kuempel, Eileen D.

    2015-01-01

    A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model. PMID:23454101

  17. Monte Carlo simulations of subsurface analysis of painted layers in micro-scale spatially offset Raman spectroscopy.

    PubMed

    Matousek, Pavel; Conti, Claudia; Colombo, Chiara; Realini, Marco

    2015-09-01

    A recently developed micrometer-scale spatially offset Raman spectroscopy (micro-SORS) method provides a new analytical capability for investigating nondestructively the chemical composition of subsurface, micrometer-scale-thick, diffusely scattering layers at depths beyond the reach of conventional confocal Raman microscopy. Here we provide, for the first time, the theoretical foundations for the micro-SORS defocusing concept, based on Monte Carlo simulations. Specifically, we investigate a defocusing variant of micro-SORS that we used in our recent proof-of-concept study, in conditions involving thin, diffusely scattering layers on top of an extended, diffusely scattering substrate. This configuration is pertinent, for example, to the subsurface analysis of painted layers in cultural heritage studies. The depth of origin of the Raman signal and the relative micro-SORS enhancement of the sublayer signals are studied as a function of layer thickness, sample photon transport length, and absorption. The model predicts that the sublayer enhancement initially increases rapidly with increasing defocusing, ultimately reaching a plateau, and that the magnitude of the enhancement is larger for thicker layers. The simulations also indicate that the penetration depths of micro-SORS can be between one and two orders of magnitude larger than those reached using conventional confocal Raman microscopy. The model provides deeper insight into the underlying Raman photon migration mechanisms, permitting more effective optimization of experimental conditions for specific sample parameters.

  18. A MARKOV CHAIN MONTE CARLO ALGORITHM FOR ANALYSIS OF LOW SIGNAL-TO-NOISE COSMIC MICROWAVE BACKGROUND DATA

    SciTech Connect

    Jewell, J. B.; O'Dwyer, I. J.; Huey, Greg; Gorski, K. M.; Eriksen, H. K.; Wandelt, B. D. E-mail: h.k.k.eriksen@astro.uio.no

    2009-05-20

    We present a new Markov Chain Monte Carlo (MCMC) algorithm for cosmic microwave background (CMB) analysis in the low signal-to-noise regime. This method builds on and complements the previously described CMB Gibbs sampler, and effectively solves the low signal-to-noise inefficiency problem of the direct Gibbs sampler. The new algorithm is a simple Metropolis-Hastings sampler with a general proposal rule for the power spectrum, C_l, followed by a particular deterministic rescaling operation of the sky signal, s. The acceptance probability for this joint move depends on the sky map only through the difference in χ² between the original and proposed sky samples, and it is close to unity in the low signal-to-noise regime. The algorithm is completed by alternating this move with a standard Gibbs move. Together, these two proposals constitute a computationally efficient algorithm for mapping out the full joint CMB posterior in both the high and low signal-to-noise regimes.

  19. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described, and the programs are used to compare instruments at continuous-wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  20. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence.

    PubMed

    Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung

    2016-01-01

    Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and traditional family studies. The "missing heritability" has been suggested to be due to lack of studies focused on epistasis, also called gene-gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power. However, sufficiently detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called "Epistasis Test in Meta-Analysis" (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates of ETMA, individual data analysis and the conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded acceptable type I error and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene-gene interactions in the renin-angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We concluded that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html].

  2. Meta-Analysis of Single-Case Data: A Monte Carlo Investigation of a Three Level Model

    ERIC Educational Resources Information Center

    Owens, Corina M.

    2011-01-01

    Numerous ways to meta-analyze single-case data have been proposed in the literature, however, consensus on the most appropriate method has not been reached. One method that has been proposed involves multilevel modeling. This study used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw data multilevel…

  3. Parallel tempering Monte Carlo combined with clustering Euclidean metric analysis to study the thermodynamic stability of Lennard-Jones nanoclusters

    NASA Astrophysics Data System (ADS)

    Cezar, Henrique M.; Rondina, Gustavo G.; Da Silva, Juarez L. F.

    2017-02-01

    A basic requirement for an atom-level understanding of nanoclusters is the knowledge of their atomic structure. This understanding is incomplete if it does not take into account temperature effects, which play a crucial role in phase transitions and changes in the overall stability of the particles. Finite size particles present intricate potential energy surfaces, and rigorous descriptions of temperature effects are best achieved by exploiting extended ensemble algorithms, such as Parallel Tempering Monte Carlo (PTMC). In this study, we employed the PTMC algorithm, implemented from scratch, to sample configurations of LJ_n (n = 38, 55, 98, 147) particles at a wide range of temperatures. The heat capacities and phase transitions obtained with our PTMC implementation are consistent with all the expected features for the LJ nanoclusters, e.g., solid-solid and solid-liquid transitions. To identify the known phase transitions and assess the prevalence of various structural motifs available at different temperatures, we propose a combination of a Leader-like clustering algorithm based on a Euclidean metric with the PTMC sampling. This combined approach is further compared with the more computationally demanding bond order analysis, typically employed for this kind of problem. We show that the clustering technique yields the same results in most cases, with the advantage that it requires no previous knowledge of the parameters defining each geometry. Being simple to implement, we believe that this straightforward clustering approach is a valuable data analysis tool that can provide insights into the physics of finite size particles with a few to thousands of atoms at a relatively low cost.
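
    The core of the PTMC algorithm is ordinary Metropolis sampling within each replica plus occasional swaps of configurations between neighbouring temperatures, accepted with probability min(1, exp[(β_i − β_j)(E_i − E_j)]). A minimal Python sketch on a one-dimensional double well standing in for the LJ energy surface; the temperature ladder and step size are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(1)
      E = lambda x: (x ** 2 - 1.0) ** 2          # double well, minima at x = +/-1
      T = np.array([0.05, 0.1, 0.2, 0.5, 1.0])   # ladder from cold to hot
      beta = 1.0 / T
      x = np.zeros(len(T))                       # one walker per temperature

      for sweep in range(5000):
          for i in range(len(T)):                # Metropolis move in each replica
              x_new = x[i] + rng.normal(0.0, 0.5)
              if np.log(rng.random()) < -beta[i] * (E(x_new) - E(x[i])):
                  x[i] = x_new
          i = rng.integers(len(T) - 1)           # swap neighbouring replicas
          if np.log(rng.random()) < (beta[i] - beta[i + 1]) * (E(x[i]) - E(x[i + 1])):
              x[i], x[i + 1] = x[i + 1], x[i]
      print("cold replica ends near a minimum:", x[0])

    The swaps let the cold replica escape one well through the hot replicas' unhindered exploration, which is exactly what single-temperature Metropolis struggles to do on rugged landscapes.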

  4. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    NASA Astrophysics Data System (ADS)

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological process in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model

  5. A Monte Carlo Investigation of the Analysis of Variance Applied to Non-Independent Bernoulli Variates.

    ERIC Educational Resources Information Center

    Draper, John F., Jr.

    The applicability of the Analysis of Variance, ANOVA, procedures to the analysis of dichotomous repeated measure data is described. The design models for which data were simulated in this investigation were chosen to represent simple cases of two experimental situations: situation one, in which subjects' responses to a single randomly selected set…

  6. Monte Carlo analysis: error of extrapolated thermal conductivity from molecular dynamics simulations

    SciTech Connect

    Liu, Xiang-Yang; Andersson, Anders David

    2016-11-07

    In this short report, we give an analysis of the extrapolated thermal conductivity of UO2 from earlier molecular dynamics (MD) simulations [1]. Because almost all material properties, e.g. fission gas release, are functions of temperature, the fuel thermal conductivity is the most important parameter from a model sensitivity perspective [2]. Thus, it is useful to perform such an analysis.
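
    The generic recipe behind such an analysis is easy to sketch: perturb the MD data points within their error bars, refit the extrapolating model, and collect the spread of the extrapolated value. The Python sketch below uses the common 1/k = A + B·T phonon-conductivity form with synthetic placeholder data and an assumed 15% error bar, not the values of Ref. [1]:

      import numpy as np

      rng = np.random.default_rng(2)
      T = np.array([300.0, 500.0, 700.0, 900.0, 1100.0])   # MD temperatures, K
      k = np.array([8.0, 5.2, 3.9, 3.1, 2.6])              # conductivities, W/m/K (synthetic)
      sigma = 0.15 * k                                     # assumed MD error bars

      T_star, trials = 1500.0, 10000
      k_star = np.empty(trials)
      for i in range(trials):
          k_i = k + sigma * rng.standard_normal(k.size)    # perturb the data
          A, B = np.polynomial.polynomial.polyfit(T, 1.0 / k_i, 1)
          k_star[i] = 1.0 / (A + B * T_star)               # extrapolated value
      print(f"k(1500 K) = {k_star.mean():.2f} +/- {k_star.std():.2f} W/m/K")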

  7. Monte Carlo Criticality Analysis of Simple Geometries Containing Tungsten-Rhenium Alloys Engrained with Uranium Dioxide and Uranium Mononitride

    SciTech Connect

    Jonathan A. Webb; Indrajit Charit

    2011-08-01

    The critical mass and dimensions of simple geometries containing highly enriched uranium dioxide (UO2) and uranium mononitride (UN) encapsulated in tungsten-rhenium alloys are determined using MCNP5 criticality calculations. Spheres as well as cylinders with length-to-radius ratios of 1.82 are computationally built to consist of 60 vol.% fuel and 40 vol.% metal matrix. Within the geometries the uranium is enriched to 93 wt.% uranium-235, and the rhenium content within the metal alloy was modeled over a range of 0 to 30 at.%. The spheres containing UO2 were determined to have a critical radius of 18.29 cm to 19.11 cm and a critical mass ranging from 366 kg to 424 kg. The cylinders containing UO2 were found to have a critical radius ranging from 17.07 cm to 17.844 cm with a corresponding critical mass of 406 kg to 471 kg. Spheres engrained with UN were determined to have a critical radius ranging from 14.82 cm to 15.19 cm and a critical mass between 222 kg and 242 kg. Cylinders engrained with UN were determined to have a critical radius ranging from 13.811 cm to 14.155 cm with a corresponding critical mass of 245 kg to 267 kg. The critical geometries were also computationally submerged in a neutronically infinite medium of fresh water to determine the effects of rhenium addition on criticality accidents due to water submersion. The Monte Carlo analysis demonstrated that rhenium addition of up to 30 at.% can reduce the excess reactivity due to water submersion by up to $5.07 for UO2-fueled cylinders, $3.87 for UO2-fueled spheres, and approximately $3.00 for UN-fueled spheres and cylinders.

  8. Water and tissue equivalence of a new PRESAGE® formulation for 3D proton beam dosimetry: A Monte Carlo study

    SciTech Connect

    Gorjiara, Tina; Kuncic, Zdenka; Doran, Simon; Adamovics, John; Baldock, Clive

    2012-11-15

    Purpose: To evaluate the water and tissue equivalence of a new PRESAGE® 3D dosimeter for proton therapy. Methods: The GEANT4 software toolkit was used to calculate and compare total dose delivered by a proton beam with mean energy 62 MeV in a PRESAGE® dosimeter, water, and soft tissue. The dose delivered by primary protons and secondary particles was calculated. Depth-dose profiles and isodose contours of deposited energy were compared for the materials of interest. Results: The proton beam range was found to be ≈27 mm for PRESAGE®, 29.9 mm for soft tissue, and 30.5 mm for water. This can be attributed to the lower collisional stopping power of water compared to soft tissue and PRESAGE®. The difference between total dose delivered in PRESAGE® and total dose delivered in water or tissue is less than 2% across the entire water/tissue-equivalent range of the proton beam. The largest difference between total dose in PRESAGE® and total dose in water is 1.4%, while for soft tissue it is 1.8%. In both cases, this occurs at the distal end of the beam. Nevertheless, the authors find that the PRESAGE® dosimeter is overall more tissue-equivalent than water-equivalent before the Bragg peak. After the Bragg peak, the differences in the depth doses are found to be due to differences in primary proton energy deposition; PRESAGE® and soft tissue stop protons more rapidly than water. The dose delivered by secondary electrons in the PRESAGE® differs by less than 1% from that in soft tissue and water. The contribution of secondary particles to the total dose is less than 4% for electrons and ≈1% for protons in all the materials of interest. Conclusions: These results demonstrate that the new PRESAGE® formulation may be considered both tissue- and water-equivalent.

  9. BOOTSTRAPPING AND MONTE CARLO METHODS OF POWER ANALYSIS USED TO ESTABLISH CONDITION CATEGORIES FOR BIOTIC INDICES

    EPA Science Inventory

    Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically the number of categories has been based on professional judgement. Alternatively, statistical methods such as power analysis can be used to determine the ...

  10. Monte Carlo Algorithms for a Bayesian Analysis of the Cosmic Microwave Background

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Eriksen, H. K.; O'Dwyer, I. J.; Wandelt, B. D.; Gorski, K.; Knox, L.; Chu, M.

    2006-01-01

    A viewgraph presentation is given reviewing the Bayesian approach to Cosmic Microwave Background (CMB) analysis, its numerical implementation with Gibbs sampling, a summary of its application to WMAP I, and work in progress on generalizations to polarization, foregrounds, asymmetric beams, and 1/f noise.

  11. Factor Analysis with Ordinal Indicators: A Monte Carlo Study Comparing DWLS and ULS Estimation

    ERIC Educational Resources Information Center

    Forero, Carlos G.; Maydeu-Olivares, Alberto; Gallardo-Pujol, David

    2009-01-01

    Factor analysis models with ordinal indicators are often estimated using a 3-stage procedure where the last stage involves obtaining parameter estimates by least squares from the sample polychoric correlations. A simulation study involving 324 conditions (1,000 replications per condition) was performed to compare the performance of diagonally…

  12. Quantum Gibbs ensemble Monte Carlo

    SciTech Connect

    Fantoni, Riccardo; Moroni, Saverio

    2014-09-21

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  13. Monte Carlo Analysis of Airport Throughput and Traffic Delays Using Self Separation Procedures

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Sturdy, James L.

    2006-01-01

    This paper presents the results of three simulation studies of throughput and delay times of arrival and departure operations performed at non-towered, non-radar airports using self-separation procedures. The studies were conducted as part of the validation process of the Small Aircraft Transportation Systems Higher Volume Operations (SATS HVO) concept and include an analysis of the predicted airport capacity under different traffic conditions and system constraints at increasing levels of demand. Results show that SATS HVO procedures can dramatically increase capacity at non-towered, non-radar airports and that the concept offers the potential for increasing capacity of the overall air transportation system.

  14. Monte Carlo simulation for correlation analysis of average glandular dose by breast thickness and glandular ratio in breast tissue.

    PubMed

    Kim, Sang-Tae; Cho, Jung-Keun

    2014-01-01

    Glandular breast tissue is radio-sensitive, so measurement of the Average Glandular Dose (AGD) is an essential part of evaluating an X-ray mammography device. Because AGD is difficult to measure directly, Monte Carlo simulation was used to analyze the correlation between AGD, breast thickness and glandular ratio. The AGDs calculated through the Monte Carlo simulation were 1.64, 1.41 and 0.88 mGy. The simulated AGDs depend mainly on the glandular ratio of the breast: as the proportion of glandular tissue increases, absorption of low-energy photons increases, and the AGD increases with it. In addition, the thicker the breast, the higher the AGD. This study can serve as basic data for establishing diagnostic reference levels for mammography.

  15. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
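
    The mode-searching step can be sketched compactly: build a residual energy by subtracting a Gaussian mixture centred on the known modes from the target density, then minimise it. The Python sketch below (requiring SciPy) uses a two-mode toy target; the mixture widths and the choice of optimiser are illustrative assumptions, not the authors' implementation:

      import numpy as np
      from scipy.optimize import minimize

      known_modes = [-3.0]                         # modes discovered so far

      def target(x):                               # two-mode toy density
          return 0.5 * np.exp(-0.5 * (x + 3.0) ** 2) \
               + 0.5 * np.exp(-0.5 * (x - 3.0) ** 2)

      def gmm(x):                                  # approximate mixture on known modes
          return sum(np.exp(-0.5 * (x - m) ** 2) for m in known_modes)

      def residual_energy(v):                      # -log of the unexplained density
          r = target(v[0]) - gmm(v[0])
          return -np.log(max(r, 1e-300))           # clip where the mixture over-covers

      # descending the residual energy from a generic start uncovers the mode
      # the mixture does not yet explain (near x = +3 here)
      res = minimize(residual_energy, x0=[0.5])
      print("new mode found near x =", res.x[0])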

  17. Acoustic effects analysis utilizing speckle pattern with fixed-particle Monte Carlo

    NASA Astrophysics Data System (ADS)

    Vakili, Ali; Hollmann, Joseph A.; Holt, R. Glynn; DiMarzio, Charles A.

    2016-03-01

    Optical imaging in a turbid medium is limited because of the multiple scattering a photon undergoes while traveling through the medium. Therefore, optical imaging is unable to provide high-resolution information deep in the medium. In the case of soft tissue, acoustic waves, unlike light, can travel through the medium with negligible scattering. However, acoustic waves cannot provide medically relevant contrast as well as light can. Hybrid solutions have been applied to exploit the benefits of both imaging methods. A focused acoustic wave generates a force inside an acoustically absorbing medium known as acoustic radiation force (ARF). ARF induces particle displacement within the medium. The amount of displacement is a function of the mechanical properties of the medium and the applied force. To monitor the displacement induced by the ARF, speckle pattern analysis can be used. The speckle pattern is the result of interfering optical waves with different phases. As light travels through the medium, it undergoes several scattering events, generating different scattering paths which depend on the locations of the particles. Light waves that travel along these paths have different phases (different optical path lengths). ARF induces displacement of scatterers within the acoustic focal volume and changes the optical path lengths. In addition, the temperature rise due to conversion of absorbed acoustic energy to heat changes the index of refraction and, therefore, the optical path lengths of the scattering paths. The result is a change in the speckle pattern. Results suggest that the average change in the speckle pattern measures the displacement of particles and the temperature rise within the acoustic focal area, and hence can provide the mechanical and thermal properties of the medium.

  18. Isotropic Monte Carlo Grain Growth

    SciTech Connect

    Mason, J.

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
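
    The underlying isotropic Potts-model update is simple to sketch. The following Python sketch uses a square lattice for brevity rather than the hexagonal grid IMCGG uses, with an assumed lattice size, orientation count, and temperature:

      import numpy as np

      rng = np.random.default_rng(3)
      L, Q, kT = 64, 32, 0.3                 # lattice size, orientations, temperature
      spins = rng.integers(Q, size=(L, L))   # random initial grain IDs

      def neighbours(i, j):                  # periodic boundary conditions
          return [spins[(i - 1) % L, j], spins[(i + 1) % L, j],
                  spins[i, (j - 1) % L], spins[i, (j + 1) % L]]

      for step in range(100 * L * L):        # ~100 Monte Carlo steps per site
          i, j = rng.integers(L), rng.integers(L)
          nbrs = neighbours(i, j)
          q_new = nbrs[rng.integers(4)]      # propose a neighbour's orientation
          # boundary energy = number of unlike neighbours (isotropic case)
          dE = sum(q_new != n for n in nbrs) - sum(spins[i, j] != n for n in nbrs)
          if dE <= 0 or rng.random() < np.exp(-dE / kT):
              spins[i, j] = q_new            # accept the reorientation
      print("distinct grain IDs left:", len(np.unique(spins)))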

  19. SU-E-T-761: TOMOMC, A Monte Carlo-Based Planning VerificationTool for Helical Tomotherapy

    SciTech Connect

    Chibani, O; Ma, C

    2015-06-15

    Purpose: To present a new Monte Carlo code (TOMOMC) to calculate 3D dose distributions for patients undergoing helical tomotherapy treatments. TOMOMC performs CT-based dose calculations using the actual dynamic variables of the machine (couch motion, gantry rotation, and MLC sequences). Methods: TOMOMC is based on the GEPTS (Gamma Electron and Positron Transport System) general-purpose Monte Carlo system (Chibani and Li, Med. Phys. 29, 2002, 835). First, beam models for the Hi-Art TomoTherapy machine were developed for the different beam widths (1, 2.5 and 5 cm). The beam model accounts for the exact geometry and composition of the different components of the linac head (target, primary collimator, jaws and MLCs). The beam models were benchmarked by comparing calculated PDDs and lateral/transversal dose profiles with ionization chamber measurements in water. See figures 1-3. The MLC model was tuned in such a way that the tongue-and-groove effect and inter-leaf and intra-leaf transmission are modeled correctly. See figure 4. Results: By simulating the exact patient anatomy and the actual treatment delivery conditions (couch motion, gantry rotation and MLC sinogram), TOMOMC is able to calculate the 3D patient dose distribution, which is in principle more accurate than the one from the treatment planning system (TPS) since it relies on the Monte Carlo method (the gold standard). Dose-volume parameters based on the Monte Carlo dose distribution can also be compared to those produced by the TPS. The attached figures show isodose lines for a H&N patient calculated by TOMOMC (transverse and sagittal views). Analysis of the differences between TOMOMC and the TPS is ongoing work for different anatomic sites. Conclusion: A new Monte Carlo code (TOMOMC) was developed for tomotherapy patient-specific QA. The next step in this project is implementing GPU computing to speed up the Monte Carlo simulation and make Monte Carlo-based treatment verification a practical solution.

  20. SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON

    SciTech Connect

    Goluoglu, Sedat; Bekar, Kursat B; Wiarda, Dorothea

    2012-01-01

    The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.

  1. Behavioral Analysis of Visitors to a Medical Institution’s Website Using Markov Chain Monte Carlo Methods

    PubMed Central

    Tani, Yuji

    2016-01-01

    Background Consistent with the "attention, interest, desire, memory, action" (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet to select information for their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs for visitor search behavior. At this time, research applying the patient searching behavior model to medical institution website visitors is lacking. Objective We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. Methods We used the website access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 30, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explanatory variable, we built a binomial probit model that allows inspection of the contents of each purpose variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformative prior distribution for this model and determined the visit probability classified by keyword for each category. Results In the case of the keyword "clinic name," the visit probability to the website, repeated visit to the website, and contents page for medical examination was positive. In the case of the

  2. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
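
    The two initial designs are easy to contrast in code. The Python sketch below builds both a random LHS and a midpoint LHS (one point per row/column stratum) and scores them with a simple minimum-pairwise-distance space-filling criterion; a real OLHS code would pass these designs to an optimizer rather than use them directly:

      import numpy as np

      rng = np.random.default_rng(4)
      n, d = 20, 2                                  # sample size, dimensions

      def lhs(midpoint):
          # one point per stratum in each dimension, strata randomly permuted
          strata = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
          u = 0.5 if midpoint else rng.random((n, d))
          return (strata + u) / n

      def min_pairwise_distance(x):                 # crude space-filling score
          diff = x[:, None, :] - x[None, :, :]
          dist = np.sqrt((diff ** 2).sum(axis=-1))
          return dist[np.triu_indices(len(x), k=1)].min()

      print("random   LHS:", min_pairwise_distance(lhs(midpoint=False)))
      print("midpoint LHS:", min_pairwise_distance(lhs(midpoint=True)))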

  3. Neutron analysis of spent fuel storage installation using parallel computing and advance discrete ordinates and Monte Carlo techniques.

    PubMed

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12 discrete ordinates PENTRAN (Parallel Environment Neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous energy cross sections, and the fact that the CASK library is based on the old ENDF

  4. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.

  5. Monte Carlo simulation for the transport beamline

    NASA Astrophysics Data System (ADS)

    Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.

    2013-07-01

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement a MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.

  6. Predictive uncertainty analysis of a highly heterogeneous field-scale groundwater model using null-space Monte Carlo

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2011-12-01

    Quantification of prediction uncertainty resulting from estimated parameters is critical to provide accurate predictive models for field-scale groundwater flow and transport problems. We examine and compare two approaches to defining predictive uncertainty where both approaches utilize pilot points to parameterize spatially heterogeneous fields. The first approach is the independent calibration of multiple initial "seed" fields created through geostatistical simulation and conditioned to observation data, resulting in an ensemble of calibrated property fields that defines uncertainty in the calibrated parameters. The second approach is the null-space Monte Carlo (NSMC) method that employs a decomposition of the Jacobian matrix from a single calibration to define a minimum number of linear combinations of parameters that account for the majority of the sensitivity of the overall calibration to the observed data. Random vectors are applied to the remaining linear combinations of parameters, the null space, to create an ensemble of fields, each of which remains calibrated to the data. We compare these two approaches using a highly-parameterized groundwater model of the Culebra dolomite in southeastern New Mexico. Observation data include two decades of steady-state head measurements and pumping test results. The predictive performance measure is advective travel time from a point to a prescribed boundary. Calibrated parameters at a set of pilot points include transmissivity, the horizontal hydraulic anisotropy, the storativity, and a section of recharge (> 1200 parameters in total). First, we calibrate 200 multiple random seed fields generated through geostatistical simulation conditioned to observation data. The 11 fields that contain the best and worst scenarios in terms of calibration and travel time analysis among the best 100 calibrated results provide a basis for the NSMC method. The NSMC method is used to generate 200 calibration-constrained parameter fields
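
    The linear-algebra core of the NSMC idea can be sketched in a few lines: split parameter space with an SVD of the Jacobian and perturb only the combinations the observations cannot see, so each perturbed field remains (nearly) calibrated. The Python sketch below uses a random stand-in Jacobian rather than a groundwater model:

      import numpy as np

      rng = np.random.default_rng(5)
      n_obs, n_par = 30, 200                      # few observations, many parameters
      J = rng.standard_normal((n_obs, n_par))     # stand-in sensitivity matrix
      p_cal = rng.standard_normal(n_par)          # "calibrated" parameter set

      U, s, Vt = np.linalg.svd(J, full_matrices=True)
      k = np.sum(s > 1e-8 * s[0])                 # numerical rank = solution space
      V_null = Vt[k:].T                           # null-space basis, n_par x (n_par - k)

      fields = []
      for _ in range(200):                        # ensemble of calibrated fields
          z = rng.standard_normal(V_null.shape[1])
          fields.append(p_cal + V_null @ z)       # move only within the null space
      # the simulated-observation change J @ (field - p_cal) vanishes by construction
      print("max |J dp|:", max(np.abs(J @ (f - p_cal)).max() for f in fields))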

  7. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the 'Analytical Continuation Problem.' For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey 'Kondo Universality.' 24 refs., 7 figs.

  8. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  9. Probabilistic uncertainty analysis based on Monte Carlo simulations of co-combustion of hazelnut hull and coal blends: Data-driven modeling and response surface optimization.

    PubMed

    Buyukada, Musa

    2017-02-01

    The aim of the present study is to investigate the thermogravimetric behaviour of the co-combustion of hazelnut hull (HH) and coal blends using three approaches: (1) multi non-linear regression (MNLR) modeling based on Box-Behnken design (BBD), (2) optimization based on response surface methodology (RSM), and (3) probabilistic uncertainty analysis based on Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The response variable was predicted by the best-fit MNLR model with a predicted regression coefficient (R²pred) of 99.5%. A blend ratio of 90/10 (HH to coal, wt.%), a temperature of 405 °C, and a heating rate of 44 °C min⁻¹ were determined as the RSM-optimized conditions, with a mass loss of 87.4%. Validation experiments with three replications were performed to justify the predicted mass-loss percentage, and a mass loss of 87.5 ± 0.2% was obtained under the RSM-optimized conditions. The probabilistic uncertainty analysis was performed using Monte Carlo simulations.

  10. Facing Challenges for Monte Carlo Analysis of Full PWR Cores : Towards Optimal Detail Level for Coupled Neutronics and Proper Diffusion Data for Nodal Kinetics

    NASA Astrophysics Data System (ADS)

    Nuttin, A.; Capellan, N.; David, S.; Doligez, X.; El Mhari, C.; Méplan, O.

    2014-06-01

    Safety analysis of innovative reactor designs requires three-dimensional modeling to ensure a sufficiently realistic description, starting from steady state. Modern Monte Carlo (MC) neutron transport codes are suitable candidates for simulating large complex geometries, possibly with innovative fuels. But if local values such as power densities over small regions are needed, reliable results become more difficult to obtain within an acceptable computation time. In this context, the NEA has proposed a performance test of full PWR core calculations based on Monte Carlo neutron transport, which we have used to define an optimal detail level for convergence of steady-state coupled neutronics. Coupling between MCNP for neutronics and the subchannel code COBRA for thermal-hydraulics has been performed using the C++ tool MURE, developed over about ten years at LPSC and IPNO. In parallel with this study and within the same MURE framework, a simplified code of nodal kinetics based on two-group and few-point diffusion equations has been developed and validated on a typical CANDU LOCA. Methods for the computation of the necessary diffusion data have been defined and applied to NU (Nat. U) and Th CANDU fuel after assembly evolutions by MURE. The simplicity of the CANDU LOCA model has made possible a comparison of the behaviour of these two fuels during such a transient.

  11. Analysis of dense-medium light scattering with applications to corneal tissue: experiments and Monte Carlo simulations.

    PubMed

    Kim, K B; Shanyfelt, L M; Hahn, D W

    2006-01-01

    Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantify dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.

  12. A Monte Carlo based lookup table for spectrum analysis of turbid media in the reflectance probe regime

    SciTech Connect

    Xiang Wen; Xiewei Zhong; Tingting Yu; Dan Zhu

    2014-07-31

    Fibre-optic diffuse reflectance spectroscopy offers a method for characterising phantoms of biotissue with specified optical properties. For a commercial reflectance probe (six source fibres surrounding a central collection fibre with an inter-fibre spacing of 480 μm; R400-7, Ocean Optics, USA) we have constructed a Monte Carlo based lookup table to create a function called getR(μ_a, μ′_s), where μ_a is the absorption coefficient and μ′_s is the reduced scattering coefficient. Experimental measurements of reflectance from homogeneous calibrated phantoms with given optical properties are compared with the predicted reflectance from the lookup table. The deviation between experiment and prediction is on average 12.1%.
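
    The lookup-table construction reduces to tabulating reflectance over a (μ_a, μ′_s) grid and interpolating between grid points. In the paper the grid entries come from Monte Carlo runs of the probe geometry; the Python sketch below (requiring SciPy) fills the table with an analytic placeholder so only the interpolation machinery is shown:

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      mu_a_grid = np.linspace(0.01, 1.0, 30)        # absorption coefficients, 1/mm
      mu_s_grid = np.linspace(0.5, 5.0, 30)         # reduced scattering, 1/mm
      A, S = np.meshgrid(mu_a_grid, mu_s_grid, indexing="ij")
      # placeholder reflectance surface -- in the paper each entry is a MC result
      table = S / (A + S) * np.exp(-2.0 * A)

      _interp = RegularGridInterpolator((mu_a_grid, mu_s_grid), table)

      def getR(mu_a, mu_s_prime):
          """Reflectance predicted by interpolating the lookup table."""
          return _interp([[mu_a, mu_s_prime]]).item()

      print(getR(0.1, 2.0))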

  13. Evaluating amikacin dosage regimens in intensive care unit patients: a pharmacokinetic/pharmacodynamic analysis using Monte Carlo simulation.

    PubMed

    Zazo, Hinojal; Martín-Suárez, Ana; Lanao, José M

    2013-08-01

    The objectives of this study were to conduct a comparative pharmacokinetic/pharmacodynamic (PK/PD) evaluation using Monte Carlo simulation of conventional versus high-dose extended-interval dosage (HDED) regimens of amikacin (AMK) in intensive care unit (ICU) patients for an Acinetobacter baumannii infection model. The simulation was performed in five populations (a control population and four subpopulations of ICU patients). Using a specific AMK PK/PD model and Monte Carlo simulation, the following were generated: simulated AMK steady-state plasma level curves; PK/PD efficacy indexes [time during which the serum drug concentration remains above the minimum inhibitory concentration (MIC) for a dosing period (%T>MIC) and ratio of peak serum concentration to MIC (Cmax/MIC)]; evolution of bacterial growth curves; and adaptive resistance to treatment. A higher probability of bacterial resistance was observed with the HDED regimen compared with the conventional dosage regimen. A statistically significant increase in Cmax/MIC and a statistically significant reduction in %T>MIC with the HDED regimen were obtained. A multiple linear relationship between CFU values at 24h with Cmax/MIC and %T>MIC was obtained. In conclusion, with the infection model tested, the likelihood of resistance to treatment may be higher against pathogens with a high MIC with the HDED regimen, considering that in many ICU patients the %T>MIC may be limited. If a sufficient value of %T>MIC (≥60%) is not reached, even though the Cmax/MIC is high, the therapeutic efficacy of the treatment may not be guaranteed. This study indicates that different AMK dosing strategies could directly influence the efficacy results in ICU patients.
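
    The Monte Carlo step of such a PK/PD evaluation can be sketched as follows: sample patient clearances from a population distribution, build steady-state concentration curves, and score Cmax/MIC and %T>MIC over a dosing interval. All parameter values in the Python sketch below are invented placeholders, not the population model fitted in the study:

      import numpy as np

      rng = np.random.default_rng(6)
      dose, tau, V, MIC = 1500.0, 24.0, 18.0, 8.0  # mg, h, L, mg/L (assumed)
      t = np.linspace(0.0, tau, 481)               # time grid over one interval

      CL = rng.lognormal(mean=np.log(4.0), sigma=0.3, size=5000)  # clearances, L/h
      k = CL / V                                   # elimination rate constants
      # steady-state IV-bolus concentration curve per simulated patient (rows)
      C = (dose / V) * np.exp(-np.outer(k, t)) / (1 - np.exp(-k * tau))[:, None]

      cmax_mic = C[:, 0] / MIC                     # peak is at t = 0 for a bolus
      t_above = 100.0 * (C > MIC).mean(axis=1)     # %T>MIC over the interval
      print("P(Cmax/MIC >= 8) :", (cmax_mic >= 8).mean())
      print("P(%T>MIC >= 60%) :", (t_above >= 60).mean())

    Run with these placeholder numbers, the sketch reproduces the qualitative point of the abstract: a high Cmax/MIC target can be met while %T>MIC stays limited.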

  14. Multilevel sequential Monte Carlo samplers

    SciTech Connect

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations with respect to probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort needed to estimate expectations for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > … > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
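
    The telescoping identity E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}] is the heart of plain MLMC (the SMC correction of the paper is not shown). A minimal Python sketch, using an Euler time-stepper with step h_l = 2^−l as a stand-in for the discretised PDE and an ad hoc rather than optimal sample allocation:

      import numpy as np

      rng = np.random.default_rng(7)

      def P(theta, level):              # discretised functional at level l
          n = 2 ** level                # Euler steps, h_l = 2**-level
          x = 1.0
          for _ in range(n):            # integrate dx/dt = -theta*x, return x(1)
              x -= theta * x / n
          return x

      L = 6
      N = [40000, 20000, 10000, 5000, 2500, 1250, 625]   # ad hoc allocation
      est = 0.0
      for l in range(L + 1):
          theta = rng.normal(1.0, 0.2, size=N[l])        # random coefficient
          fine = np.array([P(t, l) for t in theta])
          if l == 0:
              est += fine.mean()                         # E[P_0]
          else:                                          # coupled correction term
              coarse = np.array([P(t, l - 1) for t in theta])
              est += (fine - coarse).mean()              # E[P_l - P_{l-1}]
      print("MLMC estimate:", est, "(exact E[exp(-theta)] ~ 0.3753)")

    The key design point is the coupling: each correction term evaluates the fine and coarse discretisations on the same random draw, so the correction variance shrinks with level and most samples can be spent on the cheap coarse levels.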

  16. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Refined analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed.

  17. Suitable Candidates for Monte Carlo Solutions.

    ERIC Educational Resources Information Center

    Lewis, Jerome L.

    1998-01-01

    Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)

  18. A Classroom Note on Monte Carlo Integration.

    ERIC Educational Resources Information Center

    Kolpas, Sid

    1998-01-01

    The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
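
    A Python analogue of such a classroom program (the note itself used Quick BASIC) estimates the integral of f over [a, b] from the average of f at uniform random points:

      import math
      import random

      def mc_integrate(f, a, b, n=100000):
          # average of f at n uniform random points, scaled by the interval length
          total = sum(f(random.uniform(a, b)) for _ in range(n))
          return (b - a) * total / n

      print(mc_integrate(math.sin, 0.0, math.pi))   # exact value is 2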

  19. Applications of Monte Carlo Methods in Calculus.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)

  20. Development of Monte Carlo Capability for Orion Parachute Simulations

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. The Monte Carlo tool described in
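
    The dispersion mechanism described above is straightforward to sketch: each input gets an analyst-defined rule (distribution plus parameters), the nominal input set is perturbed accordingly, and the core simulation is run repeatedly with the results stored for analysis. The rule set and the toy descent-rate "simulation" in the Python sketch below are invented stand-ins, not NASA's tools:

      import numpy as np

      rng = np.random.default_rng(8)
      nominal = {"mass": 9000.0, "drag_area": 1100.0, "air_density": 1.225}
      rules = {                                      # analyst-defined dispersions
          "mass":        ("normal",  0.0, 150.0),    # additive, kg
          "drag_area":   ("uniform", -50.0, 50.0),   # additive, m^2
          "air_density": ("normal",  0.0, 0.02),     # additive, kg/m^3
      }

      def disperse(nominal, rules):
          case = dict(nominal)
          for key, (kind, a, b) in rules.items():
              delta = rng.normal(a, b) if kind == "normal" else rng.uniform(a, b)
              case[key] += delta
          return case

      def core_sim(p):                               # toy steady descent rate, m/s
          return np.sqrt(2 * p["mass"] * 9.81 / (p["air_density"] * p["drag_area"]))

      results = [core_sim(disperse(nominal, rules)) for _ in range(2000)]
      print("descent rate: mean %.2f m/s, 99th pct %.2f m/s"
            % (np.mean(results), np.percentile(results, 99)))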

  1. Monte Carlo verification of polymer gel dosimetry applied to radionuclide therapy: a phantom study

    NASA Astrophysics Data System (ADS)

    Gear, J. I.; Charles-Edwards, E.; Partridge, M.; Flux, G. D.

    2011-11-01

    This study evaluates the dosimetric performance of the polymer gel dosimeter 'Methacrylic and Ascorbic acid in Gelatin, initiated by Copper' and its suitability for quality assurance and analysis of I-131-targeted radionuclide therapy dosimetry. Four batches of gel were manufactured in-house and sets of calibration vials and phantoms were created containing different concentrations of I-131-doped gel. Multiple dose measurements were made up to 700 h post preparation and compared to equivalent Monte Carlo simulations. In addition to uniformly filled phantoms the cross-dose distribution from a hot insert to a surrounding phantom was measured. In this example comparisons were made with both Monte Carlo and a clinical scintigraphic dosimetry method. Dose-response curves generated from the calibration data followed a sigmoid function. The gels appeared to be stable over many weeks of internal irradiation with a delay in gel response observed at 29 h post preparation. This was attributed to chemical inhibitors and slow reaction rates of long-chain radical species. For this reason, phantom measurements were only made after 190 h of irradiation. For uniformly filled phantoms of I-131 the accuracy of dose measurements agreed to within 10% when compared to Monte Carlo simulations. A radial cross-dose distribution measured using the gel dosimeter compared well to that calculated with Monte Carlo. Small inhomogeneities were observed in the dosimeter attributed to non-uniform mixing of monomer during preparation. However, they were not detrimental to this study where the quantitative accuracy and spatial resolution of polymer gel dosimetry were far superior to that calculated using scintigraphy. The difference between Monte Carlo and gel measurements was of the order of a few cGy, whilst with the scintigraphic method differences of up to 8 Gy were observed. A manipulation technique is also presented which allows 3D scintigraphic dosimetry measurements to be compared to polymer

  3. Monte Carlo docking with ubiquitin.

    PubMed Central

    Cummings, M. D.; Hart, T. N.; Read, R. J.

    1995-01-01

    The development of general strategies for the performance of docking simulations is prerequisite to the exploitation of this powerful computational method. Comprehensive strategies can only be derived from docking experiences with a diverse array of biological systems, and we have chosen the ubiquitin/diubiquitin system as a learning tool for this process. Using our multiple-start Monte Carlo docking method, we have reconstructed the known structure of diubiquitin from its two halves as well as from two copies of the uncomplexed monomer. For both of these cases, our relatively simple potential function ranked the correct solution among the lowest energy configurations. In the experiments involving the ubiquitin monomer, various structural modifications were made to compensate for the lack of flexibility and for the lack of a covalent bond in the modeled interaction. Potentially flexible regions could be identified using available biochemical and structural information. A systematic conformational search ruled out the possibility that the required covalent bond could be formed in one family of low-energy configurations, which was distant from the observed dimer configuration. A variety of analyses was performed on the low-energy dockings obtained in the experiment involving structurally modified ubiquitin. Characterization of the size and chemical nature of the interface surfaces was a powerful adjunct to our potential function, enabling us to distinguish more accurately between correct and incorrect dockings. Calculations with the structure of tetraubiquitin indicated that the dimer configuration in this molecule is much less favorable than that observed in the diubiquitin structure, for a simple monomer-monomer pair. Based on the analysis of our results, we draw conclusions regarding some of the approximations involved in our simulations, the use of diverse chemical and biochemical information in experimental design and the analysis of docking results, as well as
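
    The multiple-start strategy described above can be sketched generically: launch many Metropolis random walks from random initial placements and rank the final configurations by energy. The toy two-dimensional energy surface below merely stands in for a docking potential; none of it comes from the paper.

        # Generic multiple-start Metropolis search: many random starting
        # placements, each relaxed by a Metropolis walk, final energies
        # ranked. The toy 2-D energy surface stands in for a real docking
        # potential.
        import numpy as np

        rng = np.random.default_rng(0)

        def energy(x):                        # toy multi-minimum surface
            return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * x @ x

        def metropolis(x0, n_steps=2000, step=0.3, T=0.5):
            x, e = x0.copy(), energy(x0)
            for _ in range(n_steps):
                trial = x + rng.normal(scale=step, size=2)
                e_t = energy(trial)
                if e_t < e or rng.random() < np.exp(-(e_t - e) / T):
                    x, e = trial, e_t
            return x, e

        # Many independent starts; keep the best few as candidate dockings.
        runs = [metropolis(rng.uniform(-3, 3, size=2)) for _ in range(50)]
        runs.sort(key=lambda r: r[1])
        for x, e in runs[:5]:
            print(f"E = {e:7.3f} at x = ({x[0]:+.2f}, {x[1]:+.2f})")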

  4. Monte Carlo Simulation of Plumes Spectral Emission

    DTIC Science & Technology

    2005-06-07

    Henyey-Greenstein scattering indicatrix SUBROUTINE Calculation of spectral (group) phase function of Monte Carlo Simulation of Plumes ... calculations; b) Computing code SRT-RTMC-NSM intended for narrow band Spectral Radiation Transfer Ray Tracing Simulation by the Monte Carlo method with ... project) Computing codes for random (Monte Carlo) simulation of molecular lines with reference to a problem of radiation transfer

  5. Monte Carlo simulation of intercalated carbon nanotubes.

    PubMed

    Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter

    2007-01-01

    Monte Carlo simulations of single- and double-walled carbon nanotubes (CNT) intercalated with different metals have been carried out. The interrelation between the length of a CNT and the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of a specific type. Thus, in the case of a metallic CNT (9,0) with a length of 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight. Thus, this paper shows that a CNT of a certain size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the larger ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms.

  6. Integrated Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f{sub NL}) in the recent CMB data

    SciTech Connect

    Kim, Jaiseung

    2011-04-01

    We have made a Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f{sub NL}) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained f{sub NL} and the cosmological parameters so that the uncertainties of the cosmological parameters can properly propagate into the f{sub NL} estimation. Investigating the parameter likelihoods deduced from the MCMC samples, we find a slight deviation from a Gaussian shape, which makes a Fisher matrix estimation less accurate. Therefore, we have estimated the confidence interval of f{sub NL} by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis are in good agreement with other results, but the confidence interval is slightly different.
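
    The point about avoiding the Fisher-matrix (Gaussian) approximation can be illustrated with a toy Metropolis sampler: the interval is read off directly from sample quantiles of a deliberately skewed, invented likelihood. Nothing below uses WMAP data or the paper's pipeline.

        # Toy Metropolis-Hastings sampler: the credible interval is read off
        # sample quantiles rather than a Fisher-matrix (Gaussian) error.
        # The skewed log-likelihood and all numbers are invented.
        import numpy as np

        rng = np.random.default_rng(1)

        def log_like(f_nl):
            z = (f_nl - 30.0) / 20.0          # centred near 30, width 20 (toy)
            return -0.5 * z**2 - 0.1 * z**3 if abs(z) < 3.0 else -np.inf

        n_steps, burn = 100_000, 5_000
        chain = np.empty(n_steps)
        x, lp = 30.0, log_like(30.0)
        for i in range(n_steps):
            prop = x + rng.normal(scale=10.0)  # symmetric random-walk proposal
            lp_prop = log_like(prop)
            if np.log(rng.random()) < lp_prop - lp:
                x, lp = prop, lp_prop
            chain[i] = x

        lo, mid, hi = np.percentile(chain[burn:], [2.5, 50.0, 97.5])
        print(f"f_NL = {mid:.1f}, 95% interval = ({lo:.1f}, {hi:.1f})")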

  7. KULLBACK-LEIBLER MARKOV CHAIN MONTE CARLO — A NEW ALGORITHM FOR FINITE MIXTURE ANALYSIS AND ITS APPLICATION TO GENE EXPRESSION DATA

    PubMed Central

    TATARINOVA, TATIANA; BOUCK, JOHN; SCHUMITZKY, ALAN

    2009-01-01

    In this paper, we study Bayesian analysis of nonlinear hierarchical mixture models with a finite but unknown number of components. Our approach is based on Markov chain Monte Carlo (MCMC) methods. One of the applications of our method is directed to the clustering problem in gene expression analysis. From a mathematical and statistical point of view, we discuss the following topics: theoretical and practical convergence problems of the MCMC method; determination of the number of components in the mixture; and computational problems associated with likelihood calculations. In the existing literature, these problems have mainly been addressed in the linear case. One of the main contributions of this paper is developing a method for the nonlinear case. Our approach is based on a combination of methods including Gibbs sampling, random permutation sampling, birth-death MCMC, and Kullback-Leibler distance. PMID:18763739

  8. Kullback-Leibler Markov chain Monte Carlo--a new algorithm for finite mixture analysis and its application to gene expression data.

    PubMed

    Tatarinova, Tatiana; Bouck, John; Schumitzky, Alan

    2008-08-01

    In this paper, we study Bayesian analysis of nonlinear hierarchical mixture models with a finite but unknown number of components. Our approach is based on Markov chain Monte Carlo (MCMC) methods. One of the applications of our method is directed to the clustering problem in gene expression analysis. From a mathematical and statistical point of view, we discuss the following topics: theoretical and practical convergence problems of the MCMC method; determination of the number of components in the mixture; and computational problems associated with likelihood calculations. In the existing literature, these problems have mainly been addressed in the linear case. One of the main contributions of this paper is developing a method for the nonlinear case. Our approach is based on a combination of methods including Gibbs sampling, random permutation sampling, birth-death MCMC, and Kullback-Leibler distance.

  9. Markov Chain Monte Carlo approaches to analysis of genetic and environmental components of human developmental change and G x E interaction.

    PubMed

    Eaves, Lindon; Erkanli, Alaattin

    2003-05-01

    The linear structural model has provided the statistical backbone of the analysis of twin and family data for 25 years. A new generation of questions cannot easily be forced into the framework of current approaches to modeling and data analysis because they involve nonlinear processes. Maximizing the likelihood with respect to parameters of such nonlinear models is often cumbersome and does not yield easily to current numerical methods. The application of Markov Chain Monte Carlo (MCMC) methods to modeling the nonlinear effects of genes and environment in MZ and DZ twins is outlined. Nonlinear developmental change and genotype x environment interaction in the presence of genotype-environment correlation are explored in simulated twin data. The MCMC method recovers the simulated parameters and provides estimates of error and latent (missing) trait values. Possible limitations of MCMC methods are discussed. Further studies are necessary explore the value of an approach that could extend the horizons of research in developmental genetic epidemiology.

  10. A joint Monte Carlo analysis of seafloor compliance, Rayleigh wave dispersion and receiver functions at ocean bottom seismic stations offshore New Zealand

    NASA Astrophysics Data System (ADS)

    Ball, Justin S.; Sheehan, Anne F.; Stachnik, Joshua C.; Lin, Fan-Chi; Collins, John A.

    2014-12-01

    Body-wave imaging techniques such as receiver function analysis can be notoriously difficult to employ on ocean-bottom seismic data, due largely to multiple reverberations within the water and low-velocity sediments. In lieu of suppressing this coherently scattered noise in ocean-bottom receiver functions, these site effects can be modeled in conjunction with shear velocity information from seafloor compliance and surface wave dispersion measurements to discern crustal structure. A novel technique to estimate 1-D crustal shear-velocity profiles from these data using Monte Carlo sampling is presented here. We find that seafloor compliance inversions and P-S conversions observed in the receiver functions provide complementary constraints on sediment velocity and thickness. Incoherent noise in receiver functions from the MOANA ocean bottom seismic experiment limits the accuracy of the practical analysis at crustal scales, but synthetic recovery tests and comparison with independent unconstrained nonlinear optimization results affirm the utility of this technique in principle.

  11. A kinetic Monte Carlo approach for the analysis of trapping effect on the defect accumulation in neutron-irradiated Fe

    NASA Astrophysics Data System (ADS)

    Lee, Gyeong-Geun; Kwon, Junhyun; Kim, Duk Su

    2009-09-01

    The trapping effect of self-interstitial atom (SIA) clusters in neutron-irradiated Fe was analyzed in terms of generic traps. The effect of the cut-off size between sessile and glissile SIA clusters was investigated. The accumulation of SIA clusters decreased drastically as the cut-off size increased, which originated from the elimination of the SIA clusters at grain boundaries through their one-dimensional motion. When immobile generic traps were introduced into the kinetic Monte Carlo simulation model, the effect of the trap parameters was assessed. An increase in the binding energy between the trap and SIA species resulted in a decrease in the number of mono-SIAs dissociated from the trap and a corresponding delay in the appearance of visible SIA clusters. A size-dependent prefactor for the dissociation rate of trapped SIA clusters was necessary for a realistic accumulation behavior of SIA clusters. The trap density affects the density and size of the accumulated SIA clusters during irradiation. This parameterization of generic traps provided insight into the mechanism of accumulation of SIAs and SIA clusters.
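
    For readers unfamiliar with the method, a minimal residence-time (BKL-style) kinetic Monte Carlo loop is sketched below. The event catalogue and energies are invented placeholders, not the parameters of the study.

        # Minimal residence-time (BKL-style) kinetic Monte Carlo loop. The
        # event catalogue and energies are invented placeholders; a real
        # model derives rates from migration/binding energies.
        import numpy as np

        rng = np.random.default_rng(2)

        kB, T, nu0 = 8.617e-5, 600.0, 1e13     # eV/K, K, attempt rate 1/s

        def arrhenius(E_eV):
            return nu0 * np.exp(-E_eV / (kB * T))

        # Hypothetical events: SIA hop, capture at a trap, trap dissociation.
        names = ["hop", "trap", "detrap"]
        rates = np.array([arrhenius(0.3), arrhenius(0.5), arrhenius(1.0)])
        total = rates.sum()

        t, counts = 0.0, dict.fromkeys(names, 0)
        for _ in range(10_000):
            # Pick an event with probability proportional to its rate ...
            i = rng.choice(len(names), p=rates / total)
            counts[names[i]] += 1
            # ... and advance time by an exponential residence time.
            t += rng.exponential(1.0 / total)

        print(f"simulated time = {t:.3e} s, event counts = {counts}")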

  12. Terminating observation within matched pairs of subjects in a matched cohort analysis: a Monte Carlo simulation study.

    PubMed

    Sutradhar, Rinku; Baxter, Nancy N; Austin, Peter C

    2016-01-30

    Matched cohort analyses are becoming increasingly popular for estimating treatment effects in observational studies. However, in the applied biomedical literature, analysts and authors are inconsistent regarding whether to terminate follow-up among members of a matched set once one member is no longer under observation. This paper focused on time-to-event outcomes and used Monte Carlo simulation methods to determine the optimal approach. We found that the bias of the estimated treatment effect was negligible under both approaches and that the percentage of censoring had no discernible effect on the magnitude of bias. The mean model-based standard error of the treatment estimate was consistently higher when we terminated observation within matched pairs. Furthermore, the type 1 error rate was consistently lower when we did not terminate follow-up within matched pairs. In conclusion, when the focus was on time-to-event outcomes, we demonstrated that there was no advantage to terminating follow-up within matched pairs. Continuing follow-up on each subject until their observation was naturally complete was superior to terminating a subject's observation time once its matched pair had ceased to be under observation. Given the frequency with which these analyses are conducted in the applied literature, our results provide important guidance to analysts and applied researchers as to the preferred analytic approach.

  13. A Monte Carlo Analysis of Weight Data from UF6 Cylinder Feed and Withdrawal Stations

    SciTech Connect

    Garner, James R; Whitaker, J Michael

    2015-01-01

    As nuclear facilities handling uranium hexafluoride (UF6) cylinders (e.g., UF6 production, enrichment, and fuel fabrication) increase in number and throughput, more automated safeguards measures will likely be needed to enable the International Atomic Energy Agency (IAEA) to achieve its safeguards objectives in a fiscally constrained environment. Monitoring the process data from the load cells built into the cylinder feed and withdrawal (F/W) stations (i.e., cylinder weight data) can significantly increase the IAEA’s ability to efficiently achieve the fundamental safeguards task of confirming operations as declared (i.e., no undeclared activities). Researchers at Oak Ridge National Laboratory, Los Alamos National Laboratory, the Joint Research Center (in Ispra, Italy), and the University of Glasgow are investigating how these weight data can be used for IAEA safeguards purposes while fully protecting the operator’s proprietary and sensitive information related to operations. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations to achieve safeguards objectives. This paper summarizes Monte Carlo simulations of typical feed, product, and tails withdrawal cycles and evaluates longer sampling frequencies to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.
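
    The sampling-frequency question can be mocked up in a few lines: simulate a noisy load-cell record of a hypothetical fill cycle, subsample it at increasing intervals, and look at the error in the apparent mass transfer. Cycle shape, noise level, and masses below are all assumptions, not facility data.

        # Mock-up of the sampling-frequency question: subsample a noisy
        # load-cell record of one hypothetical 48-hour fill cycle and compare
        # the apparent net mass transfer. All numbers are assumptions.
        import numpy as np

        rng = np.random.default_rng(3)

        t = np.arange(0.0, 48 * 3600.0, 1.0)          # one reading per second
        fill = np.clip(t / (40 * 3600.0), 0.0, 1.0)   # linear fill over 40 h
        mass = 1000.0 + 2500.0 * fill                 # kg: tare + 2.5 t of UF6
        reading = mass + rng.normal(scale=2.0, size=t.size)  # 2 kg noise

        true_transfer = mass[-1] - mass[0]
        for period_h in (0.1, 1.0, 4.0, 12.0, 24.0):
            step = int(period_h * 3600)
            sub = reading[::step]                     # low-frequency record
            err = (sub[-1] - sub[0]) - true_transfer
            print(f"sample every {period_h:5.1f} h -> error = {err:+8.2f} kg")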

  14. Analysis of light incident location and detector position in early diagnosis of knee osteoarthritis by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Chen, Yanping; Chen, Yisha; Yan, Huangping; Wang, Xiaoling

    2017-01-01

    Early detection of knee osteoarthritis (KOA) is meaningful to delay or prevent the onset of osteoarthritis. In consideration of the structural complexity of the knee joint, the locations of light incidence and detection are extremely important in optical inspection. In this paper, the propagation of 780-nm near-infrared photons in a three-dimensional knee joint model is simulated by the Monte Carlo (MC) method. Six light incident locations are chosen in total to analyze the influence of incident and detecting location on the number of detected signal photons and the signal-to-noise ratio (SNR). Firstly, a three-dimensional photon propagation model of the knee joint is reconstructed based on CT images. Then, MC simulation is performed to study the propagation of photons in the three-dimensional knee joint model. Photons which finally migrate out of the knee joint surface are numerically analyzed. By analyzing the number of signal photons and the SNR for the six given incident locations, the optimal incident and detecting location is defined. Finally, a series of phantom experiments is conducted to verify the simulation results. According to the simulation and phantom experiment results, the best incident location is near the right side of the meniscus at the rear end of the left knee joint, and the detector should correspondingly be placed near the patella.

  15. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  16. A Monte Carlo Sensitivity Analysis of CF2 and CF Radical Densities in a c-C4F8 Plasma

    NASA Technical Reports Server (NTRS)

    Bose, Deepak; Rauf, Shahid; Hash, D. B.; Govindan, T. R.; Meyyappan, M.

    2004-01-01

    A Monte Carlo sensitivity analysis is used to build a plasma chemistry model for octafluorocyclobutane (c-C4F8), which is commonly used in dielectric etch. Experimental data are used both qualitatively and quantitatively to analyze the gas-phase and gas-surface reactions for neutral radical chemistry. The sensitivity data of the resulting model identify a few critical gas-phase and surface-aided reactions that account for most of the uncertainty in the CF2 and CF radical densities. Electron impact dissociation of small radicals (CF2 and CF) and their surface recombination reactions are found to be the rate-limiting steps in the neutral radical chemistry. The relative rates for these electron impact dissociation and surface recombination reactions are also suggested. The resulting mechanism is able to explain the measurements of CF2 and CF densities available in the literature and also their hollow spatial density profiles.
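
    A Monte Carlo sensitivity analysis of this general kind can be sketched with a toy two-reaction balance: sample uncertain rate coefficients within assumed bounds and correlate them with the resulting radical density. The mechanism and ranges below are stand-ins, not the paper's c-C4F8 chemistry.

        # Toy Monte Carlo sensitivity analysis: sample two uncertain rate
        # coefficients and correlate them with a steady-state radical
        # density from the balance d[CF2]/dt = k_diss*ne*ng - k_wall*[CF2].
        import numpy as np

        rng = np.random.default_rng(4)
        n = 5000

        k_diss = 10 ** rng.uniform(-10.0, -9.0, n)  # cm^3/s, assumed decade
        k_wall = 10 ** rng.uniform(3.0, 4.0, n)     # 1/s, assumed decade
        ne, ng = 1e10, 1e14                         # cm^-3, assumed densities

        cf2 = k_diss * ne * ng / k_wall             # steady-state [CF2]

        for name, k in (("k_diss", k_diss), ("k_wall", k_wall)):
            r = np.corrcoef(np.log(k), np.log(cf2))[0, 1]
            print(f"{name}: correlation with log[CF2] = {r:+.2f}")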

  17. STS-1 operational flight profile. Volume 5: Descent cycle 3. Appendix D: GRTLS six degree of freedom Monte Carlo dispersion analysis

    NASA Technical Reports Server (NTRS)

    Montez, M. N.

    1980-01-01

    The results of a six degree of freedom (6-DOF) nonlinear Monte Carlo dispersion analysis for the latest glide return to landing site (GRTLS) abort trajectory for the Space Transportation System 1 Flight are presented. For this GRTLS, the number two main engine fails at 262.5 seconds ground elapsed time. Fifty randomly selected simulations, initialized at external tank separation, are analyzed. The initial covariance matrix is a 20 x 20 matrix and includes navigation errors and dispersions in position and velocity, time, accelerometer bias, and inertial platform misalignments. In all 50 samples, speedbrake, rudder, elevon, and body flap hinge moments are acceptable. Transitions to autoland begin before 9,000 feet and there are no tailscrapes. Navigation-derived dynamic pressure accuracies exceed the flight control system constraints above Mach 2.5. Three out of 50 landings exceeded the tire specification limit speed of 222 knots. Pilot manual landings are expected to reduce landing speed by landing farther downrange.
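
    The core sampling step of such a dispersion analysis, drawing correlated initial-state errors from a covariance matrix, can be sketched with a Cholesky factorization; the made-up 3 x 3 position-error covariance below stands in for the 20 x 20 matrix used in the study.

        # Drawing correlated initial-state errors from a covariance matrix
        # via its Cholesky factor; the 3x3 covariance is hypothetical.
        import numpy as np

        rng = np.random.default_rng(5)

        cov = np.array([[9.0, 2.0, 0.5],
                        [2.0, 4.0, -0.3],
                        [0.5, -0.3, 1.0]])            # m^2, hypothetical

        L = np.linalg.cholesky(cov)                   # cov = L @ L.T
        samples = rng.standard_normal((50, 3)) @ L.T  # 50 dispersed cases

        print("sample covariance of the 50 dispersed cases:")
        print(np.cov(samples, rowvar=False).round(2))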

  18. Evaluation of the Flow-Dialysis Technique for Analysis of Protein-Ligand Interactions: An Experimental and a Monte Carlo Study

    PubMed Central

    Veldhuis, Gertjan; Vos, Erwin P. P.; Broos, Jaap; Poolman, Bert; Scheek, Ruud M.

    2004-01-01

    Flow dialysis has found widespread use in determining the dissociation constant (KD) of a protein-ligand interaction or the amount of available binding sites (E0). This method has the potential to measure both parameters in a single experiment, and in this article a method to measure the KD and E0 simultaneously is presented, together with an extensive error analysis of the method. The flow-dialysis technique is experimentally simple to perform. However, a number of practical aspects of this method can have a large impact on the resulting KD and E0. We have investigated all sources of significant systematic and random errors, using the interaction between mannitol and its transporter from Escherichia coli as a model. Monte Carlo simulations were found to be an excellent tool to assess the impact of these errors on the binding parameters and to define the experimental conditions that allow their most accurate estimation. PMID:15041640
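
    The Monte Carlo error analysis can be sketched as follows: generate synthetic data from a one-site binding isotherm, perturb it with measurement noise, refit many times, and read the spread of the fitted KD and E0. The true values and the noise model below are assumptions, not the mannitol-transporter data.

        # Monte Carlo error analysis of a one-site binding fit: perturb
        # synthetic data with noise, refit many times, and read the scatter
        # of KD and E0.
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(6)

        def bound(L_free, KD, E0):
            return E0 * L_free / (KD + L_free)      # one-site isotherm

        L_free = np.logspace(-1, 2, 12)             # titration points (uM)
        KD_true, E0_true = 5.0, 2.0                 # assumed true values

        fits = []
        for _ in range(1000):
            noisy = bound(L_free, KD_true, E0_true) \
                    * (1.0 + rng.normal(0.0, 0.05, L_free.size))
            popt, _ = curve_fit(bound, L_free, noisy, p0=[1.0, 1.0])
            fits.append(popt)
        fits = np.array(fits)

        print(f"KD = {fits[:, 0].mean():.2f} +/- {fits[:, 0].std():.2f} uM")
        print(f"E0 = {fits[:, 1].mean():.2f} +/- {fits[:, 1].std():.2f} uM")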

  19. Monte Carlo simulations of kagome lattices with magnetic dipolar interactions

    NASA Astrophysics Data System (ADS)

    Plumer, Martin; Holden, Mark; Way, Andrew; Saika-Voivod, Ivan; Southern, Byron

    Monte Carlo simulations of classical spins on the two-dimensional kagome lattice with only dipolar interactions are presented. In addition to revealing the sixfold-degenerate ground state, the nature of the finite-temperature phase transition to long-range magnetic order is discussed. Low-temperature states consisting of mixtures of degenerate ground-state configurations separated by domain walls can be explained as a result of competing exchange-like and shape-anisotropy-like terms in the dipolar coupling. Fluctuations between pairs of degenerate spin configurations are found to persist well into the ordered state as the temperature is lowered until locking in to a low-energy state. Results suggest that the system undergoes a continuous phase transition at T ~ 0.43 in agreement with previous MC simulations but the nature of the ordering process differs. Preliminary results which extend this analysis to the 3D fcc ABC-stacked kagome systems will be presented.

  20. Three dimensional Monte-Carlo modeling of laser-tissue interaction

    SciTech Connect

    Gentile, N A; Kim, B M; London, R A; Trauner, K B

    1999-03-12

    A full three-dimensional Monte-Carlo program was developed for analysis of laser-tissue interactions. This work was performed as part of the LATIS3D (3-D Laser-Tissue Interaction) project. The accuracy was verified against results from a public-domain two-dimensional axisymmetric program. The code was used for simulation of light transport in a simplified human knee geometry. Using real human knee meshes, which will be extracted from MRI images in the near future, a full analysis of dosimetry and surgical strategies for photodynamic therapy of rheumatoid arthritis will follow.

  1. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which, by using Markov chains, provides samples of the posterior distribution of all parameters, so that, through Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE: for example, without further computations it provides the spectral index uncertainty, it is computationally stable, and it detects multimodality.

  2. Converging Stereotactic Radiotherapy Using Kilovoltage X-Rays: Experimental Irradiation of Normal Rabbit Lung and Dose-Volume Analysis With Monte Carlo Simulation

    SciTech Connect

    Kawase, Takatsugu; Kunieda, Etsuo; Deloar, Hossain M.; Tsunoo, Takanori; Seki, Satoshi; Oku, Yohei; Saitoh, Hidetoshi; Saito, Kimiaki; Ogawa, Eileen N.; Ishizaka, Akitoshi; Kameyama, Kaori; Kubo, Atsushi

    2009-10-01

    Purpose: To validate the feasibility of developing a radiotherapy unit with kilovoltage X-rays through actual irradiation of live rabbit lungs, and to explore the practical issues anticipated in future clinical application to humans through Monte Carlo dose simulation. Methods and Materials: A converging stereotactic irradiation unit was developed, consisting of a modified diagnostic computed tomography (CT) scanner. A tiny cylindrical volume in 13 normal rabbit lungs was individually irradiated with single fractional absorbed doses of 15, 30, 45, and 60 Gy. Observational CT scanning of the whole lung was performed every 2 weeks for 30 weeks after irradiation. After 30 weeks, histopathologic specimens of the lungs were examined. Dose distribution was simulated using the Monte Carlo method, and dose-volume histograms were calculated according to the data. A trial estimation of the effect of respiratory movement on dose distribution was made. Results: A localized hypodense change and subsequent reticular opacity around the planning target volume (PTV) were observed in CT images of rabbit lungs. Dose-volume histograms of the PTVs and organs at risk showed a focused dose distribution to the target and sufficient dose lowering in the organs at risk. Our estimate of the dose distribution, taking respiratory movement into account, revealed dose reduction in the PTV. Conclusions: A converging stereotactic irradiation unit using kilovoltage X-rays was able to generate a focused radiobiologic reaction in rabbit lungs. Dose-volume histogram analysis and estimated sagittal dose distribution, considering respiratory movement, clarified the characteristics of the irradiation received from this type of unit.

  3. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    SciTech Connect

    Brown, Forrest B.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  4. Uncertainty Analyses for Localized Tallies in Monte Carlo Eigenvalue Calculations

    SciTech Connect

    Mervin, Brenden T.; Maldonado, G Ivan; Mosher, Scott W; Wagner, John C

    2011-01-01

    It is well known that statistical estimates obtained from Monte Carlo criticality simulations can be adversely affected by cycle-to-cycle correlations in the fission source. In addition there are several other more fundamental issues that may lead to errors in Monte Carlo results. These factors can have a significant impact on the calculated eigenvalue, localized tally means and their associated standard deviations. In fact, modern Monte Carlo computational tools may generate standard deviation estimates that are a factor of five or more lower than the true standard deviation for a particular tally due to the inter-cycle correlations in the fission source. The magnitude of this under-prediction can climb as high as one hundred when combined with an ill-converged fission source or poor sampling techniques. Since Monte Carlo methods are widely used in reactor analysis (as a benchmarking tool) and criticality safety applications, an in-depth understanding of the effects of these issues must be developed in order to support the practical use of Monte Carlo software packages. A rigorous statistical analysis of localized tally results in eigenvalue calculations is presented using the SCALE/KENO-VI and MCNP Monte Carlo codes. The purpose of this analysis is to investigate the under-prediction in the uncertainty and its sensitivity to problem characteristics and calculational parameters, and to provide a comparative study between the two codes with respect to this under-prediction. It is shown herein that adequate source convergence along with proper specification of Monte Carlo parameters can reduce the magnitude of under-prediction in the uncertainty to reasonable levels; below a factor of 2 when inter-cycle correlations in the fission source are not a significant factor. In addition, through the use of a modified sampling procedure, the effects of inter-cycle correlations on both the mean value and standard deviation estimates can be isolated.
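
    The under-prediction mechanism is easy to demonstrate generically: for a correlated (AR(1)) sequence, the naive standard error of the mean, which assumes independent samples, is far smaller than a batch-means estimate that respects the correlation. The sketch below is illustrative only, not a KENO or MCNP calculation.

        # Why correlated cycles bias error bars low: for an AR(1) sequence
        # the naive standard error assumes independent samples, while a
        # batch-means estimate accounts for the correlation.
        import numpy as np

        rng = np.random.default_rng(7)

        rho, n = 0.9, 100_000
        x = np.empty(n)
        x[0] = rng.standard_normal()
        for i in range(1, n):                   # AR(1) with lag-1 corr. rho
            x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

        naive_se = x.std(ddof=1) / np.sqrt(n)
        batch_means = x.reshape(100, -1).mean(axis=1)   # 100 long batches
        batch_se = batch_means.std(ddof=1) / np.sqrt(100)

        # Theory: the true SE exceeds the naive one by
        # sqrt((1 + rho) / (1 - rho)) ~ 4.4 for rho = 0.9.
        print(f"naive SE = {naive_se:.5f}, batch-means SE = {batch_se:.5f}, "
              f"ratio = {batch_se / naive_se:.1f}")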

  5. Geometric Monte Carlo and black Janus geometries

    NASA Astrophysics Data System (ADS)

    Bak, Dongsu; Kim, Chanju; Kim, Kyung Kiu; Min, Hyunsoo; Song, Jeong-Pil

    2017-04-01

    We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three- and five-dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite-temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.

  6. Semistochastic Projector Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Petruzielo, F. R.; Holmes, A. A.; Changlani, Hitesh J.; Nightingale, M. P.; Umrigar, C. J.

    2012-12-01

    We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer.
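
    A rough sketch of the semistochastic idea, under the simplifying assumption that the "deterministic space" is a fixed block of coordinates: the matrix acts exactly on that block, while the remaining columns are applied through unbiased random sampling of their entries. This is an illustration of the general principle, not the authors' algorithm.

        # Rough sketch of a semistochastic power iteration. Matrix, block
        # size, and sample counts are arbitrary choices for illustration.
        import numpy as np

        rng = np.random.default_rng(8)

        n, nd, n_samp = 200, 20, 40
        B = rng.normal(size=(n, n)) / np.sqrt(n)
        A = 0.2 * (B + B.T) / 2.0 + np.diag(
            np.concatenate(([3.0], np.linspace(1.0, 0.0, n - 1))))

        v = rng.normal(size=n)
        v /= np.linalg.norm(v)
        for _ in range(100):
            w = A[:, :nd] @ v[:nd]              # exact part of A @ v
            for j in range(nd, n):              # stochastic part, column-wise
                col = A[:, j]
                p = np.abs(col) / np.abs(col).sum()
                rows = rng.choice(n, size=n_samp, p=p)
                np.add.at(w, rows, col[rows] * v[j] / (n_samp * p[rows]))
            v = w / np.linalg.norm(w)

        print(f"semistochastic estimate: {v @ A @ v:.3f}")
        print(f"exact dominant eigenvalue: {np.linalg.eigvalsh(A)[-1]:.3f}")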

  7. Benchmarking of Proton Transport in Super Monte Carlo Simulation Program

    NASA Astrophysics Data System (ADS)

    Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican

    2014-06-01

    The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy range can be well handled. Bi-directional automatic conversion between general CAD models and full-formed input files of SuperMC is supported by MCAM, which is a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D datasets and geometry models is supported by RVIS, which is a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronic fixed-source and criticality design-parameter calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated into SuperMC for the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat the intermediate energy nuclear

  8. Geometrical Monte Carlo simulation of atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Yuksel, Demet; Yuksel, Heba

    2013-09-01

    Atmospheric turbulence has a significant impact on the quality of a laser beam propagating through the atmosphere over long distances. Turbulence causes intensity scintillation and beam wander from propagation through turbulent eddies of varying sizes and refractive index. This can severely impair the operation of target designation and Free-Space Optical (FSO) communications systems. In addition, experimenting on an FSO communication system is rather tedious and difficult: interference from numerous elements affects the results and causes the experimental outcomes to have larger error variance margins than expected. Especially in stronger turbulence regimes, the simulation and analysis of turbulence-induced beams require delicate attention. We propose a new geometrical model to assess the phase shift of a laser beam propagating through turbulence. The atmosphere along the laser beam propagation path will be modeled as a spatial distribution of spherical bubbles with refractive index discontinuities calculated from a Gaussian distribution with the mean value being the index of air. For each statistical representation of the atmosphere, the path of rays will be analyzed using geometrical optics. These Monte Carlo techniques will assess the phase shift as a summation of the phases that arrive at the same point at the receiver. Accordingly, there would be dark and bright spots at the receiver that give an idea regarding the intensity pattern without having to solve the wave equation. The Monte Carlo analysis will be compared with the predictions of wave theory.

  9. CosmoMC: Cosmological MonteCarlo

    NASA Astrophysics Data System (ADS)

    Lewis, Antony; Bridle, Sarah

    2011-06-01

    We present a fast Markov Chain Monte-Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.

  10. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1, E2).
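
    A minimal flat-histogram run in the same spirit (a SAMC-style decaying gain rather than the full algorithm) can be written for a 1-D Ising chain, whose exact density of states g(k) = 2 C(N-1, k) over k broken bonds makes the estimate easy to check. Chain length and gain schedule are illustrative choices.

        # SAMC-style flat-histogram sketch for a 1-D Ising chain: build up
        # ln g(E) with a decaying update gain.
        import numpy as np
        from scipy.special import comb

        rng = np.random.default_rng(9)

        N, t0 = 12, 10_000
        spins = rng.choice([-1, 1], size=N)

        def n_broken(s):                         # energy bin: broken bonds
            return int(np.sum(s[:-1] != s[1:]))

        lng = np.zeros(N)                        # ln g(E), E = 0 .. N-1
        E = n_broken(spins)
        for t in range(1, 1_000_000):
            i = rng.integers(N)
            spins[i] *= -1                       # propose one spin flip
            E_new = n_broken(spins)
            if np.log(rng.random()) < lng[E] - lng[E_new]:
                E = E_new                        # accept: flattens histogram
            else:
                spins[i] *= -1                   # reject: undo the flip
            lng[E] += t0 / max(t0, t)            # decaying SAMC-style gain

        lng += np.log(2.0) - lng[0]              # normalise so g(0) = 2
        for k in range(4):
            print(f"E={k}: estimate {lng[k]:.2f}, "
                  f"exact {np.log(2 * comb(N - 1, k)):.2f}")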

  11. Single scatter electron Monte Carlo

    SciTech Connect

    Svatos, M.M.

    1997-03-01

    A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.

  12. Four decades of implicit Monte Carlo

    SciTech Connect

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Finally, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  13. Four decades of implicit Monte Carlo

    DOE PAGES

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Finally, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  14. Development of a Space Radiation Monte Carlo Computer Simulation

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence S.

    1997-01-01

    The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.

  15. Neutron Fluence, Dosimetry and Damage Response Determination in In-Core/Ex-Core Components of the VENUS CEN/SCK LWR Using 3-D Monte Carlo Simulations: NEA's VENUS-3 Benchmark

    SciTech Connect

    Perlado, J. Manuel; Marian, Jaime; Sanz, Jesus Garcia

    2000-03-15

    Validating state-of-the-art methods used to predict fluence exposure to reactor pressure vessels (RPVs) has become an important issue in identifying the sources of uncertainty in the estimated RPV fluence for pressurized water reactors. This is a very important aspect in evaluating irradiation damage leading to the hardening and embrittlement of such structural components. One of the major benchmark experiments carried out to test three-dimensional methodologies is the VENUS-3 Benchmark Experiment, in which three-dimensional Monte Carlo and S{sub n} codes have proved more efficient than synthesis methods. At the Instituto de Fusion Nuclear (DENIM) at the Universidad Politecnica de Madrid, a detailed full three-dimensional model of the Venus Critical Facility has been developed making use of the Monte Carlo transport code MCNP4B. The problem geometry and source modeling are described, and results, including calculated versus experimental (C/E) ratios as well as additional studies, are presented. Evidence was found that the great majority of C/E values fell within the 10% tolerance and most within 5%. Tolerance limits are discussed on the basis of evaluated data library and fission spectra sensitivity, where a value ranging between 10 and 15% should be accepted. Also, a calculation of the atomic displacement rate has been carried out at various locations throughout the reactor, finding that values of 0.0001 displacements per atom in external components such as the core barrel are representative of this type of reactor during a 30-yr time span.

  16. A Monte Carlo investigation of the Hamiltonian mean field model

    NASA Astrophysics Data System (ADS)

    Pluchino, Alessandro; Andronico, Giuseppe; Rapisarda, Andrea

    2005-04-01

    We present a Monte Carlo numerical investigation of the Hamiltonian mean field (HMF) model. We begin by discussing canonical Metropolis Monte Carlo calculations, in order to check the caloric curve of the HMF model and study finite-size effects. In the second part of the paper, we present numerical simulations obtained by means of a modified Monte Carlo procedure with the aim of testing the stability of those states at minimum temperature and zero magnetization (homogeneous quasi-stationary states) which exist in the condensed phase of the model just below the critical point. For energy densities smaller than the limiting value U ∼ 0.68, we find that these states are unstable, confirming a recent result on the Vlasov stability analysis applied to the HMF model.
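
    Since the HMF potential energy depends on the configuration only through the magnetisation, U = (N/2)(1 - m^2), a canonical Metropolis sweep can be written with O(1) energy updates per spin. The sketch below (system size, temperature, and step width are arbitrary choices) shows the ordered phase at T below the critical temperature T_c = 0.5.

        # Canonical Metropolis run for the HMF model, exploiting the fact
        # that U = (N/2) * (1 - m^2) depends only on (mx, my).
        import numpy as np

        rng = np.random.default_rng(10)

        N, T, step = 200, 0.4, 0.5              # T below T_c = 0.5
        theta = rng.uniform(0.0, 2.0 * np.pi, N)
        mx, my = np.cos(theta).sum(), np.sin(theta).sum()

        def potential(mx, my):
            return 0.5 * N * (1.0 - (mx**2 + my**2) / N**2)

        for sweep in range(2000):
            for _ in range(N):
                i = rng.integers(N)
                new = theta[i] + rng.normal(scale=step)
                dmx = np.cos(new) - np.cos(theta[i])
                dmy = np.sin(new) - np.sin(theta[i])
                dU = potential(mx + dmx, my + dmy) - potential(mx, my)
                if np.log(rng.random()) < -dU / T:
                    theta[i], mx, my = new, mx + dmx, my + dmy

        print(f"m at T={T}: {np.hypot(mx, my) / N:.3f} (ordered below T_c=0.5)")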

  17. A Monte Carlo Library Least Square approach in the Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) process in bulk coal samples

    NASA Astrophysics Data System (ADS)

    Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis

    2017-01-01

    A new Monte-Carlo Library Least Square (MCLLS) approach for treating non-linear radiation analysis problems in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the 3H(2H,n)4He reaction. The prompt gamma ray spectra from bulk samples of seven different materials were measured by a Bismuth Germanate (BGO) gamma detection system. Polyethylene was used as a neutron moderator, along with iron and lead as neutron and gamma ray shielding, respectively. The gamma detection system was equipped with a list-mode data acquisition system which streams spectroscopy data directly to the computer, event-by-event. The GEANT4 simulation toolkit was used for generating the single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least Square (LLLS) approach to fit an unknown experimental sample spectrum with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.

  18. Using Monte-Carlo approach for analysis of quantitative and qualitative operation of reservoirs system with regard to the inflow uncertainty

    NASA Astrophysics Data System (ADS)

    Motevalli, Mostafa; Zadbar, Ali; Elyasi, Elham; Jalaal, Maziar

    2015-05-01

    Operation of dam reservoir systems, as one of the main sources of our country's surface water, has particular importance. Since the hydrological and meteorological water-budget parameters governing reservoir operation are uncertain, and in order to choose a comprehensive and optimal policy for the operation analysis of these systems, water inflow is considered the most important uncertain hydrological parameter of a reservoir system. The Monte-Carlo approach was applied to study the impact of water inflow on the performance of both single- and multi-reservoir systems. To do so, synthetic monthly inflow time series were generated for each reservoir system, and the probable distributions of the time reliability, quantitative reliability, vulnerability, and resiliency criteria were analyzed in five different simulation and optimization models as the system's efficiency criteria. Karun 3, Karun 4, and Khersan 1 dams were chosen because three dams were needed to set up reservoir systems in both serial and parallel forms. The results of the operation criteria analysis indicated that, for the operation of the whole system, the best quantitative reliability, vulnerability, and resiliency values were obtained in the optimized single-reservoir model, and the best time reliability value in the optimized multi-reservoir model. Moreover, the inflow uncertainty had the minimum impact on the quantitative reliability criterion and the maximum impact on the resiliency criterion.

  19. Particle in cell/Monte Carlo collision analysis of the problem of identification of impurities in the gas by the plasma electron spectroscopy method

    NASA Astrophysics Data System (ADS)

    Kusoglu Sarikaya, C.; Rafatov, I.; Kudryavtsev, A. A.

    2016-06-01

    The work deals with a Particle in Cell/Monte Carlo Collision (PIC/MCC) analysis of the problem of detection and identification of impurities in the nonlocal plasma of a gas discharge using the Plasma Electron Spectroscopy (PLES) method. For this purpose, a 1d3v PIC/MCC code for numerical simulation of a glow discharge with a nonlocal electron energy distribution function is developed. Elastic, excitation, and ionization collisions between electron-neutral pairs; isotropic scattering and charge-exchange collisions between ion-neutral pairs; and Penning ionization are taken into account. Applicability of the numerical code is verified under radio-frequency capacitively coupled discharge conditions. The efficiency of the code is increased by parallelization using Open Message Passing Interface. As a demonstration of the PLES method, the parallel PIC/MCC code is applied to a direct current glow discharge in helium doped with a small amount of argon. Numerical results are consistent with the theoretical analysis of the formation of the nonlocal EEDF and with existing experimental data.

  20. Uncertainty of modelled urban peak O3 concentrations and its sensitivity to input data perturbations based on the Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Pineda Rojas, Andrea L.; Venegas, Laura E.; Mazzeo, Nicolás A.

    2016-09-01

    A simple urban air quality model [MODelo de Dispersión Atmosférica Urbana - Generic Reaction Set (DAUMOD-GRS)] was recently developed. One-hour peak O3 concentrations in the Metropolitan Area of Buenos Aires (MABA) during the summer estimated with the DAUMOD-GRS model have shown values lower than 20 ppb (the regional background concentration) in the urban area and levels greater than 40 ppb in its surroundings. Due to the lack of measurements outside the MABA, these relatively high modelled ozone concentrations constitute the only estimate for the area. In this work, a methodology based on Monte Carlo analysis is implemented to evaluate the uncertainty in these modelled concentrations associated with possible errors in the model input data. Results show that the larger 1-h peak O3 levels in the MABA during the summer present larger uncertainties (up to 47 ppb). On the other hand, multiple linear regression analysis is applied at selected receptors in order to identify the variables explaining most of the obtained variance. Although their relative contributions vary spatially, the uncertainty of the regional background O3 concentration dominates at all the analysed receptors (34.4-97.6%), indicating that its estimation could be improved to enhance the ability of the model to simulate peak O3 concentrations in the MABA.
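
    The two analysis steps, Monte Carlo perturbation of inputs followed by a regression-based attribution of the output variance, can be sketched with a toy surrogate in place of the dispersion model; the inputs, perturbation sizes, and response function below are all assumptions, not DAUMOD-GRS.

        # Two-step sketch: Monte Carlo perturbation of model inputs for an
        # uncertainty band on peak O3, then linear regression on the same
        # samples to attribute the variance.
        import numpy as np

        rng = np.random.default_rng(11)
        n = 3000

        bg = 20.0 * (1 + rng.normal(0, 0.20, n))    # background O3, ppb
        nox = 1.0 * (1 + rng.normal(0, 0.30, n))    # NOx emission factor
        voc = 1.0 * (1 + rng.normal(0, 0.30, n))    # VOC emission factor
        wind = 3.0 * (1 + rng.normal(0, 0.15, n))   # wind speed, m/s

        o3 = bg + 25.0 * voc * nox / wind           # toy peak-O3 response

        lo, hi = np.percentile(o3, [2.5, 97.5])
        print(f"peak O3 = {o3.mean():.1f} ppb, 95% band = {lo:.1f}..{hi:.1f}")

        # Regress the output on standardised inputs; squared coefficients
        # rank each (near-independent) input's share of the variance.
        X = np.column_stack([(u - u.mean()) / u.std()
                             for u in (bg, nox, voc, wind)])
        beta, *_ = np.linalg.lstsq(X, o3 - o3.mean(), rcond=None)
        for name, b in zip(("background", "NOx", "VOC", "wind"), beta):
            print(f"{name:>10}: share ~ {100 * b**2 / (beta**2).sum():.0f}%")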

  1. Challenges of Monte Carlo Transport

    SciTech Connect

    Long, Alex Roberts

    2016-06-10

    These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.

  2. A Monte Carlo Analysis of the Thrust Imbalance for the Space Launch System Booster During Both the Ignition Transient and Steady State Operation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty three variables which could impact the performance of the motors during the ignition transient and thirty eight variables which could impact the performance of the motors during steady state operation of the motor were identified and treated as statistical variables for the analyses. The effects of motor to motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.
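
    A minimal sketch of this kind of paired-motor Monte Carlo: sample shared (motor-to-motor) and within-pair variations, evaluate a thrust model for each motor of a pair, and take percentile bounds over many pairs as the imbalance envelope. Here thrust() is a made-up stand-in for the internal ballistics codes, and the 2% lot-level and 1%/0.5% within-pair sigmas are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        n_pairs = 1000  # as in the study

        def thrust(burn_rate, throat_area, t):
            # Hypothetical stand-in for an internal ballistics code (kN).
            return 16000.0 * burn_rate * throat_area * np.exp(-0.5 * t)

        t = np.linspace(0.0, 2.0, 50)  # ignition-transient time grid (s)
        imbalance = np.empty((n_pairs, t.size))
        for i in range(n_pairs):
            lot = rng.normal(1.0, 0.02)  # variation shared by both motors of a pair
            a = thrust(lot * rng.normal(1.0, 0.01), rng.normal(1.0, 0.005), t)
            b = thrust(lot * rng.normal(1.0, 0.01), rng.normal(1.0, 0.005), t)
            imbalance[i] = a - b

        # Envelope: symmetric bounds containing ~99.7% of the 1000 pairs.
        lo, hi = np.percentile(imbalance, [0.15, 99.85], axis=0)
        print("max |imbalance| bound over transient:", np.abs([lo, hi]).max())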

  3. A Monte Carlo Analysis of the Thrust Imbalance for the RSRMV Booster During Both the Ignition Transient and Steady State Operation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty three variables which could impact the performance of the motors during the ignition transient and thirty eight variables which could impact the performance of the motors during steady state operation of the motor were identified and treated as statistical variables for the analyses. The effects of motor to motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.

  4. Diffusion of oxygen interstitials in UO2+x using kinetic Monte Carlo simulations: Role of O/M ratio and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Behera, Rakesh K.; Watanabe, Taku; Andersson, David A.; Uberuaga, Blas P.; Deo, Chaitanya S.

    2016-04-01

    Oxygen interstitials in UO2+x significantly affect the thermophysical properties and microstructural evolution of the oxide nuclear fuel. In hyperstoichiometric Urania (UO2+x), these oxygen interstitials form different types of defect clusters, which have different migration behavior. In this study we have used kinetic Monte Carlo (kMC) to evaluate diffusivities of oxygen interstitials accounting for mono- and di-interstitial clusters. Our results indicate that the predicted diffusivities increase significantly at higher non-stoichiometry (x > 0.01) for di-interstitial clusters compared to a mono-interstitial only model. The diffusivities calculated at higher temperatures compare better with experimental values than at lower temperatures (< 973 K). We have discussed the resulting activation energies achieved for diffusion with all the mono- and di-interstitial models. We have carefully performed sensitivity analysis to estimate the effect of input di-interstitial binding energies on the predicted diffusivities and activation energies. While this article only discusses mono- and di-interstitials in evaluating oxygen diffusion response in UO2+x, future improvements to the model will primarily focus on including energetic definitions of larger stable interstitial clusters reported in the literature. The addition of larger clusters to the kMC model is expected to improve the comparison of oxygen transport in UO2+x with experiment.
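
    The residence-time (BKL/Gillespie-type) step at the heart of kinetic Monte Carlo is compact enough to show directly. A sketch for a single walker with two competing hop mechanisms, loosely echoing the mono-/di-interstitial distinction above; the barriers, hop lengths, and attempt frequency are assumed for illustration and are not the paper's inputs:

        import numpy as np

        rng = np.random.default_rng(1)
        kB_T = 8.617e-5 * 1200  # eV at an assumed 1200 K

        def kmc_hop_trace(n_steps):
            barriers = np.array([0.6, 0.9])  # assumed mono-/di-interstitial barriers (eV)
            jumps = np.array([1.0, 2.0])     # assumed hop lengths (lattice units)
            nu0 = 1e13                       # assumed attempt frequency (1/s)
            rates = nu0 * np.exp(-barriers / kB_T)
            total = rates.sum()
            p = rates / total
            x, t = 0.0, 0.0
            for _ in range(n_steps):
                t += rng.exponential(1.0 / total)      # residence time at the site
                k = rng.choice(2, p=p)                 # which mechanism fires
                x += rng.choice([-1.0, 1.0]) * jumps[k]
            return x, t

        x, t = kmc_hop_trace(50_000)
        # 1D Einstein relation D = <x^2> / (2t); a single trace shown for brevity,
        # a real estimate averages many trajectories.
        print(f"D ~ {x**2 / (2 * t):.3e} lattice-units^2/s")

    Changing the binding energies shifts the relative rates, which is exactly the sensitivity the authors probe with their input di-interstitial binding energies.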

  5. Temporal relation between the ADC and DC potential responses to transient focal ischemia in the rat: a Markov chain Monte Carlo simulation analysis.

    PubMed

    King, Martin D; Crowder, Martin J; Hand, David J; Harris, Neil G; Williams, Stephen R; Obrenovitch, Tihomir P; Gadian, David G

    2003-06-01

    Markov chain Monte Carlo simulation was used in a reanalysis of the longitudinal data obtained by Harris et al. (J Cereb Blood Flow Metab 20:28-36) in a study of the direct current (DC) potential and apparent diffusion coefficient (ADC) responses to focal ischemia. The main purpose was to provide a formal analysis of the temporal relationship between the ADC and DC responses, to explore the possible involvement of a common latent (driving) process. A Bayesian nonlinear hierarchical random coefficients model was adopted. DC and ADC transition parameter posterior probability distributions were generated using three parallel Markov chains created using the Metropolis algorithm. Particular attention was paid to the within-subject differences between the DC and ADC time course characteristics. The results show that the DC response is biphasic, whereas the ADC exhibits monophasic behavior, and that the two DC components are each distinguishable from the ADC response in their time dependencies. The DC and ADC changes are not, therefore, driven by a common latent process. This work demonstrates a general analytical approach to the multivariate, longitudinal data-processing problem that commonly arises in stroke and other biomedical research.
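
    For readers unfamiliar with the machinery, a minimal random-walk Metropolis sampler with three parallel chains, as in the study's setup, looks like the following sketch; log_post() is a simple stand-in target, not the hierarchical DC/ADC model:

        import numpy as np

        rng = np.random.default_rng(7)

        def log_post(theta):
            # Stand-in log-posterior (standard normal around an assumed mode);
            # a real analysis would evaluate the hierarchical model here.
            return -0.5 * np.sum((theta - np.array([1.0, -2.0]))**2)

        def metropolis(n, theta0, step=0.5):
            chain = np.empty((n, theta0.size))
            theta, lp = theta0, log_post(theta0)
            for i in range(n):
                prop = theta + rng.normal(0.0, step, theta.size)
                lp_prop = log_post(prop)
                if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
                    theta, lp = prop, lp_prop
                chain[i] = theta
            return chain

        # Three parallel chains from dispersed starting points.
        chains = [metropolis(5000, rng.normal(0, 3, 2)) for _ in range(3)]
        print("posterior means per chain:",
              [c[1000:].mean(0).round(2) for c in chains])  # discard burn-in

    Agreement of the post-burn-in means across the parallel chains is the usual quick convergence check before comparing posterior distributions, as done for the DC and ADC transition parameters.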

  6. Analysis of Intervention Strategies for Inhalation Exposure to Polycyclic Aromatic Hydrocarbons and Associated Lung Cancer Risk Based on a Monte Carlo Population Exposure Assessment Model

    PubMed Central

    Zhou, Bin; Zhao, Bin

    2014-01-01

    It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found air pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by population attributable fraction (PAF) and potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had major influence on the modeled PAH inhalation exposure and the rankings of different interventions. The ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and in different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making. PMID:24416436
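
    Once Monte Carlo exposure samples exist, the two summary metrics can be written down directly. A sketch with a toy linear risk model and assumed lognormal doses (risk(), the background rate, and the 40% exposure reduction are all illustrative, not the paper's fitted inputs):

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000

        # Hypothetical lognormal population inhalation doses (arbitrary units).
        dose_baseline = rng.lognormal(mean=4.0, sigma=0.8, size=n)
        dose_interv = dose_baseline * 0.6  # intervention removes 40% of exposure

        def risk(dose, background=5e-5, slope=1e-6):
            # Toy linear excess-risk model; real unit-risk factors would go here.
            return background + slope * dose

        r_obs = risk(dose_baseline).mean()
        paf = (r_obs - risk(0.0)) / r_obs                    # population attributable fraction
        pif = (r_obs - risk(dose_interv).mean()) / r_obs     # potential impact fraction
        print(f"PAF={paf:.1%}  PIF={pif:.1%}")

    Ranking candidate interventions then amounts to comparing their PIF values under the same sampled population, which is what makes the Monte Carlo framing convenient for policy comparison.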

  7. Diffuse X-ray scattering from 4,4'-dimethoxybenzil, C16H14O4: analysis via automatic refinement of a Monte Carlo model.

    PubMed

    Welberry, T R; Heerdegen, A P

    2003-12-01

    A recently developed method for fitting a Monte Carlo computer-simulation model to observed single-crystal diffuse X-ray scattering has been used to study the diffuse scattering in 4,4'-dimethoxybenzil, C16H14O4. A model involving only nine parameters, consisting of seven intermolecular force constants and two intramolecular torsional force constants, was refined to give an agreement factor, ωR = [Σω(ΔI)² / Σω I²(obs)]^(1/2), of 18.1% for 118 918 data points in two sections of data. The model was purely thermal in nature. The analysis has shown that the most prominent features of the diffraction patterns, viz. diffuse streaks that occur normal to the [101] direction, are due to longitudinal displacement correlations along chains of molecules extending in this direction. These displacements are transmitted from molecule to molecule via contacts involving pairs of hydrogen bonds between adjacent methoxy groups. In contrast to an earlier study of benzil itself, it was not found to be possible to determine, with any degree of certainty, the torsional force constants for rotations about the single bonds in the molecule. It is supposed that this result may be due to the limited data available in the present study.

  8. Depth sensitivity analysis of functional near-infrared spectroscopy measurement using three-dimensional Monte Carlo modelling-based magnetic resonance imaging.

    PubMed

    Mansouri, Chemseddine; L'huillier, Jean-Pierre; Kashou, Nasser H; Humeau, Anne

    2010-05-01

    Theoretical analysis of spatial distribution of near-infrared light propagation in head tissues is very important in brain function measurement, since it is impossible to measure the effective optical path length of the detected signal or the effect of optical fibre arrangement on the regions of measurement or its sensitivity. In this study a realistic head model generated from structure data from magnetic resonance imaging (MRI) was introduced into a three-dimensional Monte Carlo code and the sensitivity of functional near-infrared measurement was analysed. The effects of the distance between source and detector, and of the optical properties of the probed tissues, on the sensitivity of the optical measurement to deep layers of the adult head were investigated. The spatial sensitivity profiles of photons in the head, the so-called banana shape, and the partial mean optical path lengths in the skin-scalp and brain tissues were calculated, so that the contribution of different parts of the head to near-infrared spectroscopy signals could be examined. It was shown that the signal detected in brain function measurements was greatly affected by the heterogeneity of the head tissue and its scattering properties, particularly for the shorter interfibre distances.
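
    The "banana-shaped" sensitivity profile described here arises naturally from photon random walks. A minimal isotropic-scattering sketch in a semi-infinite medium, tallying how deep re-emitted photons travelled; the optical coefficients are assumed tissue-like values, and a real simulation would use anisotropic scattering and the MRI-derived layered head geometry:

        import numpy as np

        rng = np.random.default_rng(5)

        def photon_depths(n_photons, mu_s=10.0, mu_a=0.02):  # 1/mm, assumed
            """Isotropic random walk launched into tissue at z > 0; returns the
            maximum depth reached by photons that re-emerge at the surface."""
            depths = []
            for _ in range(n_photons):
                pos = np.zeros(3)
                dirn = np.array([0.0, 0.0, 1.0])  # launched normally into tissue
                zmax = 0.0
                for _ in range(1000):
                    step = rng.exponential(1.0 / (mu_s + mu_a))
                    pos = pos + step * dirn
                    zmax = max(zmax, pos[2])
                    if pos[2] < 0.0:              # photon exits the surface
                        depths.append(zmax)
                        break
                    if rng.uniform() < mu_a / (mu_s + mu_a):
                        break                      # absorbed
                    u = rng.normal(size=3)         # isotropic scatter direction
                    dirn = u / np.linalg.norm(u)
            return np.array(depths)

        d = photon_depths(2000)
        print(f"median sampling depth ~ {np.median(d):.2f} mm for re-emitted photons")

    Binning re-emitted photons by their exit position instead of pooling them would recover the dependence on source-detector separation that the abstract highlights.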

  9. Application of a XMM-Newton EPIC Monte Carlo to Analysis And Interpretation of Data for Abell 1689, RXJ0658-55 And the Centaurus Clusters of Galaxies

    SciTech Connect

    Andersson, Karl E.; Peterson, J.R.; Madejski, G.M.; /SLAC /KIPAC, Menlo Park

    2007-04-17

    We propose a new Monte Carlo method to study extended X-ray sources with the European Photon Imaging Camera (EPIC) aboard XMM-Newton. The Smoothed Particle Inference (SPI) technique, described in a companion paper, is applied here to the EPIC data for the clusters of galaxies Abell 1689, Centaurus and RXJ 0658-55 (the "bullet cluster"). We aim to show the advantages of this method of simultaneous spectral-spatial modeling over traditional X-ray spectral analysis. In Abell 1689 we confirm our earlier findings about structure in the temperature distribution and produce a high resolution temperature map. We also confirm our findings about velocity structure within the gas. In the bullet cluster, RXJ 0658-55, we produce the highest resolution temperature map ever to be published of this cluster, allowing us to trace what looks like the motion of the bullet in the cluster. We even detect a south-to-north temperature gradient within the bullet itself. In the Centaurus cluster we detect, by dividing up the luminosity of the cluster in bands of gas temperature, a striking feature to the north-east of the cluster core. We hypothesize that this feature is caused by a subcluster left over from a substantial merger that slightly displaced the core. We conclude that our method is very powerful in determining the spatial distributions of plasma temperatures and very useful for systematic studies in cluster structure.

  10. Quantum Monte Carlo analysis of a charge ordered insulating antiferromagnet: the Ti4O7 Magnéli phase

    SciTech Connect

    Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T.; Zhong, Xiaoliang; Kent, Paul R. C.; Heinonen, Olle

    2016-06-07

    The Magnéli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of its interplay between charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. In this paper, we have examined three low-lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate quantum Monte Carlo methods. We compare our results to those obtained from density functional theory-based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density-functional theory methods. However, the energy differences and spin distributions differ. Finally, a detailed analysis suggests that non-local exchange–correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates for spin moments, distributions, energy differences and energy gaps.

  11. Quantum Monte Carlo analysis of a charge ordered insulating antiferromagnet: The Ti4O7 Magneli phase

    SciTech Connect

    Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T.; Zhong, Xiaoling; Kent, Paul R. C.; Heinonen, Olle

    2016-06-07

    The Magneli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of its interplay between charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. We have examined three low- lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate Quantum Monte Carlo methods. We compare our results to those obtained from density functional theory- based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density-functional theory methods. However, the energy differences and spin distributions differ. Here, a detailed analysis suggests that non-local exchange-correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates for spin moments, distributions, energy differences and energy gaps.

  12. The effects of LIGO detector noise on a 15-dimensional Markov-chain Monte Carlo analysis of gravitational-wave signals

    NASA Astrophysics Data System (ADS)

    Raymond, V.; van der Sluys, M. V.; Mandel, I.; Kalogera, V.; Röver, C.; Christensen, N.

    2010-06-01

    Gravitational-wave signals from inspirals of binary compact objects (black holes and neutron stars) are primary targets of the ongoing searches by ground-based gravitational-wave (GW) interferometers (LIGO, Virgo and GEO-600). We present parameter estimation results from our Markov-chain Monte Carlo code SPINspiral on signals from binaries with precessing spins. Two data sets are created by injecting simulated GW signals either into synthetic Gaussian noise or into LIGO detector data. We compute the 15-dimensional probability-density functions (PDFs) for both data sets, as well as for a data set containing LIGO data with a known, loud artefact ('glitch'). We show that the analysis of the signal in detector noise yields accuracies similar to those obtained using simulated Gaussian noise. We also find that while the Markov chains from the glitch do not converge, the PDFs would look consistent with a GW signal present in the data. While our parameter estimation results are encouraging, further investigations into how to differentiate an actual GW signal from noise are necessary.

  13. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
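
    The essence of the fission matrix method is that once a spatially discretized kernel has been tallied from the random walks, the fundamental eigenpair can be extracted deterministically. A sketch with a toy tallied matrix standing in for MCNP's tallies; the kernel values and noise level are invented:

        import numpy as np

        rng = np.random.default_rng(11)
        n_cells = 20

        # Stand-in for a tallied fission matrix F: F[i, j] = expected fission
        # neutrons born in cell i per fission neutron started in cell j (toy
        # nearest-neighbour kernel plus statistical tally noise).
        F = np.diag(np.full(n_cells, 0.9))
        F += np.diag(np.full(n_cells - 1, 0.15), 1)
        F += np.diag(np.full(n_cells - 1, 0.15), -1)
        F += rng.normal(0.0, 0.005, (n_cells, n_cells))  # tally noise
        F = np.clip(F, 0.0, None)

        # Power iteration: the dominant eigenpair gives k_eff and the
        # fundamental-mode fission source shape.
        s = np.ones(n_cells) / n_cells
        for _ in range(200):
            s_new = F @ s
            k = s_new.sum() / s.sum()   # eigenvalue estimate
            s = s_new / s_new.sum()     # normalized source iterate
        print(f"k_eff ~ {k:.4f}")

    Because the matrix iteration converges regardless of the dominance ratio of the underlying problem, the tallied-kernel eigenvector can be fed back as an improved starting source, which is the acceleration idea described above.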

  14. Fast Monte Carlo for radiation therapy: the PEREGRINE Project

    SciTech Connect

    Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

    1997-11-11

    The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.

  15. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    the eventual hope is to apply this algorithm to the exploration of yet unidentified high-pressure, low-temperature phases of hydrogen, I employ this algorithm to determine whether or not quantum hard spheres can form a low-temperature bcc solid if exchange is not taken into account. In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity. This is because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems. Portions of this work have been published in Rubenstein, PRE (2010) and Rubenstein, PRA (2012) [167;169]. Other papers not discussed here published during my Ph.D. include Rubenstein, BPJ (2008) and Rubenstein, PRL (2012) [166;168]. The work in Chapters 6 and 7 is currently unpublished. [166] Brenda M. Rubenstein, Ivan Coluzza, and Mark A. Miller. Controlling the folding and substrate-binding of proteins using polymer brushes. Physical Review Letters, 108(20):208104, May 2012. [167] Brenda M. Rubenstein, J.E. Gubernatis, and J.D. Doll. Comparative monte carlo efficiency by monte carlo analysis. Physical Review E, 82(3):036701, September 2010. [168] Brenda M. Rubenstein and Laura J. Kaufman. The role of extracellular matrix in glioma invasion: A cellular potts model approach. Biophysical Journal, 95(12):5661-- 5680, December 2008. [169] Brenda M. Rubenstein, Shiwei Zhang, and David R. Reichman. Finite-temperature auxiliary-field quantum monte carlo for bose-fermi mixtures. Physical Review A, 86

  16. The structure of the muscle protein complex 4Ca2+·troponin C·troponin I: A Monte Carlo modeling analysis of small-angle X-ray and neutron scattering data

    SciTech Connect

    Olah, G.A.; Trewhella, J.

    1995-11-01

    Analysis of scattering data based on a Monte Carlo integration method was used to obtain a low-resolution model of the 4Ca2+·troponin C·troponin I complex. This modeling method allows rapid testing of plausible structures, where the best-fit model can be ascertained by comparison between model-structure scattering profiles and measured scattering data. In the best-fit model, troponin I appears as a spiral structure that wraps about 4Ca2+·troponin C, which adopts an extended dumbbell conformation similar to that observed in the crystal structures of troponin C. The Monte Carlo modeling method can be applied to other biological systems in which detailed structural information is lacking.

  17. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    EPA Science Inventory

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  18. Monte Carlo Approach for Reliability Estimations in Generalizability Studies.

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…

  19. SABRINA: an interactive solid geometry modeling program for Monte Carlo

    SciTech Connect

    West, J.T.

    1985-01-01

    SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body-geometry or surface-geometry models and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort of constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis.

  20. Lunar Regolith Albedos Using Monte Carlos

    NASA Technical Reports Server (NTRS)

    Wilson, T. L.; Andersen, V.; Pinsky, L. S.

    2003-01-01

    The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.

  1. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.

  2. An Overview of the Monte Carlo Application ToolKit (MCATK)

    SciTech Connect

    Trahan, Travis John

    2016-01-07

    MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed for building specialized applications and for providing new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies with the aim of reducing costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous-energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, time-dependent algorithm, fission chain algorithms; MCATK geometry – mesh geometries, solid-body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross-section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analyses are discussed, and the results of a dynamic test problem are given.

  3. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  4. Monte Carlo capabilities of the SCALE code system

    SciTech Connect

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; Marshall, William J.

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  5. Monte Carlo Capabilities of the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  6. Quantitative Phylogenomics of Within-Species Mitogenome Variation: Monte Carlo and Non-Parametric Analysis of Phylogeographic Structure among Discrete Transatlantic Breeding Areas of Harp Seals (Pagophilus groenlandicus).

    PubMed

    Carr, Steven M; Duggan, Ana T; Stenson, Garry B; Marshall, H Dawn

    2015-01-01

    Phylogenomic analysis of highly-resolved intraspecific phylogenies obtained from complete mitochondrial DNA genomes has had great success in clarifying relationships within and among human populations, but has found limited application in other wild species. Analytical challenges include assessment of random versus non-random phylogeographic distributions, and quantification of differences in tree topologies among populations. Harp Seals (Pagophilus groenlandicus Erxleben, 1777) have a biogeographic distribution based on four discrete trans-Atlantic breeding and whelping populations located on "fast ice" attached to land in the White Sea, Greenland Sea, the Labrador Ice Front, and the Southern Gulf of St Lawrence. This East-to-West distribution provides a set of a priori phylogeographic hypotheses. Outstanding biogeographic questions include the degree of genetic distinctiveness among these populations, in particular between the Greenland Sea and White Sea grounds. We obtained complete coding-region DNA sequences (15,825 bp) for 53 seals. Each seal has a unique mtDNA genome sequence, and sequences differ by 6-107 substitutions. Six major clades/groups are detectable by parsimony, neighbor-joining, and Bayesian methods, all of which are found in breeding populations on either side of the Atlantic. The species coalescent is at 180 KYA; the most recent clade, which accounts for 66% of the diversity, reflects an expansion during the mid-Wisconsinan glaciation 40-60 KYA. FST is significant only between the White Sea and Greenland Sea or Ice Front populations. Hierarchical AMOVA of 2-, 3-, or 4-island models identifies small but significant ΦSC among populations within groups, but not among groups. A novel Monte-Carlo simulation indicates that the observed distribution of individuals within breeding populations over the phylogenetic tree requires significantly fewer dispersal events than random expectation, consistent with island or a priori East to West 2- or 3-stepping

  7. SU-E-J-09: A Monte Carlo Analysis of the Relationship Between Cherenkov Light Emission and Dose for Electrons, Protons, and X-Ray Photons

    SciTech Connect

    Glaser, A; Zhang, R; Gladstone, D; Pogue, B

    2014-06-01

    Purpose: A number of recent studies have proposed that light emitted by the Cherenkov effect may be used for a number of radiation therapy dosimetry applications. Here we investigate the fundamental nature and accuracy of the technique for the first time by using a theoretical and Monte Carlo based analysis. Methods: Using the GEANT4 architecture for medically-oriented simulations (GAMOS) and BEAMnrc for phase space file generation, the light yield, material variability, field size and energy dependence, and overall agreement between the Cherenkov light emission and dose deposition for electron, proton, and flattened, unflattened, and parallel opposed x-ray photon beams was explored. Results: Due to the exponential attenuation of x-ray photons, Cherenkov light emission and dose deposition were identical for monoenergetic pencil beams. However, polyenergetic beams exhibited errors with depth due to beam hardening, with the error being inversely related to beam energy. For finite field sizes, the error with depth was inversely proportional to field size, and lateral errors in the umbra were greater for larger field sizes. For opposed beams, the technique was most accurate due to an averaging out of beam hardening in a single beam. The technique was found to be not suitable for measuring electron beams, except for relative dosimetry of a plane at a single depth. Due to a lack of light emission, the technique was found to be unsuitable for proton beams. Conclusions: The results from this exploratory study suggest that optical dosimetry by the Cherenkov effect may be most applicable to near monoenergetic x-ray photon beams (e.g. Co-60), dynamic IMRT and VMAT plans, as well as narrow beams used for SRT and SRS. For electron beams, the technique would be best suited for superficial dosimetry, and for protons the technique is not applicable due to a lack of light emission. NIH R01CA109558 and R21EB017559.

  8. Physiologically-based toxicokinetic model for cadmium using Markov-chain Monte Carlo analysis of concentrations in blood, urine, and kidney cortex from living kidney donors.

    PubMed

    Fransson, Martin Niclas; Barregard, Lars; Sallsten, Gerd; Akerstrom, Magnus; Johanson, Gunnar

    2014-10-01

    The health effects of low-level chronic exposure to cadmium are increasingly recognized. To improve the risk assessment, it is essential to know the relation between cadmium intake, body burden, and biomarker levels of cadmium. We combined a physiologically-based toxicokinetic (PBTK) model for cadmium with a data set from healthy kidney donors to re-estimate the model parameters and to test the effects of gender and serum ferritin on systemic uptake. Cadmium levels in whole blood, blood plasma, kidney cortex, and urinary excretion from 82 men and women were used to calculate posterior distributions for model parameters using Markov-chain Monte Carlo analysis. For never- and ever-smokers combined, the daily systemic uptake was estimated at 0.0063 μg cadmium/kg body weight in men, with 35% higher uptake in women and a daily uptake of 1.2 μg for each pack-year per calendar year of smoking. The rate of urinary excretion from cadmium accumulated in the kidney was estimated at 0.000042 day^-1, corresponding to a half-life of 45 years in the kidneys. We have provided an improved model of cadmium kinetics. As the new parameter estimates derive from a single study with measurements in several compartments in each individual, these new estimates are likely to be more accurate than the previous ones, where the data used originated from unrelated data sets. The estimated urinary excretion of cadmium accumulated in the kidneys was much lower than previous estimates; neglecting this finding may result in a marked under-prediction of the true kidney burden.

  9. Deciding on the Number of Classes in Latent Class Analysis and Growth Mixture Modeling: A Monte Carlo Simulation Study

    ERIC Educational Resources Information Center

    Nylund, Karen L.; Asparouhov, Tihomir; Muthen, Bengt O.

    2007-01-01

    Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models' usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study…

  10. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  11. Extra Chance Generalized Hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; Sanz-Serna, J. M.

    2015-01-01

    We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra chance approach pays off in terms of the quality of the samples generated.

  12. More about Zener drag studies with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Di Prinzio, Carlos L.; Druetta, Esteban; Nasello, Olga Beatriz

    2013-03-01

    Grain growth (GG) processes in the presence of second-phase and stationary particles have been widely studied but the results found are inconsistent. We present new GG simulations in two- and three-dimensional (2D and 3D) polycrystalline samples with second phase stationary particles, using the Monte Carlo technique. Simulations using values of particle concentration greater than 15% and particle radii different from 1 or 3 are performed, thus covering a range of particle radii and concentrations not previously studied. It is shown that only the results for 3D samples follow Zener's law.

  13. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  14. Uncertainty propagation in a stratospheric model. I - Development of a concise stratospheric model. II - Monte Carlo analysis of imprecisions due to reaction rates. [for ozone depletion prediction

    NASA Technical Reports Server (NTRS)

    Rundel, R. D.; Butler, D. M.; Stolarski, R. S.

    1978-01-01

    The paper discusses the development of a concise stratospheric model which uses iteration to obtain coupling between interacting species. The one-dimensional, steady-state, diurnally-averaged model generates diffusion equations with appropriate sources and sinks for the species odd oxygen, H2O, H2, CO, N2O, odd nitrogen, CH4, CH3Cl, CCl4, CF2Cl2, CFCl3, and odd chlorine. The model evaluates steady-state perturbations caused by injections of chlorine and NOx and may be used to predict ozone depletion. The model is used in a Monte Carlo study of the propagation of reaction-rate imprecisions by calculating an ozone perturbation caused by the addition of chlorine. Since the model is sensitive to only 10 of the more than 50 reaction rates considered, only about 1000 Monte Carlo cases are required to span the space of possible results.
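
    The propagation scheme described here is straightforward to sketch: sample the uncertain rate constants, evaluate the model, and examine the spread of the predicted perturbation. In the sketch below, ozone_perturbation() is a toy surrogate for the stratospheric model and the lognormal uncertainty factors are assumed:

        import numpy as np

        rng = np.random.default_rng(2)

        def ozone_perturbation(k):
            # Toy surrogate for the model's steady-state ozone change (%);
            # the real calculation iterates coupled diffusion/chemistry equations.
            return -5.0 * k[0] / k[1] * np.sqrt(k[2])

        n = 1000  # roughly the number of cases the study found sufficient
        k_nominal = np.array([1.0, 1.0, 1.0])
        k_sigma = np.array([1.3, 1.2, 1.5])  # assumed lognormal uncertainty factors

        samples = k_nominal * np.exp(rng.normal(0.0, np.log(k_sigma), (n, 3)))
        dO3 = np.array([ozone_perturbation(k) for k in samples])
        print(f"ozone depletion: median={np.median(dO3):.2f}%, "
              f"68% range={np.percentile(dO3, [16, 84]).round(2)}")

    The study's observation that only ~10 of 50+ rates matter is what keeps the required sample size near 1000: the effective dimensionality of the input space is much smaller than its nominal size.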

  15. Analysis of the radiation shielding of the bunker of a 230MeV proton cyclotron therapy facility; comparison of analytical and Monte Carlo techniques.

    PubMed

    Sunil, C

    2016-04-01

    The neutron ambient dose equivalent outside the radiation shield of a proton therapy cyclotron vault is estimated using the unshielded dose equivalent rates and the attenuation lengths obtained from the literature and by simulations carried out with the FLUKA Monte Carlo radiation transport code. The source terms derived from the literature and that obtained from the FLUKA calculations differ by a factor of 2-3, while the attenuation lengths obtained from the literature differ by 20-40%. The instantaneous dose equivalent rates outside the shield differ by a few orders of magnitude, not only in comparison with the Monte Carlo simulation results, but also with the results obtained by line of sight attenuation calculations with the different parameters obtained from the literature. The attenuation of neutrons caused by the presence of bulk iron, such as magnet yokes is expected to reduce the dose equivalent by as much as a couple of orders of magnitude outside the shield walls.

  16. Kernel density estimator methods for Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Banerjee, Kaushik

    both the surface crossing tally and the point detector tally converge as 1/N (in variance) and both are asymptotically unbiased. KDE is also applied to Monte Carlo eigenvalue calculations for nuclear reactor analyses. KDE is used to estimate the fission source distribution at the end of each generation and realizations from the estimated source distribution are used as the starting locations for the next generation. The methodology is illustrated by applications to 1D and 3D configurations. The source convergence is measured by the relative source entropy. Significant source convergence improvement is observed for the proposed KDE method compared to the conventional Monte Carlo fission source iteration.
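
    The generation-to-generation use of KDE described above can be illustrated with scipy.stats.gaussian_kde (a generic Gaussian KDE, not the specialized estimators developed in the thesis); the toy two-region 1D source and the bin count are assumptions:

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(9)

        # Fission birth sites from one generation (toy 1D core, two fuel regions).
        births = np.concatenate([rng.normal(-2.0, 0.5, 400),
                                 rng.normal(2.0, 0.5, 600)])

        kde = gaussian_kde(births)              # smooth fission-source estimate
        next_gen = kde.resample(1000).ravel()   # starting sites for generation n+1

        # Shannon entropy of the binned source, the usual convergence diagnostic.
        counts, _ = np.histogram(next_gen, bins=20)
        p = counts[counts > 0] / counts.sum()
        print(f"source entropy = {-(p * np.log2(p)).sum():.3f} bits")

    Sampling the next generation from the smoothed density rather than from the discrete bank of birth points is what damps the generation-to-generation noise and speeds up the entropy plateau.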

  17. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Lazopoulos, Achilleas

    2006-07-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
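
    One concrete realization of the ensemble idea advocated here is randomized (scrambled) QMC, available in scipy.stats.qmc: independent scramblings of a Sobol sequence form a stochastic ensemble from which an error estimate can be read off. The integrand below is a smooth test function with known integral 1; this illustrates the concept rather than reproducing the paper's estimator construction:

        import numpy as np
        from scipy.stats import qmc

        rng = np.random.default_rng(4)
        # Smooth test integrand on [0,1]^d with exact integral 1.
        f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

        n, d, m = 4096, 5, 16  # n a power of 2 for Sobol balance

        # Plain Monte Carlo error from independent replications.
        mc = np.array([f(rng.random((n, d))).mean() for _ in range(m)])

        # Randomized QMC: the spread over independent scramblings serves
        # as a stochastic error estimate.
        rqmc = np.array([
            f(qmc.Sobol(d, scramble=True, seed=s).random(n)).mean()
            for s in range(m)
        ])

        print(f"MC  : mean={mc.mean():.6f}  std={mc.std():.2e}")
        print(f"RQMC: mean={rqmc.mean():.6f}  std={rqmc.std():.2e}")

    For smooth integrands the RQMC spread is typically orders of magnitude smaller than the plain MC spread at equal sample count, which is exactly the improvement the standard independent-points error estimator fails to capture.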

  18. Analysis of dpa rates in the HFIR reactor vessel using a hybrid Monte Carlo/deterministic method

    SciTech Connect

    Blakeman, Edward

    2016-01-01

    The Oak Ridge High Flux Isotope Reactor (HFIR), which began full-power operation in 1966, provides one of the highest steady-state neutron flux levels of any research reactor in the world. An ongoing vessel integrity analysis program to assess radiation-induced embrittlement of the HFIR reactor vessel requires the calculation of neutron and gamma displacements per atom (dpa), particularly at locations near the beam tube nozzles, where radiation streaming effects are most pronounced. In this study we apply the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) technique in the ADVANTG code to develop variance reduction parameters for use in the MCNP radiation transport code. We initially evaluated dpa rates for dosimetry capsule locations, regions in the vicinity of the HB-2 beamline, and the vessel beltline region. We then extended the study to provide dpa rate maps using three-dimensional cylindrical mesh tallies that extend from approximately 12 in. below to approximately 12 in. above the axial extent of the core. The mesh tally structures contain over 15,000 mesh cells, providing a detailed spatial map of neutron and photon dpa rates at all locations of interest. Relative errors in the mesh tally cells are typically less than 1%.

  19. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

    In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalent relationship between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions-NSE, Generalized Error Distribution with BC (BC-GED) and Skew Generalized Error Distribution with BC (BC-SGED)-are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum-variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, in which large errors have low probability while small errors around zero are approximately equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as demonstrated by the groundwater level simulation. The assumption of skewness in the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.

  20. Analysis of dpa Rates in the HFIR Reactor Vessel using a Hybrid Monte Carlo/Deterministic Method

    NASA Astrophysics Data System (ADS)

    Risner, J. M.; Blakeman, E. D.

    2016-02-01

    The Oak Ridge High Flux Isotope Reactor (HFIR), which began full-power operation in 1966, provides one of the highest steady-state neutron flux levels of any research reactor in the world. An ongoing vessel integrity analysis program to assess radiation-induced embrittlement of the HFIR reactor vessel requires the calculation of neutron and gamma displacements per atom (dpa), particularly at locations near the beam tube nozzles, where radiation streaming effects are most pronounced. In this study we apply the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) technique in the ADVANTG code to develop variance reduction parameters for use in the MCNP radiation transport code. We initially evaluated dpa rates for dosimetry capsule locations, regions in the vicinity of the HB-2 beamline, and the vessel beltline region. We then extended the study to provide dpa rate maps using three-dimensional cylindrical mesh tallies that extend from approximately 12 in. below to approximately 12 in. above the height of the core. The mesh tally structures contain over 15,000 mesh cells, providing a detailed spatial map of neutron and photon dpa rates at all locations of interest. Relative errors in the mesh tally cells are typically less than 1%.

  1. Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.

    PubMed

    Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some of the volume measurement methods based on it have low accuracy. An alternative is the Monte Carlo method, which measures volume using random points: it only requires information on whether random points fall inside or outside an object, and it does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
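
    The core of the Monte Carlo volume estimate is easy to state: sample points uniformly in a bounding box, test whether each falls inside the object, and scale the hit fraction by the box volume. In the sketch below an analytic ellipsoid stands in for the paper's inside/outside test against five binary camera images:

        import numpy as np

        rng = np.random.default_rng(8)

        def inside(p, a=3.0, b=2.0, c=1.5):
            # Stand-in inside/outside test; the paper instead projects each
            # point into five binary camera images and checks all views.
            return (p[:, 0]/a)**2 + (p[:, 1]/b)**2 + (p[:, 2]/c)**2 <= 1.0

        n = 1_000_000
        half = np.array([3.0, 2.0, 1.5])              # bounding-box half-widths
        pts = rng.uniform(-half, half, (n, 3))        # uniform points in the box
        vol = inside(pts).mean() * np.prod(2 * half)  # hit fraction x box volume

        exact = 4.0 / 3.0 * np.pi * 3.0 * 2.0 * 1.5
        print(f"MC volume={vol:.3f}, exact={exact:.3f}, "
              f"rel err={abs(vol - exact) / exact:.2%}")

    The statistical error shrinks as 1/sqrt(n), so the heuristic adjustment in the paper targets the systematic part of the error introduced by the imperfect image-based inside/outside test rather than the sampling noise.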

  2. Use of Markov Chain Monte Carlo analysis with a physiologically-based pharmacokinetic model of methylmercury to estimate exposures in US women of childbearing age.

    PubMed

    Allen, Bruce C; Hack, C Eric; Clewell, Harvey J

    2007-08-01

    A Bayesian approach, implemented using Markov Chain Monte Carlo (MCMC) analysis, was applied with a physiologically-based pharmacokinetic (PBPK) model of methylmercury (MeHg) to evaluate the variability of MeHg exposure in women of childbearing age in the U.S. population. The analysis made use of the newly available National Health and Nutrition Examination Survey (NHANES) blood and hair mercury concentration data for women of age 16-49 years (sample size, 1,582). Bayesian analysis was performed to estimate the population variability in MeHg exposure (daily ingestion rate) implied by the variation in blood and hair concentrations of mercury in the NHANES database. The measured variability in the NHANES blood and hair data represents the result of a process that includes interindividual variation in exposure to MeHg and interindividual variation in the pharmacokinetics (distribution, clearance) of MeHg. The PBPK model includes a number of pharmacokinetic parameters (e.g., tissue volumes, partition coefficients, rate constants for metabolism and elimination) that can vary from individual to individual within the subpopulation of interest. Using MCMC analysis, it was possible to combine prior distributions of the PBPK model parameters with the NHANES blood and hair data, as well as with kinetic data from controlled human exposures to MeHg, to derive posterior distributions that refine the estimates of both the population exposure distribution and the pharmacokinetic parameters. In general, based on the populations surveyed by NHANES, the results of the MCMC analysis indicate that a small fraction, less than 1%, of the U.S. population of women of childbearing age may have mercury exposures greater than the EPA RfD for MeHg of 0.1 microg/kg/day, and that there are few, if any, exposures greater than the ATSDR MRL of 0.3 microg/kg/day. The analysis also indicates that typical exposures may be greater than previously estimated from food consumption surveys, but that the variability

  3. Monte Carlo Markov chains analysis of WMAP3 and SDSS data points to broken symmetry inflaton potentials and provides a lower bound on the tensor to scalar ratio

    SciTech Connect

    Destri, C.; Vega, H. J. de; Sanchez, N. G.

    2008-02-15

    We perform a Monte Carlo Markov chains (MCMC) analysis of the available cosmic microwave background (CMB) and large scale structure (LSS) data (including the three years WMAP data) with single field slow-roll new inflation and chaotic inflation models. We do this within our approach to inflation as an effective field theory in the Ginsburg-Landau spirit with fourth degree trinomial potentials in the inflaton field φ. We derive explicit formulae and study in detail the spectral index n_s of the adiabatic fluctuations, the ratio r of tensor to scalar fluctuations, and the running index dn_s/dln k. We use these analytic formulas as hard constraints on n_s and r in the MCMC analysis. Our analysis differs in this crucial aspect from previous MCMC studies in the literature involving the WMAP3 data. Our results are as follows: (i) The data strongly indicate the breaking (whether spontaneous or explicit) of the φ → -φ symmetry of the inflaton potentials both for new and for chaotic inflation. (ii) Trinomial new inflation naturally satisfies this requirement and provides an excellent fit to the data. (iii) Trinomial chaotic inflation produces the best fit in a very narrow corner of the parameter space. (iv) The chaotic symmetric trinomial potential is almost certainly ruled out (at 95% C.L.). In trinomial chaotic inflation the MCMC runs go towards a potential on the boundary of the parameter space which resembles a spontaneously symmetry broken potential of new inflation. (v) The above results and further physical analysis here lead us to conclude that new inflation gives the best description of the data. (vi) We find a lower bound for r within trinomial new inflation potentials: r > 0.016 (95% CL) and r > 0.049 (68% CL). (vii) The preferred new inflation trinomial potential is a double well, even function of the field with a moderate quartic coupling, yielding as most probable values: n_s ≈ 0.958, r ≈ 0.055. This value

  4. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single-force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  5. Dairy goat kids fed liquid diets in substitution of goat milk and slaughtered at different ages: an economic viability analysis using Monte Carlo techniques.

    PubMed

    Knupp, L S; Veloso, C M; Marcondes, M I; Silveira, T S; Silva, A L; Souza, N O; Knupp, S N R; Cannas, A

    2016-03-01

    The aim of this study was to analyze the economic viability of producing dairy goat kids fed liquid diets as alternatives to goat milk and slaughtered at two different ages. Forty-eight male newborn Saanen and Alpine kids were selected and allocated to four groups using a completely randomized factorial design: goat milk (GM), cow milk (CM), commercial milk replacer (CMR) and fermented cow colostrum (FC). Each group was then divided into two groups: slaughter at 60 and 90 days of age. The animals received Tifton hay and concentrate ad libitum. Total costs of liquid and solid feed plus labor, income, and average gross margin were calculated. The data were then analyzed using Monte Carlo techniques with the @Risk 5.5 software, running 1,000 iterations of the variables studied in the model. The kids fed GM and CMR generated negative profitability values when slaughtered at 60 days (US$ -16.4 and US$ -2.17, respectively) and also at 90 days (US$ -30.8 and US$ -0.18, respectively). The risk analysis showed that there is a 98% probability that profitability would be negative when GM is used. In this regard, CM and FC presented low risk when the kids were slaughtered at 60 days (8.5% and 21.2%, respectively) and an even lower risk when animals were slaughtered at 90 days (5.2% and 3.8%, respectively). The kids fed CM and slaughtered at 90 days presented the highest average gross income (US$ 67.88) and also the highest average gross margin (US$ 18.43/animal). For the 60-day rearing regime to be economically viable, the CMR cost should not exceed 11.47% of the animal selling price. This implies that the replacer cannot cost more than US$ 0.39/kg and US$ 0.43/kg for the 60- and 90-day feeding regimes, respectively. The sensitivity analysis showed that the variables with the greatest impact on the final model's results were animal selling price, liquid diet cost, final weight at slaughter, and labor. In conclusion, the production of male dairy goat kids can be economically
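
    The risk figures above come from a spreadsheet-style stochastic simulation (@Risk). As a rough illustration of the same workflow, the sketch below propagates assumed input distributions through a gross-margin model and reads off the probability of negative profitability; all distribution shapes and parameters are hypothetical, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 1000  # iterations, matching the study's 1,000

    # Hypothetical input distributions (illustrative only, not the paper's data)
    selling_price = rng.normal(60.0, 6.0, N)            # US$/animal
    liquid_diet = rng.triangular(20.0, 25.0, 35.0, N)   # US$/animal
    solid_feed = rng.normal(8.0, 1.5, N)                # US$/animal
    labor = rng.normal(10.0, 2.0, N)                    # US$/animal

    margin = selling_price - (liquid_diet + solid_feed + labor)

    print(f"average gross margin: US$ {margin.mean():.2f}/animal")
    print(f"risk of negative profitability: {np.mean(margin < 0):.1%}")
    ```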

  6. MONITOR- MONTE CARLO INVESTIGATION OF TRAJECTORY OPERATIONS AND REQUIREMENTS

    NASA Technical Reports Server (NTRS)

    Glass, A. B.

    1994-01-01

    The Monte Carlo Investigation of Trajectory Operations and Requirements (MONITOR) program was developed to perform spacecraft mission maneuver simulations for geosynchronous, single-maneuver, and comet-encounter-type trajectories. MONITOR is a multifaceted program which enables the modeling of various orbital sequences and missions, the generation of Monte Carlo simulation statistics, and the parametric scanning of user-requested variables over specified intervals. The MONITOR program has been used primarily to study geosynchronous missions and has the capability to model Space Shuttle-deployed satellite trajectories. The ability to perform a Monte Carlo error analysis of user-specified orbital parameters using predicted maneuver execution errors can make MONITOR a significant part of any mission planning and analysis system. The MONITOR program can be executed in four operational modes. In the first mode, analytic state covariance matrix propagation is performed using state transition matrices for the coasting and powered burn phases of the trajectory. A two-body central force field is assumed throughout the analysis. Histograms of the final orbital elements and other state-dependent variables may be evaluated by a Monte Carlo analysis. In the second mode, geosynchronous missions can be simulated from parking orbit injection through station acquisition. A two-body central force field is assumed throughout the simulation. Nominal mission studies can be conducted; however, the main use of this mode lies in evaluating the behavior of pertinent orbital trajectory parameters by making use of a Monte Carlo analysis. In the third mode, MONITOR performs parametric scans of user-requested variables for a nominal mission. Various orbital sequences may be specified; however, primary use is devoted to geosynchronous missions. A maximum of five variables may be scanned at a time. The fourth mode simulates a mission from orbit injection through comet encounter with optional

  7. Development of a Monte Carlo code for the data analysis of the {sup 18}F(p,α){sup 15}O reaction at astrophysical energies

    SciTech Connect

    Caruso, A.; Cherubini, S.; Spitaleri, C.; La Cognata, M.; Lamia, L.; Rapisarda, G.; Romano, S.; Sergi, ML.; Crucillà, V.; Gulino, M.; Kubono, S.; Yamaguchi, H.; Hayakawa, S.; Wakabayashi, Y.; Iwasa, N.; Kato, S.; Komatsubara, T.; Teranishi, T.; Coc, A.; Hammache, F.; and others

    2015-02-24

    Novae are violent explosive astrophysical events occurring in close binary systems consisting of a white dwarf and a main-sequence star or a star in a more advanced stage of evolution. They are called 'narrow systems' because the two components interact with each other: matter is transferred from the companion star to the white dwarf, leading to the formation around the latter of an accretion disk, composed mainly of hydrogen. Over time, more and more material accumulates until the pressure and temperature reached are sufficient to trigger nuclear fusion reactions, rapidly converting a large part of the hydrogen into heavier elements. The products of 'hot hydrogen burning' are then injected into the interstellar medium by the violent explosions. Studies of the element abundances observed in these events can provide important information about the stages of stellar evolution. During nova outbursts some radioactive isotopes are synthesized: in particular, short-lived nuclei such as 13N and 18F decay with subsequent emission of gamma radiation at energies below 511 keV. The gamma rays produced by electron-positron annihilation of the positrons emitted in the decay of 18F are the most abundant and the first observable as soon as the atmosphere of the nova starts to become transparent to gamma radiation. Hence the importance of studying the nuclear reactions that lead both to the formation and to the destruction of 18F. Among these, the 18F(p,α)15O reaction is one of the main destruction channels. This reaction was therefore studied at energies of astrophysical interest. The experiment performed at RIKEN, Japan, has as its objective the study of the 18F(p,α)15O reaction, using a beam of 18F produced at CRIB, to derive important information about the nova phenomenon. In this paper we present the experimental technique and the Monte Carlo code

  8. Keno-Nr a Monte Carlo Code Simulating the Californium -252-SOURCE-DRIVEN Noise Analysis Experimental Method for Determining Subcriticality

    NASA Astrophysics Data System (ADS)

    Ficaro, Edward Patrick

    The 252Cf-source-driven noise analysis (CSDNA) method requires the measurement of the cross power spectral density (CPSD) G_23(ω) between a pair of neutron detectors (subscripts 2 and 3) located in or near the fissile assembly, and the CPSDs G_12(ω) and G_13(ω) between the neutron detectors and an ionization chamber 1 containing 252Cf, also located in or near the fissile assembly. The key advantage of this method is that the subcriticality of the assembly can be obtained from the ratio of spectral densities, G_12*(ω)G_13(ω) / [G_11(ω)G_23(ω)], using a point kinetic model formulation that is independent of the detectors' properties and of a reference measurement. The multigroup Monte Carlo code KENO-NR was developed to eliminate the dependence of the measurement on the point kinetic formulation. This code utilizes time-dependent, analog neutron tracking to simulate the experimental method, in addition to the underlying nuclear physics, as closely as possible. From a direct comparison of simulated and measured data, the calculational model and cross sections are validated for the calculation, and KENO-NR can then be rerun to provide a distributed-source k_eff calculation. Depending on the fissile assembly, a few hours to a couple of days of computation time are needed for a typical simulation executed on a desktop workstation. In this work, KENO-NR demonstrated the ability to accurately estimate the measured ratio of spectral densities from experiments using capture detectors performed on uranium metal cylinders, a cylindrical tank filled with aqueous uranyl nitrate, and arrays of safe storage bottles filled with uranyl nitrate. Good agreement was also seen between simulated and measured values of the prompt neutron decay constant from the fitted CPSDs. Poor agreement was seen between simulated and measured results using composite 6Li-glass-plastic scintillators at large subcriticalities for the tank of

  9. Development of a Monte Carlo code for the data analysis of the 18F(p,α)15O reaction at astrophysical energies

    NASA Astrophysics Data System (ADS)

    Caruso, A.; Cherubini, S.; Spitaleri, C.; Crucillà, V.; Gulino, M.; La Cognata, M.; Lamia, L.; Rapisarda, G.; Romano, S.; Sergi, ML.; Kubono, S.; Yamaguchi, H.; Hayakawa, S.; Wakabayashi, Y.; Iwasa, N.; Kato, S.; Komatsubara, T.; Teranishi, T.; Coc, A.; Hammache, F.; de Séréville, N.

    2015-02-01

    Novae are violent explosive astrophysical events occurring in close binary systems consisting of a white dwarf and a main-sequence star or a star in a more advanced stage of evolution. They are called "narrow systems" because the two components interact with each other: matter is transferred from the companion star to the white dwarf, leading to the formation around the latter of an accretion disk, composed mainly of hydrogen. Over time, more and more material accumulates until the pressure and temperature reached are sufficient to trigger nuclear fusion reactions, rapidly converting a large part of the hydrogen into heavier elements. The products of "hot hydrogen burning" are then injected into the interstellar medium by the violent explosions. Studies of the element abundances observed in these events can provide important information about the stages of stellar evolution. During nova outbursts some radioactive isotopes are synthesized: in particular, short-lived nuclei such as 13N and 18F decay with subsequent emission of gamma radiation at energies below 511 keV. The gamma rays produced by electron-positron annihilation of the positrons emitted in the decay of 18F are the most abundant and the first observable as soon as the atmosphere of the nova starts to become transparent to gamma radiation. Hence the importance of studying the nuclear reactions that lead both to the formation and to the destruction of 18F. Among these, the 18F(p,α)15O reaction is one of the main destruction channels. This reaction was therefore studied at energies of astrophysical interest. The experiment performed at RIKEN, Japan, has as its objective the study of the 18F(p,α)15O reaction, using a beam of 18F produced at CRIB, to derive important information about the nova phenomenon. In this paper we present the experimental technique and the Monte Carlo code developed to be used in the data analysis process.

  10. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  11. Self-learning Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang

    2017-01-01

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large systems close to the phase transition, where local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.

  12. Adiabatic optimization versus diffusion Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Jarret, Michael; Jordan, Stephen P.; Lackey, Brad

    2016-10-01

    Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1- and L2-normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.

  13. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  14. MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT

    SciTech Connect

    J. S. HENDRICK; G. W. MCKINNEY; L. J. COX

    2000-01-01

    The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNP, a general-purpose, 3-dimensional, time-dependent, continuous-energy, fully coupled Monte Carlo N-particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.

  15. Studying the information content of TMDs using Monte Carlo generators

    SciTech Connect

    Avakian, H.; Matevosyan, H.; Pasquini, B.; Schweitzer, P.

    2015-02-05

    Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.

  16. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
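
    As a sketch of how such a tool can be assembled from off-the-shelf pieces, the snippet below pairs scikit-learn's sequential feature selection and a k-nearest-neighbor classifier to rank dispersed parameters, then applies kernel density estimation to the failing runs; the data, labels, and settings are invented for illustration and are not the authors' tool.

    ```python
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neighbors import KernelDensity, KNeighborsClassifier

    rng = np.random.default_rng(0)

    # Hypothetical Monte Carlo dispersion data: 500 runs x 20 dispersed
    # parameters, labeled 1 where a failure criterion was violated.
    X = rng.normal(size=(500, 20))
    y = (X[:, 2] + 0.8 * X[:, 7] > 1.5).astype(int)  # failures driven by 2 and 7

    # Sequential feature selection wrapped around a k-nearest-neighbor classifier
    knn = KNeighborsClassifier(n_neighbors=5)
    sfs = SequentialFeatureSelector(knn, n_features_to_select=3).fit(X, y)
    important = np.flatnonzero(sfs.get_support())
    print("parameters flagged as failure drivers:", important)

    # Kernel density estimate of a flagged parameter, conditioned on failure,
    # highlights the region of parameter space to avoid.
    kde = KernelDensity(bandwidth=0.3).fit(X[y == 1][:, [important[0]]])
    grid = np.linspace(-3, 3, 7)[:, None]
    print(np.exp(kde.score_samples(grid)).round(3))
    ```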

  17. Monte Carlo and detector simulation in OOP (Object-Oriented Programming)

    SciTech Connect

    Atwood, W.B.; Blankenbecler, R.; Kunz, P. ); Burnett, T.; Storr, K.M. . ECP Div.)

    1990-10-01

    Object-Oriented Programming techniques are explored with an eye toward applications in High Energy Physics codes. Two prototype examples are given: McOOP (a particle Monte Carlo generator) and GISMO (a detector simulation/analysis package).

  18. Mcfast, a Parameterized Fast Monte Carlo for Detector Studies

    NASA Astrophysics Data System (ADS)

    Boehnlein, Amber S.

    McFast is a modularized and parameterized fast Monte Carlo program designed to generate physics analysis information for different detector configurations and subdetector designs. McFast is based on simple geometrical object definitions and includes hit generation, parameterized track generation, vertexing, a muon system, electromagnetic calorimetry, and a trigger framework for physics studies. Auxiliary tools include a geometry editor, visualization, and an I/O system.

  19. Monte Carlo inversion of seismic data

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The analytic solution to the linear inverse problem provides estimates of the uncertainty of the solution in terms of standard deviations of corrections to a particular solution, resolution of parameter adjustments, and information distribution among the observations. It is shown that Monte Carlo inversion, when properly executed, can provide all the same kinds of information for nonlinear problems. Proper execution requires a relatively uniform sampling of all possible models. The expense of performing Monte Carlo inversion generally requires strategies to improve the probability of finding passing models. Such strategies can lead to a very strong bias in the distribution of models examined unless great care is taken in their application.
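
    A minimal sketch of the strategy described above, assuming a toy two-parameter forward model: sample models uniformly over the whole space, keep those that pass a misfit criterion, and read each parameter's uncertainty from the spread of the passing set.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical forward problem: travel times through a two-layer medium
    def forward(v1, v2):
        return np.stack([1.0 / v1, 1.0 / v1 + 2.0 / v2], axis=-1)

    observed = forward(2.0, 3.0)      # synthetic "data" from a known model
    tolerance = 0.02                  # RMS misfit defining a passing model

    # Relatively uniform sampling of all possible models
    models = rng.uniform([1.0, 1.0], [4.0, 4.0], size=(100_000, 2))
    pred = forward(models[:, 0], models[:, 1])
    rms = np.sqrt(((pred - observed) ** 2).mean(axis=1))
    passing = models[rms < tolerance]

    # The spread of passing models plays the role of the linear theory's
    # standard deviations, mapping the uncertainty of each parameter.
    print(f"{len(passing)} passing models out of {len(models)}")
    print("v1 in [%.2f, %.2f]" % (passing[:, 0].min(), passing[:, 0].max()))
    print("v2 in [%.2f, %.2f]" % (passing[:, 1].min(), passing[:, 1].max()))
    ```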

  20. Parallel Markov chain Monte Carlo simulations.

    PubMed

    Ren, Ruichao; Orkoulas, G

    2007-06-07

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing are explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation as the number of processors increases. Parallel simulation results for the two-dimensional lattice gas model show a substantial reduction of simulation time for systems of moderate and large size.

  1. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  2. The Rational Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Michael

    2006-12-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin Wall plot comparing the costs of all popular fermion formulations.

  3. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  4. Parallel Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ren, Ruichao; Orkoulas, G.

    2007-06-01

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing are explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation as the number of processors increases. Parallel simulation results for the two-dimensional lattice gas model show a substantial reduction of simulation time for systems of moderate and large size.

  5. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  6. Monte Carlo simulations of Protein Adsorption

    NASA Astrophysics Data System (ADS)

    Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges

    2008-03-01

    Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules, such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix-to-sheet or helix-to-random-coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of the adsorption of a four-helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical-ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.

  7. Finding Planet Nine: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-06-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30°, and an argument of perihelion of 150°. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters, focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node, and two of them are compatible with an apsidal anti-alignment scenario. In addition, after studying the current statistics of ETNOs, a cautionary note on the robustness of the perihelia clustering is presented.
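
    The sketch below illustrates the flavor of such a synthetic-population approach under strong simplifications: it fixes the nominal elements quoted above, draws only the longitude of the ascending node at random (an assumption; the actual study varies more parameters), and histograms the resulting aphelion directions in ecliptic coordinates.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 100_000

    # Nominal elements from the abstract; the unconstrained angle is drawn
    # uniformly (an illustrative assumption).
    inc = np.radians(30.0)                        # inclination
    omega = np.radians(150.0)                     # argument of perihelion
    Omega = rng.uniform(0.0, 2.0 * np.pi, N)      # longitude of ascending node

    # Unit vector toward perihelion in ecliptic coordinates; aphelion is opposite.
    px = np.cos(Omega) * np.cos(omega) - np.sin(Omega) * np.sin(omega) * np.cos(inc)
    py = np.sin(Omega) * np.cos(omega) + np.cos(Omega) * np.sin(omega) * np.cos(inc)
    pz = np.sin(omega) * np.sin(inc)
    lon = np.degrees(np.arctan2(-py, -px)) % 360.0   # ecliptic longitude at aphelion
    lat = np.degrees(np.arcsin(-pz))                 # ecliptic latitude at aphelion

    # Histogram of aphelion longitudes shows where the synthetic population clusters.
    counts, edges = np.histogram(lon, bins=36, range=(0, 360))
    print("most populated 10-degree longitude bin starts at:", edges[counts.argmax()])
    ```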

  8. Markov Chain Monte Carlo and Irreversibility

    NASA Astrophysics Data System (ADS)

    Ottobre, Michela

    2016-06-01

    Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.

  9. Analysis and design of photobioreactors for microalgae production II: experimental validation of a radiation field simulator based on a Monte Carlo algorithm.

    PubMed

    Heinrich, Josué Miguel; Niizawa, Ignacio; Botta, Fausto Adrián; Trombert, Alejandro Raúl; Irazoqui, Horacio Antonio

    2012-01-01

    In a previous study, we developed a methodology to assess the intrinsic optical properties governing the radiation field in algae suspensions. With these properties at our disposal, a Monte Carlo simulation program is developed and used in this study as a predictive, autonomous program applied to the simulation of experiments that reproduce the illumination conditions commonly found in large-scale microalgae production, especially in open ponds such as raceway ponds. The simulation module is validated by comparing experimental measurements made on artificially illuminated algal suspensions with the predictions of the Monte Carlo program. The experiment deals with a situation that resembles that of an open pond or raceway pond, except that, for convenience, the experimental arrangement appears as if those reactors were turned upside down. It serves the purpose of assessing to what extent scattering phenomena are important for predicting the spatial distribution of the radiant energy density. The simulation module developed can be applied to compute the local energy density inside photobioreactors with the goal of optimizing their design and operating conditions.

  10. Computer Monte Carlo simulation in quantitative resource estimation

    USGS Publications Warehouse

    Root, D.H.; Menzie, W.D.; Scott, W.A.

    1992-01-01

    The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
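
    A minimal sketch of step 3, with invented stand-ins for the expert estimates and grade-tonnage models (real assessments use empirical distributions and also model the grade-tonnage dependencies noted above):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 10_000

    # Step 2 stand-in: assumed probabilities for the number of undiscovered
    # deposits in a tract (hypothetical values).
    n_deposits = rng.choice([0, 1, 2, 3, 4], size=N, p=[0.3, 0.3, 0.2, 0.15, 0.05])

    # Step 3: combine with grade-tonnage models (illustrative lognormals).
    metal = np.zeros(N)
    for i, n in enumerate(n_deposits):
        tonnage = rng.lognormal(mean=13.0, sigma=1.5, size=n)  # tonnes of ore
        grade = rng.lognormal(mean=-5.5, sigma=0.6, size=n)    # metal fraction
        metal[i] = np.sum(tonnage * grade)

    # Probability distribution of contained metal, read off as exceedance quantiles
    for q in (0.9, 0.5, 0.1):
        print(f"P{int(q * 100)}: {np.quantile(metal, 1 - q):,.0f} t of metal")
    ```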

  11. Fixed-node diffusion Monte Carlo method for lithium systems

    NASA Astrophysics Data System (ADS)

    Rasch, K. M.; Mitas, L.

    2015-07-01

    We study lithium systems over a range of sizes, specifically the atomic anion, dimer, metallic cluster, and body-centered-cubic crystal, using the fixed-node diffusion Monte Carlo method. The focus is on analysis of the fixed-node errors of each system, and for that purpose we test several orbital sets in order to provide the most accurate nodal hypersurfaces. The calculations include both core and valence electrons in order to avoid any possible impact of pseudopotentials. To quantify the fixed-node errors, we compare our results to other highly accurate calculations and, wherever available, to experimental observations. The results for these Li systems show that the fixed-node diffusion Monte Carlo method achieves accurate total energies, recovers 96-99% of the correlation energy, and estimates binding energies with errors bounded by 0.1 eV/atom.

  12. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  13. APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula

    SciTech Connect

    Hwang, M.; Bae, S.; Chung, B. D.

    2012-07-01

    An analysis of the uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined by the PIRT results from the BEMUSE project. The Monte Carlo method shows that the 95th-percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. The extra margin from Wilks' formula over the true 95th-percentile PCT from the Monte Carlo method was rather large. Even using the 3rd-order formula, the value calculated with Wilks' formula is nearly 100 K above the true value. It is shown that, with ever-increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
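
    For reference, the one-sided Wilks sample sizes behind this comparison can be computed directly; a short sketch (the 59/93/124 run counts for the 95%/95% first- through third-order formulas are the standard values):

    ```python
    from math import comb

    def wilks_sample_size(order=1, coverage=0.95, confidence=0.95):
        """Smallest N such that the order-th largest of N runs bounds the
        `coverage` quantile with probability `confidence` (one-sided Wilks)."""
        n = order
        while True:
            n += 1
            beta = 1.0 - sum(comb(n, k) * coverage**(n - k) * (1 - coverage)**k
                             for k in range(order))
            if beta >= confidence:
                return n

    for m in (1, 2, 3):
        print(f"order {m}: N = {wilks_sample_size(m)}")
    # order 1: N = 59, order 2: N = 93, order 3: N = 124
    ```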

  14. Comparison of deterministic and Monte Carlo methods in shielding design.

    PubMed

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, and can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison of the capability of both Monte Carlo and deterministic methods in day-to-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
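
    As an illustration of the deterministic side, a point-kernel calculation with a build-up correction looks like the sketch below; the attenuation coefficient and the Taylor-form build-up coefficients are hypothetical, not MicroShield's data.

    ```python
    import numpy as np

    # Point-kernel estimate with a build-up factor B: the uncollided flux
    # attenuates as exp(-mu*x); B corrects for scattered photons.
    mu = 0.06                          # attenuation coefficient, 1/cm (assumed)
    x = np.array([5.0, 10.0, 20.0])    # slab thicknesses, cm
    S, r = 1.0e8, 100.0                # source strength (photons/s), distance (cm)

    uncollided = S * np.exp(-mu * x) / (4.0 * np.pi * r**2)

    # Taylor-form build-up: B(mu*x) = A*exp(-a1*mu*x) + (1-A)*exp(-a2*mu*x),
    # with B(0) = 1; the fit coefficients below are invented for illustration.
    A, a1, a2 = 10.0, -0.10, 0.03
    mux = mu * x
    B = A * np.exp(-a1 * mux) + (1 - A) * np.exp(-a2 * mux)

    for xi, phi0, b in zip(x, uncollided, B):
        print(f"x={xi:5.1f} cm  uncollided={phi0:.3e}  with build-up={phi0 * b:.3e}")
    ```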

  15. Autocorrelation and Dominance Ratio in Monte Carlo Criticality Calculations

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Kornreich, Drew E.

    2003-11-15

    The cycle-to-cycle correlation (autocorrelation) in Monte Carlo criticality calculations is analyzed in relation to the dominance ratio of fission kernels. The mathematical analysis focuses on how the eigenfunctions of a fission kernel decay when operated on by the cycle-to-cycle error propagation operator of the Monte Carlo stationary source distribution. The analytical results can be summarized as follows: when the dominance ratio of a fission kernel is close to unity, autocorrelation of the k-effective tallies is weak and may be negligible, while the autocorrelation of the source distribution is strong and decays slowly. The practical implication is that when one analyzes a critical reactor with a large dominance ratio by Monte Carlo methods, the confidence interval estimation of the fission rate and other quantities at individual locations must account for the strong autocorrelation. Numerical results are presented for sample problems with dominance ratios of 0.85-0.99, where Shannon and relative entropies are utilized to exclude the influence of initial nonstationarity.

  16. Monte Carlo simulations of electron transport in strongly attaching gases

    NASA Astrophysics Data System (ADS)

    Petrovic, Zoran; Miric, Jasmina; Simonovic, Ilija; Bosnjakovic, Danko; Dujko, Sasa

    2016-09-01

    Extensive loss of electrons in strongly attaching gases imposes significant difficulties on Monte Carlo simulations at low electric field strengths. In order to compensate for such losses, some kind of rescaling procedure must be used. In this work, we discuss two rescaling procedures for Monte Carlo simulations of electron transport in strongly attaching gases: (1) discrete rescaling, and (2) continuous rescaling. The discrete rescaling procedure is based on duplication of electrons randomly chosen from the remaining swarm at certain discrete time steps. The continuous rescaling procedure employs a dynamically defined fictitious ionization process with the constant collision frequency chosen to be equal to the attachment collision frequency. These procedures should not in any way modify the distribution function. Monte Carlo calculations of transport coefficients for electrons in SF6 and CF3I are performed over a wide range of electric field strengths. However, special emphasis is placed upon the analysis of transport phenomena in the limit of low electric fields, where the transport properties are strongly affected by electron attachment. Two important phenomena arise: (1) the reduction of the mean energy with increasing E/N for electrons in SF6, and (2) the occurrence of negative differential conductivity in the bulk drift velocity of electrons in both SF6 and CF3I.
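
    A toy version of the discrete rescaling idea, assuming a constant attachment collision frequency and ignoring all other collision processes (illustrative only; the duplication trigger here is a simple population threshold rather than fixed time steps):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Each electron survives a time step with probability exp(-nu_att*dt).
    # Discrete rescaling duplicates randomly chosen survivors to restore the
    # sample size (all numbers are invented stand-ins).
    nu_att, dt, n_target = 2.0e7, 1.0e-9, 10_000
    energies = rng.gamma(2.0, 1.5, n_target)   # stand-in electron energies, eV

    for step in range(1000):
        survived = rng.random(energies.size) < np.exp(-nu_att * dt)
        energies = energies[survived]
        if energies.size < n_target // 2:      # rescale: duplicate random survivors
            extra = rng.choice(energies, n_target - energies.size)
            energies = np.concatenate([energies, extra])

    # Duplication restores statistics without altering the normalized
    # energy distribution of the swarm.
    print(f"{energies.size} electrons, mean energy {energies.mean():.2f} eV")
    ```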

  17. Improved diffusion coefficients generated from Monte Carlo codes

    SciTech Connect

    Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.

    2013-07-01

    Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters, including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization: whether to weight fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices, with an L2-norm error of 3.6%. This error is reduced significantly, to 0.27%, when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)

  18. Accelerating Monte Carlo power studies through parametric power estimation.

    PubMed

    Ueckert, Sebastian; Karlsson, Mats O; Hooker, Andrew C

    2016-04-01

    Estimating the power for a non-linear mixed-effects model-based analysis is challenging due to the lack of a closed-form analytic expression. Often, computationally intensive Monte Carlo studies need to be employed to evaluate the power of a planned experiment. This is especially time consuming if full power versus sample size curves are to be obtained. A novel parametric power estimation (PPE) algorithm utilizing the theoretical distribution of the alternative hypothesis is presented in this work. The PPE algorithm estimates the unknown noncentrality parameter in the theoretical distribution from a limited number of Monte Carlo simulations and estimations. The estimated parameter scales linearly with study size, allowing quick generation of the full power versus study size curve. A comparison of the PPE with the classical, purely Monte Carlo-based power estimation (MCPE) algorithm for five diverse pharmacometric models showed an excellent agreement between both algorithms, with a low bias of less than 1.2% and higher precision for the PPE. The power extrapolated from a specific study size was in very good agreement with power curves obtained with the MCPE algorithm. PPE represents a promising approach to accelerate the power calculation for non-linear mixed-effects models.
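
    A sketch of the PPE idea using a noncentral chi-square approximation for the likelihood-ratio test statistic (the settings and pilot statistics are invented; the cited work applies this to non-linear mixed-effects models):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Estimate the noncentrality parameter from a few Monte Carlo replicates,
    # then scale it linearly with study size to trace the full power curve.
    df, alpha, n_pilot, N_pilot = 1, 0.05, 30, 50   # illustrative settings

    # Stand-in for n_pilot LRT statistics from simulated/re-estimated pilot studies
    lrt = rng.noncentral_chisquare(df, 6.0, size=n_pilot)

    nc_hat = max(lrt.mean() - df, 0.0)              # moment estimate of noncentrality
    crit = stats.chi2.ppf(1 - alpha, df)            # critical value under H0

    for N in (25, 50, 100, 200):
        nc_N = nc_hat * N / N_pilot                 # linear scaling with study size
        power = 1 - stats.ncx2.cdf(crit, df, nc_N)
        print(f"N={N:4d}  estimated power = {power:.3f}")
    ```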

  19. Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations

    SciTech Connect

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to give TSUNAMI-3D the capability to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.

  20. Noninvasive optical measurement of bone marrow lesions: a Monte Carlo study on visible human dataset

    NASA Astrophysics Data System (ADS)

    Su, Yu; Li, Ting

    2016-03-01

    Bone marrow is both the main hematopoietic organ and an important immune organ. Bone marrow lesions (BMLs) may cause a series of severe complications and even myeloma. The traditional diagnosis of BMLs relies mostly on bone marrow biopsy/puncture, and sometimes MRI or X-ray imaging, which are either invasive and dangerous, or ionizing and costly. A diagnostic technology that is noninvasive, safe, capable of real-time continuous detection, and low in cost is needed. Here we report our preliminary exploration of the feasibility of using near-infrared spectroscopy (NIRS) in the clinical diagnosis of BMLs through a Monte Carlo simulation study. We simulated and visualized the light propagation in the bone marrow quantitatively with a Monte Carlo simulation software package for 3D voxelized media and the Visible Chinese Human data set, which faithfully represents human anatomy. The results indicate that bone marrow has significant effects on light propagation. Based on a sequence of simulations and data analysis, the optimal source-detector separation was narrowed down to 2.8-3.2 cm; at this separation the spatial sensitivity distribution of NIRS covers most of the bone marrow region with a high signal-to-noise ratio. The layout of the sources and detectors was optimized as well. This study investigated light transport in the spine, addressing the BML detection issue, and demonstrates the theoretical feasibility of noninvasive NIRS detection of BMLs. The optimized probe design for a future NIRS-based BML detector is also provided.

  1. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  2. Monte Carlo Simulation of Counting Experiments.

    ERIC Educational Resources Information Center

    Ogden, Philip M.

    A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
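
    A compact sketch of that construction: divide the counting interval into k subintervals, each holding at most one count, and watch the binomial total approach Poisson statistics as k grows (parameters are arbitrary illustrations).

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Subdivide a counting interval into k Bernoulli subintervals (at most one
    # count each); as k grows, the total count approaches a Poisson distribution.
    mean_counts, trials = 4.0, 100_000
    for k in (8, 64, 4096):
        counts = rng.binomial(k, mean_counts / k, size=trials)
        print(f"k={k:5d}  var/mean = {counts.var() / counts.mean():.3f}")
    # var/mean -> 1, the Poisson signature, as the subdivision is refined
    ```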

  3. A comparison of Monte Carlo generators

    SciTech Connect

    Golan, Tomasz

    2015-05-15

    A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π+ two-dimensional energy vs. cosine distribution.

  4. Monte Carlo studies of uranium calorimetry

    SciTech Connect

    Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.

    1985-01-01

    Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references.

  5. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)

  6. Search and Rescue Monte Carlo Simulation.

    DTIC Science & Technology

    1985-03-01

    …(confidence interval) of the number of lives saved. A single-page output and computer graphic present the information to the user in an easily understood format. The confidence interval can be reduced by making additional runs of this Monte Carlo model. (Author)

  7. Monte Carlo studies of ARA detector optimization

    NASA Astrophysics Data System (ADS)

    Stockham, Jessica

    2013-04-01

    The Askaryan Radio Array (ARA) is a neutrino detector deployed in the Antarctic ice sheet near the South Pole. The array is designed to detect ultra high energy neutrinos in the range of 0.1-10 EeV. Detector optimization is studied using Monte Carlo simulations.

  8. Inchworm Monte Carlo for exact non-adiabatic dynamics. II. Benchmarks and comparison with established methods

    NASA Astrophysics Data System (ADS)

    Chen, Hsing-Ta; Cohen, Guy; Reichman, David R.

    2017-02-01

    In this second paper of a two part series, we present extensive benchmark results for two different inchworm Monte Carlo expansions for the spin-boson model. Our results are compared to previously developed numerically exact approaches for this problem. A detailed discussion of convergence and error propagation is presented. Our results and analysis allow for an understanding of the benefits and drawbacks of inchworm Monte Carlo compared to other approaches for exact real-time non-adiabatic quantum dynamics.

  9. Time-step limits for a Monte Carlo Compton-scattering method

    SciTech Connect

    Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B

    2009-01-01

    We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.

  10. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the geant4 Monte Carlo code

    PubMed Central

    Guan, Fada; Peeler, Christopher; Bronk, Lawrence; Geng, Changran; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Grosshans, David; Mohan, Radhe; Titt, Uwe

    2015-01-01

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the geant4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation for selecting an appropriate LET quantity from geant4 simulations to correlate with the biological effectiveness of therapeutic protons. Methods: The authors developed a particle-tracking-step-based strategy to calculate the average LET quantities (track-averaged LET, LETt, and dose-averaged LET, LETd) using geant4 for different tracking step size limits. A step size limit refers to the maximum allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LETt and LETd of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information, including fluence spectra and dose spectra of the energy deposition per step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra by combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LETt but significant for LETd. This resulted from differences in the energy deposition per step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in geant4 can result in incorrect LETd calculation results in the dose plateau region for small step limits. The erroneous LETd results can be attributed to the algorithm used to determine fluctuations in energy deposition along the
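
    The two averages at issue are simple functionals of the per-step energy deposits; a sketch with synthetic steps (not geant4 output) makes the definitions concrete.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic per-step quantities: eps_i = energy deposited in step i (keV),
    # l_i = step length (um); the step LET is eps_i / l_i.
    l = rng.uniform(1.0, 500.0, 10_000)      # step lengths, um
    eps = rng.gamma(2.0, 0.4, 10_000) * l    # energy deposited per step, keV

    let = eps / l                            # keV/um, per step
    let_t = let.mean()                       # track average: unweighted over steps
    let_d = (eps * let).sum() / eps.sum()    # dose average: energy-weighted

    print(f"LETt = {let_t:.2f} keV/um,  LETd = {let_d:.2f} keV/um")
    # LETd up-weights steps with large deposits, which is why it is far more
    # sensitive to the tracking step size limit than LETt.
    ```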

  11. Composite sequential Monte Carlo test for post-market vaccine safety surveillance.

    PubMed

    Silva, Ivair R

    2016-04-30

    Group sequential hypothesis testing is now widely used to analyze prospective data. If Monte Carlo simulation is used to construct the signaling threshold, the challenge is how to manage the type I error probability for each of the multiple tests without losing control of the overall significance level. This paper introduces a valid method for true management of the alpha spending at each of a sequence of Monte Carlo tests. The method also enables the use of a sequential simulation strategy for each Monte Carlo test, which is useful for saving computational execution time. Thus, the proposed procedure allows for sequential Monte Carlo tests in sequential analysis, which is why it is called a 'composite sequential' test. An upper bound for the potential power loss from the proposed method is deduced. The composite sequential design is illustrated through an application to post-market vaccine safety surveillance data.

  12. Quasimodes instability analysis of uncertain asymmetric rotor system based on 3D solid element model

    NASA Astrophysics Data System (ADS)

    Zuo, Yanfei; Wang, Jianjun; Ma, Weimeng

    2017-03-01

    Uncertainties are considered in the equation of motion of an asymmetric rotor system. Based on Hill's determinant method, quasimodes stability analysis with uncertain parameters is used to obtain stochastic boundaries of the unstable regions. First, a 3D finite element rotor model was built in the rotating frame with four parameterized coefficients, which are treated as random parameters representing the uncertainties in the rotor system. Then the influence of the uncertain coefficients on the distribution of the unstable region boundaries is analyzed. The results show that the uncertain parameters affect the size, boundary, and number of unstable regions in different ways. Finally, statistics of the minimum and maximum spin speeds of the unstable regions were obtained by Monte Carlo simulation. The method is suitable for real engineering rotor systems, because rotors of arbitrary configuration can be modeled with 3D finite elements.

  13. Monte Carlo analysis of megavoltage x-ray interaction-induced signal and noise in cadmium tungstate detectors for cargo container inspection

    NASA Astrophysics Data System (ADS)

    Kim, J.; Park, J.; Kim, J.; Kim, D. W.; Yun, S.; Lim, C. H.; Kim, H. K.

    2016-11-01

    For the purpose of designing an x-ray detector system for cargo container inspection, we have investigated the energy-absorption signal and noise in CdWO4 detectors for megavoltage x-ray photons. We describe the signal and noise measures, such as quantum efficiency, average energy absorption, Swank noise factor, and detective quantum efficiency (DQE), in terms of energy moments of the absorbed energy distributions (AEDs) in a detector. The AED is determined by using a Monte Carlo simulation. The results show that the signal-related measures increase with detector thickness. However, the improvement of the Swank noise factor with increasing thickness is weak, and this energy-absorption noise characteristic dominates the DQE performance. The energy-absorption noise mainly limits the signal-to-noise performance of CdWO4 detectors operated with megavoltage x-ray beams.
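
    The moment-based measures are compact enough to sketch: given an absorbed-energy distribution on energy bins, the Swank factor is I = M1^2/(M0*M2), and DQE(0) is commonly approximated as the quantum efficiency times I. The code below is an illustrative sketch with invented numbers, not the authors' simulation.

```python
import numpy as np

def swank_and_dqe0(bin_energy, counts, quantum_efficiency):
    """Swank information factor I = M1**2 / (M0 * M2), with Mk the k-th
    moments of the absorbed-energy distribution (AED) of interacting
    photons, and the common zero-frequency approximation DQE(0) ~ QE * I."""
    e = np.asarray(bin_energy, dtype=float)
    p = np.asarray(counts, dtype=float)
    m0, m1, m2 = p.sum(), (p * e).sum(), (p * e ** 2).sum()
    swank = m1 ** 2 / (m0 * m2)
    return swank, quantum_efficiency * swank

# Toy AED: partial-energy (escape) events broaden the AED and pull I below 1.
print(swank_and_dqe0([0.2, 0.5, 1.0], [30, 10, 60], quantum_efficiency=0.4))
```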

  14. Influence of a fat layer on the near infrared spectra of human muscle: quantitative analysis based on two-layered Monte Carlo simulations and phantom experiments

    NASA Technical Reports Server (NTRS)

    Yang, Ye; Soyemi, Olusola O.; Landry, Michelle R.; Soller, Babs R.

    2005-01-01

    The influence of fat thickness on the diffuse reflectance spectra of muscle in the near infrared (NIR) region is studied by Monte Carlo simulations of a two-layer structure and with phantom experiments. A polynomial relationship was established between the fat thickness and the detected diffuse reflectance. The influence of a range of optical coefficients (absorption and reduced scattering) for fat and muscle over the known range of human physiological values was also investigated. Subject-to-subject variation in the fat optical coefficients and thickness can be ignored if the fat thickness is less than 5 mm. A method was proposed to correct for the influence of fat thickness. © 2005 Optical Society of America.
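
    A minimal sketch of the fitting step, with made-up numbers (the paper's calibration data are not reproduced here): a low-order polynomial is fitted to reflectance as a function of fat thickness and then used for prediction or correction.

```python
import numpy as np

# Hypothetical calibration points: fat thickness (mm) vs. detected diffuse
# reflectance at one wavelength (arbitrary units); not the paper's data.
thickness_mm = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
reflectance = np.array([0.42, 0.47, 0.54, 0.59, 0.62, 0.63])

coeffs = np.polyfit(thickness_mm, reflectance, deg=2)  # low-order polynomial
print(np.polyval(coeffs, 5.0))  # predicted reflectance for a 5 mm fat layer
```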

  15. Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy

    SciTech Connect

    Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W

    2002-02-19

    Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to a critical normal tissue, such as bone marrow, kidneys, and lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different from those of conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar) and 3D (SPECT, or single photon emission computed tomography) images to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy and describes early validation and implementation results of Monte Carlo simulations.

  16. SU-E-T-235: Monte Carlo Analysis of the Dose Enhancement in the Scalp of Patients Due to Titanium Plate Backscatter During Post-Operative Radiotherapy

    SciTech Connect

    Hardin, M; Elson, H; Lamba, M; Wolf, E; Warnick, R

    2014-06-01

    Purpose: To quantify the clinically observed dose enhancement adjacent to cranial titanium fixation plates during post-operative radiotherapy. Methods: Irradiation of a titanium burr hole cover was simulated using the Monte Carlo code MCNPX for a 6 MV photon spectrum to investigate backscatter dose enhancement due to increased production of secondary electrons within the titanium plate. The simulated plate was placed 3 mm deep in a water phantom, and dose deposition was tallied for 0.2 mm thick cells adjacent to the entrance and exit sides of the plate. These results were compared to a simulation excluding the presence of the titanium to calculate the relative dose enhancement on the entrance and exit sides of the plate. To verify the simulated results, two titanium burr hole covers (Synthes, Inc. and Biomet, Inc.) were irradiated with 6 MV photons in a solid water phantom containing GafChromic MD-55 film. The phantom was irradiated on a Varian 21EX linear accelerator at multiple gantry angles (0-180 degrees) to analyze the angular dependence of the backscattered radiation. Relative dose enhancement was quantified using computer software. Results: Monte Carlo simulations indicate a relative difference of 26.4% and 7.1% on the entrance and exit sides of the plate, respectively. Film dosimetry results using a similar geometry indicate a relative difference of 13% and -10% on the entrance and exit sides of the plate, respectively. Relative dose enhancement on the entrance side of the plate decreased with increasing gantry angle from 0 to 180 degrees. Conclusion: Film and simulation results demonstrate an increase in dose to structures immediately adjacent to cranial titanium fixation plates. Increased beam obliquity has been shown to alleviate the dose enhancement to some extent. These results are consistent with clinically observed effects.

  17. Monte Carlo simulation of light propagation in the adult brain

    NASA Astrophysics Data System (ADS)

    Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter

    2004-06-01

    When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) with a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the light absorption and dispersion coefficients of the material in each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was further increased by using a constant optical path length for the photons that was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues by the relative blood volume within each tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extra-cerebral contamination are included.
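
    The DPF itself is a simple ratio, the mean detected photon path length over the source-detector separation. The sketch below assumes hypothetical path-length samples standing in for Monte Carlo output, not the authors' MRI-based simulation.

```python
import numpy as np

def differential_pathlength_factor(path_lengths_cm, separation_cm=4.5):
    """DPF = mean optical path length of the detected photons divided by the
    source-detector separation (the study reports ~6.15 at 4.5 cm)."""
    return np.mean(path_lengths_cm) / separation_cm

# Toy usage with made-up path lengths standing in for Monte Carlo output.
rng = np.random.default_rng(0)
print(differential_pathlength_factor(rng.gamma(20.0, 1.4, size=10000)))
```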

  18. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the GEANT4 Monte Carlo code

    SciTech Connect

    Guan, Fada; Peeler, Christopher; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Mohan, Radhe; Titt, Uwe; Bronk, Lawrence; Geng, Changran; Grosshans, David

    2015-11-15

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from GEANT4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LETt and dose-averaged LET, LETd) using GEANT4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LETt and LETd of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LETt but significant for LETd. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT4 can result in incorrect LETd calculation results in the dose plateau region for small step limits. The erroneous LETd results can be attributed to the algorithm to

  19. Monte Carlo Particle Transport: Algorithm and Performance Overview

    SciTech Connect

    Gentile, N; Procassini, R; Scott, H

    2005-06-02

    Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.

  20. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method that establishes cross-prediction models based on determinate normal samples and analyzes the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that it outperformed standard Monte Carlo outlier detection in outlier diagnosis. After the detected outliers were removed, the root mean square error of prediction for validation by Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
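
    A generic version of the Monte Carlo cross-prediction scheme that such detectors build on can be sketched as follows (plain least-squares models and our own variable names, not the authors' implementation): each sample accumulates out-of-subset prediction errors over many random splits, and samples with an unusually large error mean or spread are flagged as dubious.

```python
import numpy as np

def mc_outlier_scores(X, y, n_rounds=500, train_frac=0.7, seed=0):
    """Monte Carlo cross-prediction: fit a least-squares model on many random
    subsets and collect each sample's out-of-subset absolute errors; return
    the per-sample mean and spread of those errors as outlier scores."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errs = [[] for _ in range(n)]
    for _ in range(n_rounds):
        idx = rng.permutation(n)
        tr, te = idx[:int(train_frac * n)], idx[int(train_frac * n):]
        coef, *_ = np.linalg.lstsq(np.c_[X[tr], np.ones(tr.size)], y[tr],
                                   rcond=None)
        pred = np.c_[X[te], np.ones(te.size)] @ coef
        for i, e in zip(te, np.abs(pred - y[te])):
            errs[i].append(e)
    return (np.array([np.mean(e) for e in errs]),
            np.array([np.std(e) for e in errs]))

# Toy usage: one planted outlier should receive the largest mean error.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=40)
y[7] += 5.0
print(np.argmax(mc_outlier_scores(X, y)[0]))  # expected: 7
```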

  1. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users, about 600 times a month, accounting for nearly 200 hours of CDC-7600 time.

  2. Monte Carlo simulations on SIMD computer architectures

    SciTech Connect

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-03-01

    Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique on single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
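
    The lattice-partitioning idea maps naturally onto array languages as well. Below is a minimal checkerboard Metropolis sweep for the nearest-neighbor Ising model in NumPy (our illustration, not the MasPar code): all sites of one color are updated simultaneously because their neighbors all have the other color.

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D nearest-neighbor Ising model using the
    checkerboard (geometric) decomposition: every site of one color can be
    updated in parallel, mirroring the lattice partitioning used on SIMD
    processor arrays."""
    ii, jj = np.indices(spins.shape)
    for color in (0, 1):
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr                 # energy cost of a flip (J = 1)
        flip = (((ii + jj) % 2 == color) &
                (rng.random(spins.shape) < np.exp(-beta * dE)))
        spins[flip] *= -1

rng = np.random.default_rng(1)
spins = rng.choice(np.array([-1, 1]), size=(64, 64))
for _ in range(100):
    checkerboard_sweep(spins, beta=0.5, rng=rng)
print(abs(spins.mean()))   # magnetization per site
```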

  3. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  4. Fission Matrix Capability for MCNP Monte Carlo

    NASA Astrophysics Data System (ADS)

    Brown, Forrest; Carney, Sean; Kiedrowski, Brian; Martin, William

    2014-06-01

    We describe recent experience and results from implementing a fission matrix capability into the MCNP Monte Carlo code. The fission matrix can be used to provide estimates of the fundamental mode fission distribution, the dominance ratio, the eigenvalue spectrum, and higher mode forward and adjoint eigenfunctions of the fission neutron source distribution. It can also be used to accelerate the convergence of the power method iterations and to provide basis functions for higher-order perturbation theory. The higher-mode fission sources can be used in MCNP to determine higher-mode forward fluxes and tallies, and work is underway to provide higher-mode adjoint-weighted fluxes and tallies. Past difficulties and limitations of the fission matrix approach are overcome with a new sparse representation of the matrix, permitting much larger and more accurate fission matrix representations. The new fission matrix capabilities provide a significant advance in the state-of-the-art for Monte Carlo criticality calculations.
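
    Once the fission matrix has been tallied, the post-processing is standard linear algebra. The sketch below (ours, using a dense matrix for brevity, whereas the paper stresses a sparse representation) extracts the fundamental mode, k-effective, and the dominance ratio.

```python
import numpy as np

def fundamental_mode(F, tol=1e-10, max_iter=10000):
    """Power iteration on a tallied fission matrix F: returns the
    fundamental-mode source shape, k-effective, and the dominance ratio
    |k2|/|k1| estimated from the full eigenvalue spectrum."""
    s = np.full(F.shape[0], 1.0 / F.shape[0])
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k = s_new.sum()          # eigenvalue estimate for a sum-normalized source
        s_new /= k
        done = np.abs(s_new - s).sum() < tol
        s = s_new
        if done:
            break
    eig = np.sort(np.abs(np.linalg.eigvals(F)))[::-1]
    return s, k, eig[1] / eig[0]

# Toy 2x2 fission matrix: eigenvalues 1.1 and 0.6, so k = 1.1, DR ~ 0.545.
print(fundamental_mode(np.array([[0.9, 0.3], [0.2, 0.8]])))
```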

  5. Quantum Monte Carlo applied to solids

    SciTech Connect

    Shulenburger, Luke; Mattsson, Thomas R.

    2013-12-01

    We apply diffusion quantum Monte Carlo to a broad set of solids, benchmarking the method by comparing bulk structural properties (equilibrium volume and bulk modulus) to experiment and density functional theory (DFT) based theories. The test set includes materials with many different types of binding including ionic, metallic, covalent, and van der Waals. We show that, on average, the accuracy is comparable to or better than that of DFT when using the new generation of functionals, including one hybrid functional and two dispersion corrected functionals. The excellent performance of quantum Monte Carlo on solids is promising for its application to heterogeneous systems and high-pressure/high-density conditions. Important to the results here is the application of a consistent procedure with regard to the several approximations that are made, such as finite-size corrections and pseudopotential approximations. This test set allows for any improvements in these methods to be judged in a systematic way.
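
    Extracting the bulk structural properties from such calculations is a curve-fitting exercise: fit E(V) near the minimum, read off the equilibrium volume V0 where dE/dV = 0, and evaluate B = V0 * d2E/dV2 there. The sketch below uses a quadratic fit and invented data points; production work typically uses an equation-of-state form such as Birch-Murnaghan.

```python
import numpy as np

# Hypothetical E(V) samples near equilibrium (energies in eV, volumes in A^3);
# not taken from the paper's data set.
V = np.array([60.0, 64.0, 68.0, 72.0, 76.0])
E = np.array([-5.10, -5.24, -5.29, -5.26, -5.17])

a, b, c = np.polyfit(V, E, 2)          # E(V) ~ a*V**2 + b*V + c
V0 = -b / (2.0 * a)                    # equilibrium volume, where dE/dV = 0
B = V0 * 2.0 * a * 160.2177            # bulk modulus V0 * E''(V0), eV/A^3 -> GPa
print(V0, B)
```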

  6. A Monte Carlo based nodal diffusion model for criticality analysis, and, Application of high-order cross section homogenization method to two-group nodal diffusion

    NASA Astrophysics Data System (ADS)

    Ilas, Germina

    In the first part, an accurate and fast computational method is presented as an alternative to the Monte Carlo or deterministic transport theory codes currently used to determine the subcriticality of spent fuel storage lattices. The method is capable of analyzing storage configurations with simple or complex lattice cell geometry. It is developed based on two-group nodal diffusion theory, with the nodal cross sections and discontinuity factors determined from continuous-energy Monte Carlo simulations of each unique node (spent fuel assembly type). Three different approaches are developed to estimate the node-averaged diffusion coefficient. The applicability and the accuracy of the nodal method are assessed in two-dimensional geometry through several benchmark configurations typical of the Savannah River Site. It is shown that the multiplication constant of the analyzed configurations is within 1% of the MCNP results. In the second part, the high-order cross section homogenization method, recently developed by McKinley and Rahnema, is implemented in the context of two-group nodal diffusion theory. The method corrects the generalized equivalence theory homogenization parameters for the effect of the core environment. The reconstructed fine-mesh (fuel pin) flux and power distributions are a natural byproduct of this method. The method had not previously been tested for multigroup problems, where it was assumed that the multigroup flux expansion in terms of the perturbation parameter is a convergent series. Here the applicability of the method to two-group problems is studied, and it is shown that the perturbation expansion series converges for the multigroup case. A two-group nodal diffusion code with a bilinear intra-nodal flux shape is developed for the implementation of the high-order homogenization method in the context of the generalized equivalence theory. The method is tested by using as a benchmark a core configuration typical of a BWR in slab geometry, which has large

  7. A dual resolution measurement based Monte Carlo simulation technique for detailed dose analysis of small volume organs in the skull base region

    NASA Astrophysics Data System (ADS)

    Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi

    2014-11-01

    The purpose of this study was to examine the dose distribution of a skull base tumor and surrounding critical structures in response to high dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV)=8.4 cm3] near the right 8th cranial nerve. The phantom, containing a 1.2-cm thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm3 and was sandwiched between 0.05×0.05×0.3 cm3 slices of a head phantom. A coarser 0.2×0.2×0.3 cm3 single resolution (SR) phantom was also created for comparison with the sandwich phantom. A total of 3×10^8 particle histories per beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular canal. Dose
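
    The VPTV95 metric quoted above reduces to a one-line computation on a dose grid; the helper below is an illustrative sketch with our own argument names, not code from the study.

```python
import numpy as np

def v_ptv95(dose, ptv_mask, prescription):
    """Percentage of PTV voxels receiving at least 95% of the prescribed
    dose; dose is a 3D grid and ptv_mask a boolean array of the same shape."""
    return 100.0 * np.mean(dose[ptv_mask] >= 0.95 * prescription)

# Toy usage on a noisy dose grid with a spherical "PTV".
rng = np.random.default_rng(0)
dose = np.full((50, 50, 50), 1150.0) + rng.normal(0.0, 40.0, (50, 50, 50))
x, y, z = np.indices(dose.shape)
ptv = (x - 25) ** 2 + (y - 25) ** 2 + (z - 25) ** 2 < 10 ** 2
print(v_ptv95(dose, ptv, prescription=1200.0))
```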

  8. Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy

    NASA Astrophysics Data System (ADS)

    Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

    2012-03-01

    Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). The skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis), as well as laterally asymmetric features (e.g., melanocytic invasion), were modeled in an inhomogeneous Monte Carlo model.

  9. Recovering intrinsic fluorescence by Monte Carlo modeling.

    PubMed

    Müller, Manfred; Hendriks, Benno H W

    2013-02-01

    We present a novel way to recover intrinsic fluorescence in turbid media, based on Monte Carlo generated look-up tables and making use of a diffuse reflectance measurement taken at the same location. The method has been validated on various phantoms with known intrinsic fluorescence and is benchmarked against photon-migration methods. This new method combines more flexibility in the probe design with fast reconstruction, and showed reconstruction accuracy similar to that of other reconstruction methods.

  10. Monte Carlo approach to Estrada index

    NASA Astrophysics Data System (ADS)

    Gutman, Ivan; Radenković, Slavko; Graovac, Ante; Plavšić, Dejan

    2007-09-01

    Let G be a graph on n vertices, and let λ1, λ2, …, λn be its eigenvalues. The Estrada index of G is a recently introduced molecular structure descriptor, defined as EE = ∑_{i=1}^{n} e^{λ_i}. Using a Monte Carlo approach, and treating the graph eigenvalues as random variables, we deduce approximate expressions for EE, in terms of the number of vertices and number of edges, of very high accuracy.
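
    For comparison with such approximations, the exact Estrada index is directly computable for modest graphs from the adjacency spectrum; a short sketch:

```python
import numpy as np

def estrada_index(adjacency):
    """Exact Estrada index EE = sum_i exp(lambda_i) from the adjacency
    spectrum; feasible for modest graphs and a useful check on approximate
    (n, m)-based expressions."""
    lam = np.linalg.eigvalsh(np.asarray(adjacency, dtype=float))
    return np.exp(lam).sum()

# The 4-cycle C4 has eigenvalues 2, 0, 0, -2, so EE = e^2 + 2 + e^-2 ~ 9.52.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(estrada_index(C4))
```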

  11. Accelerated Monte Carlo by Embedded Cluster Dynamics

    NASA Astrophysics Data System (ADS)

    Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.

    1991-07-01

    We present an overview of new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and of Wolff are summarized, and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model, and numerical simulations are presented for d=2, d=3, and mean field theory lattices.

  12. Treatment planning for a small animal using Monte Carlo simulation

    SciTech Connect

    Chow, James C. L.; Leung, Michael K. K.

    2007-12-15

    The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse, based on a CT image set and delivered by a 360-deg photon arc, is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable to carry out treatment planning dose calculation for small animal anatomy with voxel sizes about one order of magnitude smaller than those used for human anatomy.

  13. Treatment planning for a small animal using Monte Carlo simulation.

    PubMed

    Chow, James C L; Leung, Michael K K

    2007-12-01

    The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse, based on a CT image set and delivered by a 360-deg photon arc, is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable to carry out treatment planning dose calculation for small animal anatomy with voxel sizes about one order of magnitude smaller than those used for human anatomy.

  14. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study

    PubMed Central

    Zhang, Ying; Feng, Yuanming; Ming, Xin

    2016-01-01

    A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine the beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using a gradient search algorithm was then employed to optimize the photon beam intensities of the various beam energies, based on presimulated Monte Carlo pencil beam dose distributions in the patient anatomy. Finally, 3D dose distributions in six patients with tumors at different sites were simulated with the Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to the current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve more personalized care for individual patients with dosimetric gains. PMID:26977413

  15. Monte Carlo dose verification for intensity-modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Li, X. Allen; Ma, Lijun; Naqvi, Shahid; Shih, Rompin; Yu, Cedric

    2001-09-01

    Intensity-modulated arc therapy (IMAT), a technique which combines beam rotation and dynamic multileaf collimation, has been implemented in our clinic. Dosimetric errors can be created by the inability of the planning system to accurately account for the effects of tissue inhomogeneities and physical characteristics of the multileaf collimator (MLC). The objective of this study is to explore the use of Monte Carlo (MC) simulation for IMAT dose verification. The BEAM/DOSXYZ Monte Carlo system was implemented to perform dose verification for the IMAT treatment. The implementation includes the simulation of the linac head/MLC (Elekta SL20), the conversion of patient CT images and beam arrangement for 3D dose calculation, the calculation of gantry rotation and leaf motion by a series of static beams and the development of software to automate the entire MC process. The MC calculations were verified by measurements for conventional beam settings. The agreement was within 2%. The IMAT dose distributions generated by a commercial forward planning system (RenderPlan, Elekta) were compared with those calculated by the MC package. For the cases studied, discrepancies of over 10% were found between the MC and the RenderPlan dose calculations. These discrepancies were due in part to the inaccurate dose calculation of the RenderPlan system. The computation time for the IMAT MC calculation was in the range of 20-80 min on 15 Pentium-III computers. The MC method was also useful in verifying the beam apertures used in the IMAT treatments.

  16. Magnetic properties of double perovskite Sr2RuHoO6: Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Nid-bahami, A.; El Kenz, A.; Benyoussef, A.; Bahmad, L.; Hamedoun, M.; El Moussaoui, H.

    2016-11-01

    In this paper, we have studied the double perovskite complex Sr2RuHoO6 (SRHO) using the mean-field approximation (MFA) and Monte Carlo simulation (MCS). First, we study the ground-state phase diagrams as functions of the exchange couplings and the crystal fields, and we then study the magnetic properties. The results obtained by MFA are compared with those obtained using MCS. Second, we present finite-size analysis results for the magnetization and the susceptibility as functions of the reduced temperature. Finally, we obtain the critical reduced temperature and the critical exponents ν = 0.602 ± 0.011, γ = 1.179 ± 0.022 and β = 0.296 ± 0.018, values close to those of the 3D Ising model (ν = 0.632, γ = 1.23 and β = 0.325).

  17. A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2014-01-01

    The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and provides a basis for personal computer software capable of space shield analysis and optimization.

  18. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Brown, Ethan; Dubois, Jonathan; Ceperley, David

    2014-03-01

    In general, quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and the inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not known a priori unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First, we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion of extensions to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.

  19. Recoil proton, alpha particle, and heavy ion impacts on microdosimetry and RBE of fast neutrons: analysis of kerma spectra calculated by Monte Carlo simulation.

    PubMed

    Pignol, J P; Slabbert, J

    2001-02-01

    Fast neutrons (FN) have a higher radiobiological effectiveness (RBE) than photons; however, the mechanism of this increase remains a controversial issue. RBE variations are seen among various FN facilities, and at the same facility when different tissue depths or thicknesses of hardening filters are used. These variations lead to uncertainties in dose reporting as well as in comparisons of clinical results. Besides radiobiology and microdosimetry, another powerful method for the characterization of FN beams is the calculation of total proton and heavy ion kerma spectra. The FLUKA and MCNP Monte Carlo codes were used to simulate these kerma spectra following a set of microdosimetry measurements performed at the National Accelerator Centre. The calculated spectra confirmed major classical statements: the RBE increase is linked to both low-energy protons and alpha particles yielded by (n,alpha) reactions on carbon and oxygen nuclei. The low-energy protons are produced by neutrons with energies between 10 keV and 10 MeV, while the alpha particles are produced by neutrons with energies between 10 keV and 15 MeV. Looking at the heavy ion kerma from neutrons <15 MeV and the proton kerma from neutrons <10 MeV, it is possible to anticipate y* and RBE trends.

  20. Conformational analysis of bis(methylthio)methane and diethyl sulfide molecules in the liquid phase: reverse Monte Carlo studies using classical interatomic potential functions

    NASA Astrophysics Data System (ADS)

    Gereben, Orsolya; Pusztai, László

    2013-11-01

    Series of flexible-molecule reverse Monte Carlo calculations, using bonding and non-bonding interatomic potential functions (FMP-RMC), were performed starting from previous molecular dynamics results that had applied the OPLS-AA and EncadS force fields. During RMC modeling, the experimental x-ray total scattering structure factor was approached. The discrepancy between experimental and calculated structure factors, in comparison with the molecular dynamics results, decreased substantially in each case. The room temperature liquid structure of bis(methylthio)methane is excellently described by the FMP-RMC simulation that applied the EncadS force field parameters. The main conformer was found to be AG with 55.2%, followed by 37.2% of G+G+ (G-G-) and 7.6% of AA; the stability of the G+G+ (G-G-) conformer is most probably caused by the anomeric effect. The liquid structure of diethyl sulfide is best described by applying the OPLS-AA force field parameters during FMP-RMC simulation, although in this case the force field parameters were found to be not fully compatible with the experimental data. Here, the two main conformers are AG (50.6%) and AA (40%). In addition to findings on the real systems, a fairly detailed comparison between traditional and FMP-RMC methodology is provided.

  1. Dosimetric accuracy assessment of a treatment plan verification system for scanned proton beam radiotherapy: one-year experimental results and Monte Carlo analysis of the involved uncertainties.

    PubMed

    Molinelli, S; Mairani, A; Mirandola, A; Vilches Freixas, G; Tessonnier, T; Giordanengo, S; Parodi, K; Ciocca, M; Orecchia, R

    2013-06-07

    During one year of clinical activity at the Italian National Center for Oncological Hadron Therapy 31 patients were treated with actively scanned proton beams. Results of patient-specific quality assurance procedures are presented here which assess the accuracy of a three-dimensional dose verification technique with the simultaneous use of multiple small-volume ionization chambers. To investigate critical cases of major deviations between treatment planning system (TPS) calculated and measured data points, a Monte Carlo (MC) simulation tool was implemented for plan verification in water. Starting from MC results, the impact of dose calculation, dose delivery and measurement set-up uncertainties on plan verification results was analyzed. All resulting patient-specific quality checks were within the acceptance threshold, which was set at 5% for both mean deviation between measured and calculated doses and standard deviation. The mean deviation between TPS dose calculation and measurement was less than ±3% in 86% of the cases. When all three sources of uncertainty were accounted for, simulated data sets showed a high level of agreement, with mean and maximum absolute deviation lower than 2.5% and 5%, respectively.

  2. Dosimetric accuracy assessment of a treatment plan verification system for scanned proton beam radiotherapy: one-year experimental results and Monte Carlo analysis of the involved uncertainties

    NASA Astrophysics Data System (ADS)

    Molinelli, S.; Mairani, A.; Mirandola, A.; Vilches Freixas, G.; Tessonnier, T.; Giordanengo, S.; Parodi, K.; Ciocca, M.; Orecchia, R.

    2013-06-01

    During one year of clinical activity at the Italian National Center for Oncological Hadron Therapy 31 patients were treated with actively scanned proton beams. Results of patient-specific quality assurance procedures are presented here which assess the accuracy of a three-dimensional dose verification technique with the simultaneous use of multiple small-volume ionization chambers. To investigate critical cases of major deviations between treatment planning system (TPS) calculated and measured data points, a Monte Carlo (MC) simulation tool was implemented for plan verification in water. Starting from MC results, the impact of dose calculation, dose delivery and measurement set-up uncertainties on plan verification results was analyzed. All resulting patient-specific quality checks were within the acceptance threshold, which was set at 5% for both mean deviation between measured and calculated doses and standard deviation. The mean deviation between TPS dose calculation and measurement was less than ±3% in 86% of the cases. When all three sources of uncertainty were accounted for, simulated data sets showed a high level of agreement, with mean and maximum absolute deviation lower than 2.5% and 5%, respectively.

  3. A DNA sequence evolution analysis generalized by simulation and the markov chain monte carlo method implicates strand slippage in a majority of insertions and deletions.

    PubMed

    Nishizawa, Manami; Nishizawa, Kazuhisa

    2002-12-01

    To study the mechanisms for local evolutionary changes in DNA sequences involving slippage-type insertions and deletions, an alignment approach is explored that can consider the posterior probabilities of alignment models. Various patterns of insertion and deletion that can link the ancestor and descendant sequences are proposed and evaluated by simulation and compared by the Markov chain Monte Carlo (MCMC) method. Analyses of pseudogenes reveal that the introduction of parameters that control the probability of slippage-type events markedly augments the probability of the observed sequence evolution, arguing that a cryptic involvement of slippage occurrences is manifested as insertions and deletions of short nucleotide segments. Strikingly, approximately 80% of insertions in human pseudogenes and approximately 50% of insertions in murid pseudogenes are likely to be caused by the slippage-mediated process, as represented by BC in ABCD --> ABCBCD. We suggest that, in both humans and murids, even very short repetitive motifs, such as CAGCAG, CACACA, and CCCC, have approximately 10- to 15-fold susceptibility to insertions and deletions compared to nonrepetitive sequences. Our protocol, namely indel-MCMC, thus seems to be a reasonable approach for statistical analyses of the early phase of microsatellite evolution.

  4. Analysis of latent variance reduction methods in phase space Monte Carlo calculations for 6, 10 and 18 MV photons by using MCNP code

    NASA Astrophysics Data System (ADS)

    Ezzati, A. O.; Sohrabpour, M.

    2013-02-01

    In this study, the azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods are implemented in the MCNPX 2.4 source code. First, the efficiency of these methods was compared for two tallying methods. The APRS is more efficient than the APR method for track length estimator tallies; in the energy deposition tally, both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons as well. The APRS relative efficiency contours were obtained. These contours reveal that as the photon energy increases, the contour depth and the surrounding area increase. The relative efficiency contours indicate that the variance reduction factor is position and energy dependent. The relative efficiency contours for out-of-field voxels showed that the latent variance reduction methods increased the Monte Carlo (MC) simulation efficiency in those voxels. The APR and APRS average variance reduction factors differed by less than 0.6% for a splitting number of 1000.

  5. Evaluation of six scatter correction methods based on spectral analysis in (99m)Tc SPECT imaging using SIMIND Monte Carlo simulation.

    PubMed

    Asl, Mahsa Noori; Sadremomtaz, Alireza; Bitarafan-Rajabi, Ahmad

    2013-10-01

    Compton-scattered photons included within the photopeak pulse-height window result in the degradation of SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting energy windows in the (99m)Tc spectrum. SIMIND Monte Carlo simulation is used to generate the projection images from a cold-sphere hot-background phantom. For evaluation of the different scatter correction methods, three assessment criteria are considered: image contrast, signal-to-noise ratio (SNR) and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Among the methods considered, two show a nonuniform correction performance. The RNB for the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the triple-energy-window (TEW) method using the trapezoidal approximation. The TEW method using the triangular approximation is proposed as the most appropriate correction method because of its ease of implementation, good improvement of the image contrast and the SNR for the five cold spheres, and low noise level.
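
    The TEW estimate referred to above has a simple closed form: the scatter inside the photopeak window is approximated by the area under a trapezoid (or triangle) whose heights are the count densities in two narrow sub-windows flanking the photopeak. A sketch, with illustrative window widths for the 140 keV line of (99m)Tc:

```python
def tew_scatter(c_lower, c_upper, w_sub, w_main, triangular=True):
    """Triple-energy-window estimate of the scatter counts inside the
    photopeak window, from two narrow flanking sub-windows of width w_sub:
      trapezoidal: S = (c_lower/w_sub + c_upper/w_sub) * w_main / 2
      triangular:  the upper-window term is dropped (assumed ~0 for 99mTc).
    The scatter-corrected photopeak counts are then C_peak - S."""
    upper = 0.0 if triangular else c_upper / w_sub
    return (c_lower / w_sub + upper) * w_main / 2.0

# Toy usage: 2 keV sub-windows around a 28 keV wide photopeak window.
print(tew_scatter(c_lower=120.0, c_upper=5.0, w_sub=2.0, w_main=28.0))
```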

  6. How the transition frequencies of microtubule dynamic instability (nucleation, catastrophe, and rescue) regulate microtubule dynamics in interphase and mitosis: analysis using a Monte Carlo computer simulation.

    PubMed Central

    Gliksman, N R; Skibbens, R V; Salmon, E D

    1993-01-01

    Microtubules (MTs) in newt mitotic spindles grow faster than MTs in the interphase cytoplasmic microtubule complex (CMTC), yet spindle MTs do not have the long lengths or lifetimes of the CMTC microtubules. Because MTs undergo dynamic instability, it is likely that changes in the durations of growth or shortening are responsible for this anomaly. We have used a Monte Carlo computer simulation to examine how changes in the number of MTs and changes in the catastrophe and rescue frequencies of dynamic instability may be responsible for the cell cycle dependent changes in MT characteristics. We used the computer simulations to model interphase-like or mitotic-like MT populations on the basis of the dynamic instability parameters available from newt lung epithelial cells in vivo. We started with parameters that produced MT populations similar to the interphase newt lung cell CMTC. In the simulation, increasing the number of MTs and either increasing the frequency of catastrophe or decreasing the frequency of rescue reproduced the changes in MT dynamics measured in vivo between interphase and mitosis. PMID:8298190
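
    A minimal two-state dynamic instability simulation of the kind described, with growth/shortening velocities and catastrophe/rescue frequencies as inputs, can be written in a few lines; the parameters below are invented placeholders, not the newt lung cell values used in the study.

```python
import numpy as np

def simulate_mt(v_grow, v_shrink, f_cat, f_res, t_end, dt=0.01, seed=0):
    """Two-state dynamic instability model: the MT grows at v_grow or
    shortens at v_shrink (um/s) and switches states with catastrophe
    frequency f_cat and rescue frequency f_res (events/s); a fully
    depolymerized MT renucleates into the growing state."""
    rng = np.random.default_rng(seed)
    length, growing, lengths = 0.0, True, []
    for _ in range(int(t_end / dt)):
        if growing:
            length += v_grow * dt
            growing = rng.random() >= f_cat * dt     # catastrophe?
        else:
            length = max(0.0, length - v_shrink * dt)
            growing = length == 0.0 or rng.random() < f_res * dt  # rescue?
        lengths.append(length)
    return np.array(lengths)

# Placeholder parameters: mean length rises as f_cat falls or f_res rises,
# the dependence examined in the study.
print(simulate_mt(0.12, 0.28, 0.01, 0.05, t_end=600.0).mean())
```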

  7. Monte Carlo analysis of transient electron transport in wurtzite Zn1−xMgxO combined with first principles calculations

    SciTech Connect

    Wang, Ping; Hu, Linlin; Shan, Xuefei; Yang, Yintang; Song, Jiuxu; Guo, Lixin; Zhang, Zhiyong

    2015-01-15

    Transient characteristics of wurtzite Zn1−xMgxO are investigated using a three-valley ensemble Monte Carlo model, verified by the agreement between the simulated low-field mobility and the reported experimental result. The electronic structures are obtained by first principles calculations with density functional theory. The results show that the peak electron drift velocities of Zn1−xMgxO (x = 11.1%, 16.7%, 19.4%, 25%) at 3000 kV/cm are 3.735 × 10^7, 2.133 × 10^7, 1.889 × 10^7, and 1.295 × 10^7 cm/s, respectively. With increasing Mg concentration, a higher electric field is required for the onset of velocity overshoot. When the applied field exceeds 2000 kV/cm and 2500 kV/cm, velocity undershoot is observed in Zn0.889Mg0.111O and Zn0.833Mg0.167O, respectively, while it is not observed for Zn0.806Mg0.194O and Zn0.75Mg0.25O even at 3000 kV/cm, which is especially important for high-frequency devices.

  8. Monte Carlo dose calculations in advanced radiotherapy

    NASA Astrophysics Data System (ADS)

    Bush, Karl Kenneth

    The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations requires a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. Performing such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity. In a second component of
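
    The simulated annealing step can be sketched generically: perturb the beam intensity parameters, keep changes that reduce the mismatch with measured output, and occasionally accept uphill moves with a temperature-controlled probability. The loss function and parameters below are toy placeholders, not the dissertation's accelerator model.

```python
import numpy as np

def anneal(loss, x0, step=0.05, t0=1.0, cooling=0.999, n_iter=20000, seed=0):
    """Generic simulated annealing: perturb the parameter vector, always
    accept improvements, and accept uphill moves with probability
    exp(-dL/T) under a geometric cooling schedule."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, T = loss(x), t0
    for _ in range(n_iter):
        cand = x + rng.normal(0.0, step, size=x.shape)
        fc = loss(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / T):
            x, fx = cand, fc
        T *= cooling
    return x, fx

# Toy usage: recover two hidden "intensity" parameters from a quadratic
# mismatch, standing in for tuning a beam model against measured output.
target = np.array([0.3, 0.7])
best, _ = anneal(lambda p: float(np.sum((p - target) ** 2)), [0.0, 0.0])
print(best)
```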

  9. Coherent Scattering Imaging Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Hassan, Laila Abdulgalil Rafik

    Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work, a Monte Carlo simulation of coherent scatter slot scan imaging was performed to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal to noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source to sample distance was determined to be such that th