Sample records for Monte Carlo simulation procedure

  1. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders of magnitude have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from a typical slow O(N^{-1/2}) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
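
    The comparison described above can be reproduced on a toy quadrature problem. The sketch below is illustrative only (the integrand, sample sizes, and the use of scipy.stats.qmc are our assumptions, not the authors' code); it contrasts the spread of plain Monte Carlo estimates with randomised (scrambled) Sobol' estimates.

```python
# Hedged sketch: plain MC vs. randomised quasi-MC on a smooth test integrand
# over [0,1]^d whose exact integral is 1. Requires scipy >= 1.7.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
f = lambda u: np.prod(1.0 + 0.5 * (u - 0.5), axis=1)   # exact integral = 1
d, n, reps = 4, 1024, 50                               # n a power of 2 for Sobol'

mc = [f(rng.random((n, d))).mean() for _ in range(reps)]
rqmc = [f(qmc.Sobol(d, scramble=True, seed=s).random(n)).mean() for s in range(reps)]

print("plain MC std error:", np.std(mc))    # decays like 1/sqrt(N)
print("RQMC std error:   ", np.std(rqmc))   # typically much smaller for smooth f
```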

  2. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)

  3. Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.

    ERIC Educational Resources Information Center

    Oulman, Charles S.; Lee, Motoko Y.

    Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…

  4. Steady-State Electrodiffusion from the Nernst-Planck Equation Coupled to Local Equilibrium Monte Carlo Simulations.

    PubMed

    Boda, Dezső; Gillespie, Dirk

    2012-03-13

    We propose a procedure to compute the steady-state transport of charged particles based on the Nernst-Planck (NP) equation of electrodiffusion. To close the NP equation and to establish a relation between the concentration and electrochemical potential profiles, we introduce the Local Equilibrium Monte Carlo (LEMC) method. In this method, Grand Canonical Monte Carlo simulations are performed using the electrochemical potential specified for the distinct volume elements. An iteration procedure that self-consistently solves the NP and flux continuity equations with LEMC is shown to converge quickly. This NP+LEMC technique can be used in systems with diffusion of charged or uncharged particles in complex three-dimensional geometries, including systems with low concentrations and small applied voltages that are difficult for other particle simulation techniques.
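
    The self-consistent iteration described above can be caricatured in one dimension. In the toy below the LEMC step is replaced by an assumed analytic equilibrium relation c = exp(mu - phi) (in kT units, a stand-in for an actual Grand Canonical Monte Carlo run); the external potential and discretisation are invented for illustration.

```python
import numpy as np

# Toy 1-D caricature of the NP+LEMC iteration. NOT the authors' method:
# the "LEMC" step here is an analytic equilibrium relation, not a simulation.
n, D = 101, 1.0
x = np.linspace(0.0, 1.0, n)
phi = 0.5 * np.sin(np.pi * x)          # fixed external potential (assumption)
mu = np.linspace(0.0, -1.0, n)         # electrochemical potential, ends held fixed

for it in range(200):
    c = np.exp(mu - phi)               # "LEMC" step: concentrations from mu
    # Nernst-Planck flux J = -D c dmu/dx; a divergence-free (constant) J obeys
    # J = -D (mu_R - mu_L) / sum(dx / c)
    J = -D * (mu[-1] - mu[0]) / np.sum(np.diff(x) / c[:-1])
    dmu = -J * np.diff(x) / (D * c[:-1])
    mu_new = mu[0] + np.concatenate([[0.0], np.cumsum(dmu)])
    mu = 0.5 * mu + 0.5 * mu_new       # damped self-consistent update

print("steady-state flux J =", round(float(J), 6))
```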

  5. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  6. An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…
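
    For readers unfamiliar with the approach, the following is a generic random-walk Metropolis sampler for the two-parameter logistic model on simulated data. It is a hedged illustration of MCMC item-parameter estimation in general, not the Gibbs sampler evaluated in the study; the priors and proposal scales are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

P, I = 500, 10                                   # persons, items (simulated data)
a_true = rng.uniform(0.8, 2.0, I)
b_true = rng.normal(0.0, 1.0, I)
theta_true = rng.normal(0.0, 1.0, P)
prob = 1.0 / (1.0 + np.exp(-a_true * (theta_true[:, None] - b_true)))
x = (rng.random((P, I)) < prob).astype(float)

def loglik(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return x * np.log(p) + (1.0 - x) * np.log(1.0 - p)   # (P, I) matrix

theta, a, b = np.zeros(P), np.ones(I), np.zeros(I)
for sweep in range(2000):
    # update abilities; standard normal prior on theta
    prop = theta + 0.5 * rng.normal(size=P)
    logr = loglik(prop, a, b).sum(1) - 0.5 * prop**2 \
         - (loglik(theta, a, b).sum(1) - 0.5 * theta**2)
    theta = np.where(np.log(rng.random(P)) < logr, prop, theta)
    # update item parameters; lognormal(0,1) prior on a (sampled on the log
    # scale) and N(0, 2^2) prior on b
    pa = a * np.exp(0.1 * rng.normal(size=I))
    pb = b + 0.1 * rng.normal(size=I)
    logr = loglik(theta, pa, pb).sum(0) - 0.5 * np.log(pa)**2 - pb**2 / 8.0 \
         - (loglik(theta, a, b).sum(0) - 0.5 * np.log(a)**2 - b**2 / 8.0)
    acc = np.log(rng.random(I)) < logr
    a, b = np.where(acc, pa, a), np.where(acc, pb, b)

print("corr(theta_hat, theta_true) =", np.corrcoef(theta, theta_true)[0, 1].round(2))
```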

  7. How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.

    PubMed

    Lecca, Paola

    2018-01-01

    We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, using Monte Carlo simulation approaches, we show that these models fit almost the entire release profile quite accurately when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing complex mathematical models of the physical processes involved in drug release from scratch. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them in mathematics, may make it possible to avoid time-consuming, trial-and-error regression procedures. Three bullet points highlight the customization of the procedure:
    •An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It specifies the model of the physical process in a simple but accurate way through the formula for the Monte Carlo Micro Step (MCS) time interval.
    •Given the experimentally observed drug release curve, we point out how Monte Carlo heuristics can be integrated into an evolutionary algorithmic approach to infer the MCS model that best fits the observed data, and thus the observed release kinetics.
    •The software implementing the method is written in R, a free language widely used in the bioinformatics community.
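
    As a flavour of what such a simulation looks like, here is a toy lattice random-walk release model. The lattice, the step rule and the time scale (standing in for the Monte Carlo Micro Step) are assumptions for illustration, not the paper's heuristic.

```python
import numpy as np

rng = np.random.default_rng(2)

n_sites, n_drug, n_steps = 50, 2000, 4000
pos = rng.integers(0, n_sites, n_drug)     # drug molecules inside a 1-D matrix
alive = np.ones(n_drug, dtype=bool)        # not yet released
released = np.zeros(n_steps)

for t in range(n_steps):
    step = rng.choice([-1, 1], n_drug)     # unbiased random walk, one step per MCS
    pos = np.where(alive, pos + step, pos)
    out = alive & ((pos < 0) | (pos >= n_sites))
    alive &= ~out                          # molecules leaving the matrix count as released
    released[t] = n_drug - alive.sum()

frac = released / n_drug                   # release curve Q(t)/Q_inf
print("fraction released at end:", frac[-1].round(3))
# a power law k*t^n is typically fitted to the first ~60% of such a curve
```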

  8. Non-vascular interventional procedures: effective dose to patient and equivalent dose to abdominal organs by means of DICOM images and Monte Carlo simulation.

    PubMed

    Longo, Mariaconcetta; Marchioni, Chiara; Insero, Teresa; Donnarumma, Raffaella; D'Adamo, Alessandro; Lucatelli, Pierleone; Fanelli, Fabrizio; Salvatori, Filippo Maria; Cannavale, Alessandro; Di Castro, Elisabetta

    2016-03-01

    This study evaluates X-ray exposure in patients undergoing abdominal extra-vascular interventional procedures by means of Digital Imaging and COmmunications in Medicine (DICOM) image headers and Monte Carlo simulation. The main aim was to assess the effective and equivalent doses, under the hypothesis of their correlation with the dose-area product (DAP) measured during each examination. This makes it possible to collect dosimetric information about each patient and to evaluate the associated risks without resorting to in vivo dosimetry. The dose calculation was performed for 79 procedures with the Monte Carlo simulator PCXMC (a PC-based Monte Carlo program for calculating patient doses in medical X-ray examinations), using the real geometrical and dosimetric irradiation conditions automatically extracted from the DICOM headers. The DAP measurements were also validated by using thermoluminescent dosemeters on an anthropomorphic phantom. The expected linear correlation between effective doses and DAP was confirmed with an R² of 0.974. Moreover, in order to easily calculate patient doses, conversion coefficients that relate equivalent doses to measurable quantities, such as DAP, were obtained.

  9. Generating Nonnormal Multivariate Data Using Copulas: Applications to SEM.

    PubMed

    Mair, Patrick; Satorra, Albert; Bentler, Peter M

    2012-07-01

    This article develops a procedure based on copulas to simulate multivariate nonnormal data that satisfy a prespecified variance-covariance matrix. The covariance matrix used can comply with a specific moment structure form (e.g., a factor analysis or a general structural equation model). Thus, the method is particularly useful for Monte Carlo evaluation of structural equation models within the context of nonnormal data. The new procedure for nonnormal data simulation is theoretically described and also implemented in the widely used R environment. The quality of the method is assessed by Monte Carlo simulations. A 1-sample test on the observed covariance matrix based on the copula methodology is proposed. This new test for evaluating the quality of a simulation is defined through a particular structural model specification and is robust against normality violations.
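
    The core of the copula construction can be sketched in a few lines. The marginals and correlation matrix below are invented, and this is the plain Gaussian-copula recipe rather than the authors' R implementation; note that matching a prespecified product-moment covariance exactly requires adjusting the copula correlation, which is part of what the paper addresses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])          # copula correlation (assumption)
L = np.linalg.cholesky(R)
z = rng.standard_normal((10000, 3)) @ L.T
u = stats.norm.cdf(z)                    # Gaussian copula: correlated uniforms
x = np.column_stack([stats.expon.ppf(u[:, 0]),     # skewed marginal
                     stats.t(df=5).ppf(u[:, 1]),   # heavy-tailed marginal
                     stats.norm.ppf(u[:, 2])])     # normal marginal
# product-moment correlations of x differ slightly from R by construction
print(np.corrcoef(x, rowvar=False).round(2))
```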

  10. Hypothesis testing of scientific Monte Carlo calculations.

    PubMed

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitate rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
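
    In its simplest form the idea reduces to treating a Monte Carlo estimate as a sample mean and testing it against a known reference value. A minimal sketch (toy estimator, not the authors' test suite):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
samples = rng.exponential(scale=1.0, size=100000)   # MC estimator of E[X] = 1
mean = samples.mean()
sem = samples.std(ddof=1) / np.sqrt(samples.size)   # standard error of the mean
z = (mean - 1.0) / sem                              # two-sided z-test vs. reference
p = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p:.3f}")   # a small p would flag a numerical problem or bug
```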

  11. Hypothesis testing of scientific Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitate rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.

  12. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  13. Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures

    ERIC Educational Resources Information Center

    Atar, Burcu; Kamata, Akihito

    2011-01-01

    The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…

  14. A Simulation Game for the Study of State Policies.

    ERIC Educational Resources Information Center

    Enzer, Selwyn; And Others

    A simulated planning conference used both informed experts (simulating state planners and societal groups) and "Monte Carlo" procedures (simulating random events) to identify some possible futures for the state of Connecticut and to examine rational planning behavior patterns. In the simulation the participants were divided into two…

  15. Study of the Transition Flow Regime using Monte Carlo Methods

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1999-01-01

    This final report for a NASA Cooperative Agreement presents a study of the transition flow regime using Monte Carlo methods. The topics included are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  16. Electron swarm properties under the influence of a very strong attachment in SF6 and CF3I obtained by Monte Carlo rescaling procedures

    NASA Astrophysics Data System (ADS)

    Mirić, J.; Bošnjaković, D.; Simonović, I.; Petrović, Z. Lj; Dujko, S.

    2016-12-01

    Electron attachment often imposes practical difficulties in Monte Carlo simulations, particularly under conditions of extensive losses of seed electrons. In this paper, we discuss two rescaling procedures for Monte Carlo simulations of electron transport in strongly attaching gases: (1) discrete rescaling, and (2) continuous rescaling. The two procedures are implemented in our Monte Carlo code with the aim of analyzing electron transport processes and attachment-induced phenomena in sulfur hexafluoride (SF6) and trifluoroiodomethane (CF3I). Though calculations have been performed over the entire range of reduced electric fields E/n0 (where n0 is the gas number density) for which experimental data are available, the emphasis is placed on fields below the critical (electric gas breakdown) field and on conditions where transport properties are greatly affected by electron attachment. The present calculations of electron transport data for SF6 and CF3I at low E/n0 take into account the full extent of the influence of electron attachment and spatially selective electron losses along the profile of the electron swarm, and attempt to produce data that may be used to model this range of conditions. The results of Monte Carlo simulations are compared to those predicted by the publicly available two-term Boltzmann solver BOLSIG+. A multitude of kinetic phenomena in electron transport has been observed and discussed using physical arguments. In particular, we discuss two important phenomena: (1) the reduction of the mean energy with increasing E/n0 for electrons in SF6, and (2) the occurrence of negative differential conductivity (NDC) in the bulk drift velocity only, for electrons in both SF6 and CF3I. The electron energy distribution function, spatial variations of the rate coefficient for electron attachment and average energy, as well as the spatial profile of the swarm, are calculated and used to understand these phenomena.
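
    Discrete rescaling, the first of the two procedures, can be illustrated with a toy swarm: once attachment has removed half of the electrons, the survivors are duplicated so the sample stays large enough for meaningful statistics while ensemble averages remain unbiased. All dynamics below are invented placeholders, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(5)

n0 = 10000
v = rng.normal(0.0, 1.0, n0)            # toy velocity coordinate per electron
nu_att = 0.05                           # attachment probability per step (assumption)
for step in range(200):
    v += 0.01 + 0.05 * rng.normal(size=v.size)   # toy drift + diffusion
    v = v[rng.random(v.size) > nu_att]           # attachment removes electrons
    if v.size <= n0 // 2:                        # discrete rescaling event:
        v = np.concatenate([v, v.copy()])        # duplicate the survivors

print("swarm size:", v.size, "mean velocity:", round(float(v.mean()), 3))
```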

  17. Combining Heterogeneous Correlation Matrices: Simulation Analysis of Fixed-Effects Methods

    ERIC Educational Resources Information Center

    Hafdahl, Adam R.

    2008-01-01

    Monte Carlo studies of several fixed-effects methods for combining and comparing correlation matrices have shown that two refinements improve estimation and inference substantially. With rare exception, however, these simulations have involved homogeneous data analyzed using conditional meta-analytic procedures. The present study builds on…

  18. Teaching Classical Statistical Mechanics: A Simulation Approach.

    ERIC Educational Resources Information Center

    Sauer, G.

    1981-01-01

    Describes a one-dimensional model for an ideal gas to study development of disordered motion in Newtonian mechanics. A Monte Carlo procedure for simulation of the statistical ensemble of an ideal gas with fixed total energy is developed. Compares both approaches for a pseudoexperimental foundation of statistical mechanics. (Author/JN)

  19. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  20. ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.

    ERIC Educational Resources Information Center

    Vale, C. David; Gialluca, Kathleen A.

    ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo simulation techniques. The current version of ASCAL was then compared to…

  21. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367

  22. MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Yan; Gohar, Yousry

    2015-11-01

    In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical in order to produce periodic power pulses. Such periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the utilization of the point kinetics approximation gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the transfer function TRCL/TR of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during a neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.

  23. Modelling of electronic excitation and radiation in the Direct Simulation Monte Carlo Macroscopic Chemistry Method

    NASA Astrophysics Data System (ADS)

    Goldsworthy, M. J.

    2012-10-01

    One of the most useful tools for modelling rarefied hypersonic flows is the Direct Simulation Monte Carlo (DSMC) method. Simulator particle movement and collision calculations are combined with statistical procedures to model thermal non-equilibrium flow-fields described by the Boltzmann equation. The Macroscopic Chemistry Method for DSMC simulations was developed to simplify the inclusion of complex thermal non-equilibrium chemistry. The macroscopic approach uses statistical information which is calculated during the DSMC solution process in the modelling procedures. Here it is shown how inclusion of macroscopic information in models of chemical kinetics, electronic excitation, ionization, and radiation can enhance the capabilities of DSMC to model flow-fields where a range of physical processes occur. The approach is applied to the modelling of a 6.4 km/s nitrogen shock wave and results are compared with those from existing shock-tube experiments and continuum calculations. Reasonable agreement between the methods is obtained. The quality of the comparison is highly dependent on the set of vibrational relaxation and chemical kinetic parameters employed.

  24. Probabilistic wind/tornado/missile analyses for hazard and fragility evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Y.J.; Reich, M.

    Detailed analysis procedures and examples are presented for the probabilistic evaluation of hazard and fragility against high wind, tornado, and tornado-generated missiles. In the tornado hazard analysis, existing risk models are modified to incorporate various uncertainties including modeling errors. A significant feature of this paper is the detailed description of the Monte-Carlo simulation analyses of tornado-generated missiles. A simulation procedure, which includes the wind field modeling, missile injection, solution of flight equations, and missile impact analysis, is described with application examples.

  25. DOSE COEFFICIENTS FOR LIVER CHEMOEMBOLISATION PROCEDURES USING MONTE CARLO CODE.

    PubMed

    Karavasilis, E; Dimitriadis, A; Gonis, H; Pappas, P; Georgiou, E; Yakoumakis, E

    2016-12-01

    The aim of the present study is the estimation of the radiation burden during liver chemoembolisation procedures. Organ dose and effective dose conversion factors, normalised to dose-area product (DAP), were estimated for chemoembolisation procedures using a Monte Carlo transport code in conjunction with an adult mathematical phantom. Exposure data from 32 patients were used to determine the exposure projections for the simulations. Equivalent organ doses (H_T) and effective doses (E) were estimated using individual DAP values. The organs receiving the highest doses during these exams were the lumbar spine, liver and kidneys. The mean effective dose conversion factor was 1.4 Sv Gy⁻¹ m⁻². Dose conversion factors can be useful for estimating patient-specific radiation burden during chemoembolisation procedures.

  26. A Procedure for Testing the Difference between Effect Sizes.

    ERIC Educational Resources Information Center

    Lambert, Richard G.; Flowers, Claudia

    A special case of the homogeneity of effect size test, as applied to pairwise comparisons of standardized mean differences, was evaluated. Procedures for comparing pairs of pretest to posttest effect sizes, as well as pairs of treatment versus control group effect sizes, were examined. Monte Carlo simulation was used to generate Type I error rates…

  27. On zero variance Monte Carlo path-stretching schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lux, I.

    1983-08-01

    A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game.

  28. Pairwise Multiple Comparisons in Single Group Repeated Measures Analysis.

    ERIC Educational Resources Information Center

    Barcikowski, Robert S.; Elliott, Ronald S.

    Research was conducted to provide educational researchers with a choice of pairwise multiple comparison procedures (P-MCPs) to use with single group repeated measures designs. The following were studied through two Monte Carlo (MC) simulations: (1) The T procedure of J. W. Tukey (1953); (2) a modification of Tukey's T (G. Keppel, 1973); (3) the…

  29. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  30. Minimizing the Discrepancy between Simulated and Historical Failures in Turbine Engines: A Simulation-Based Optimization Method (Postprint)

    DTIC Science & Technology

    2015-01-01

    Procedure. The simulated annealing (SA) algorithm is a well-known local search metaheuristic used to address discrete, continuous, and multiobjective... design of experiments (DOE) to tune the parameters of the optimization algorithm. Section 5 shows the results of the case study. Finally, concluding... metaheuristic. The proposed method is broken down into two phases. Phase I consists of a Monte Carlo simulation to obtain the simulated percentage of failure

  31. Simulating variable source problems via post processing of individual particle tallies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.

    2000-10-20

    Monte Carlo is an extremely powerful method of simulating complex, three-dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors, which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable, and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
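
    The post-processing idea can be sketched as importance re-weighting of recorded per-particle tallies. The file layout, toy tally and source spectra below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Pretend these arrays were read back from a recorded tally file:
n = 100000
src_energy = rng.uniform(0.0, 10.0, n)             # source energies sampled uniformly
tally = np.exp(-0.3 * src_energy) * rng.random(n)  # toy per-particle tally score

def retarget(pdf):
    # importance weight = new source pdf / sampling pdf (uniform on [0,10] has pdf 0.1)
    w = pdf(src_energy) / 0.1
    return np.mean(w * tally)

print(retarget(lambda e: 0.1 * np.ones_like(e)))             # original uniform source
print(retarget(lambda e: np.exp(-e) / (1 - np.exp(-10.0))))  # re-weighted exponential source
```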

  32. Investigation of radiative interaction in laminar flows using Monte Carlo simulation

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is employed to study the radiative interactions in fully developed laminar flow between two parallel plates. Taking advantage of the characteristics of easy mathematical treatment of the MCM, a general numerical procedure is developed for nongray radiative interaction. The nongray model is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. To validate the Monte Carlo simulation for nongray radiation problems, the results of radiative dissipation from the MCM are compared with two available solutions for a given temperature profile between two plates. After this validation, the MCM is employed to solve the present physical problem and results for the bulk temperature are compared with available solutions. In general, good agreement is noted and reasons for some discrepancies in certain ranges of parameters are explained.

  33. Accurate Monte Carlo simulations for nozzle design, commissioning and quality assurance for a proton radiation therapy facility.

    PubMed

    Paganetti, H; Jiang, H; Lee, S Y; Kooy, H M

    2004-07-01

    Monte Carlo dosimetry calculations are essential methods in radiation therapy. To take full advantage of this tool, the beam delivery system has to be simulated in detail and the initial beam parameters have to be known accurately. The modeling of the beam delivery system itself opens various areas where Monte Carlo calculations prove extremely helpful, such as for design and commissioning of a therapy facility as well as for quality assurance verification. The gantry treatment nozzles at the Northeast Proton Therapy Center (NPTC) at Massachusetts General Hospital (MGH) were modeled in detail using the GEANT4.5.2 Monte Carlo code. For this purpose, various novel solutions for simulating irregular shaped objects in the beam path, like contoured scatterers, patient apertures or patient compensators, were found. The four-dimensional, in time and space, simulation of moving parts, such as the modulator wheel, was implemented. Further, the appropriate physics models and cross sections for proton therapy applications were defined. We present comparisons between measured data and simulations. These show that by modeling the treatment nozzle with millimeter accuracy, it is possible to reproduce measured dose distributions with an accuracy in range and modulation width, in the case of a spread-out Bragg peak (SOBP), of better than 1 mm. The excellent agreement demonstrates that the simulations can even be used to generate beam data for commissioning treatment planning systems. The Monte Carlo nozzle model was used to study mechanical optimization in terms of scattered radiation and secondary radiation in the design of the nozzles. We present simulations on the neutron background. Further, the Monte Carlo calculations supported commissioning efforts in understanding the sensitivity of beam characteristics and how these influence the dose delivered. We present the sensitivity of dose distributions in water with respect to various beam parameters and geometrical misalignments. This allows the definition of tolerances for quality assurance and the design of quality assurance procedures.

  34. Probabilistic composite micromechanics

    NASA Technical Reports Server (NTRS)

    Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.

    1988-01-01

    Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.

  35. Exploiting neurovascular coupling: a Bayesian sequential Monte Carlo approach applied to simulated EEG fNIRS data

    NASA Astrophysics Data System (ADS)

    Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria

    2017-08-01

    Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
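
    The sequential Monte Carlo machinery itself is generic. Below is a bootstrap particle filter on a toy scalar state-space model (not the EEG/fNIRS forward models of the paper), showing the propagate-weight-resample cycle.

```python
import numpy as np

rng = np.random.default_rng(7)

T, Np = 100, 1000
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.95 * x_true[t - 1] + rng.normal(0.0, 0.3)   # latent AR(1) state
y = x_true + rng.normal(0.0, 0.5, T)                          # noisy observations

particles = rng.normal(0.0, 1.0, Np)
est = np.zeros(T)
for t in range(T):
    particles = 0.95 * particles + rng.normal(0.0, 0.3, Np)   # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)        # likelihood weights
    w /= w.sum()
    est[t] = np.sum(w * particles)                            # posterior-mean estimate
    particles = particles[rng.choice(Np, Np, p=w)]            # resample

print("RMSE:", np.sqrt(np.mean((est - x_true) ** 2)).round(3))
```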

  36. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    PubMed

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    The determination of the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) at the energy of interest. The difficulties related to experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector through a computational procedure that can be reproduced in a standard research laboratory. This method consists of determining the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters that has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials.
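
    The optimisation loop can be sketched with a toy analytic efficiency model standing in for the Monte Carlo transport step; the parameterisation, rates and evolutionary scheme below are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(8)

E = np.array([60.0, 122.0, 662.0, 1332.0])       # calibration energies (keV)

def fepe(params):
    r, dead = params                              # toy crystal radius, dead layer
    return r**2 * np.exp(-0.002 * dead * np.sqrt(E)) / (1.0 + 0.003 * E)

target = fepe((3.0, 1.2))                         # pretend reference FEPEs
pop = rng.uniform([1.0, 0.0], [5.0, 3.0], (40, 2))

for gen in range(300):                            # simple (mu + lambda) evolution
    cost = np.array([np.sum((fepe(p) - target) ** 2) for p in pop])
    parents = pop[np.argsort(cost)[:10]]          # selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0.0, 0.05, (30, 2))
    pop = np.vstack([parents, children])          # mutation-only reproduction

best = min(pop, key=lambda p: np.sum((fepe(p) - target) ** 2))
print("recovered parameters:", np.round(best, 2))  # should approach (3.0, 1.2)
```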

  37. A probabilistic seismic risk assessment procedure for nuclear power plants: (I) Methodology

    USGS Publications Warehouse

    Huang, Y.-N.; Whittaker, A.S.; Luco, N.

    2011-01-01

    A new procedure for probabilistic seismic risk assessment of nuclear power plants (NPPs) is proposed. This procedure modifies current procedures using tools developed recently for performance-based earthquake engineering of buildings. The proposed procedure uses (a) response-based fragility curves to represent the capacity of structural and nonstructural components of NPPs, (b) nonlinear response-history analysis to characterize the demands on those components, and (c) Monte Carlo simulations to determine the damage state of the components. The use of response- rather than ground-motion-based fragility curves enables the curves to be independent of seismic hazard and closely related to component capacity. The use of the Monte Carlo procedure enables the correlation in the responses of components to be directly included in the risk assessment. An example of the methodology is presented in a companion paper to demonstrate its use and provide the technical basis for aspects of the methodology.

  38. Monte Carlo method for photon heating using temperature-dependent optical properties.

    PubMed

    Slade, Adam Broadbent; Aguilar, Guillermo

    2015-02-01

    The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will vary greatly, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system with temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done by comparison with established Monte Carlo simulations using constant properties, and by comparison with the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation will return more accurate results for optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes.
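
    A one-dimensional caricature of the feedback loop looks as follows; the material constants, heating step and slab geometry are invented for illustration (absorption only, no scattering), not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(9)

nz, dz = 100, 0.01                               # 1 cm slab, 100 voxels
T = np.full(nz, 290.0)                           # temperature field (K)
mu_a = lambda temp: 1.0 + 0.01 * (temp - 290.0)  # assumed mu_a(T), 1/cm

for pulse in range(20):
    absorbed = np.zeros(nz)
    for _ in range(1000):                        # photons launched at z = 0
        z = 0.0
        while 0.0 <= z < nz * dz:
            i = int(z / dz)
            # exponential free path with the *local* absorption coefficient;
            # memorylessness makes per-voxel resampling legitimate
            if rng.exponential(1.0 / mu_a(T[i])) < dz:
                absorbed[i] += 1.0               # absorbed in this voxel
                break
            z += dz                              # survives the voxel
    T += 0.001 * absorbed                        # crude heating step (assumption)

print("peak temperature: %.1f K at voxel %d" % (T.max(), T.argmax()))
```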

  39. Nonglobal correlations in collider physics

    DOE PAGES

    Moult, Ian; Larkoski, Andrew J.

    2016-01-13

    Despite their importance for precision QCD calculations, correlations between in- and out-of-jet regions of phase space have never directly been observed. These so-called non-global effects are present generically whenever a collider physics measurement is not explicitly dependent on radiation throughout the entire phase space. In this paper, we introduce a novel procedure based on mutual information, which allows us to isolate these non-global correlations between measurements made in different regions of phase space. We study this procedure both analytically and in Monte Carlo simulations in the context of observables measured on hadronic final states produced in e+e- collisions, though it is more widely applicable. The procedure exploits the sensitivity of soft radiation at large angles to non-global correlations, and we calculate these correlations through next-to-leading logarithmic accuracy. The bulk of these non-global correlations are found to be described by Monte Carlo simulation. They are increased by the inclusion of non-perturbative effects, which we show can be incorporated in our calculation through the use of a model shape function. As a result, this procedure illuminates the source of non-global correlations and has connections more broadly to fundamental quantities in quantum field theory.

  40. Monte Carlo Simulation Tool Installation and Operation Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.

    2013-09-02

    This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.

  41. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    DOE PAGES

    McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...

    2017-11-07

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
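
    The rank-1 baseline that the delayed scheme generalises is easy to state in code. The sketch below (our notation, not the authors' implementation) shows a Sherman-Morrison row update of the inverse together with the associated determinant ratio; the delayed variant would accumulate K such updates and apply them en bloc with matrix-matrix products.

```python
import numpy as np

rng = np.random.default_rng(10)

N = 200
A = rng.standard_normal((N, N))        # stand-in for a Slater matrix
Ainv = np.linalg.inv(A)

def sherman_morrison_row(Ainv, k, r):
    # Replace row k of A by r, i.e. A_new = A + e_k (r - A[k])^T.
    # Updating the inverse costs O(N^2) instead of O(N^3) for re-inversion.
    u = r - A[k]
    ratio = 1.0 + u @ Ainv[:, k]       # det(A_new) / det(A), matrix determinant lemma
    correction = np.outer(Ainv[:, k], u @ Ainv) / ratio
    return Ainv - correction, ratio

r = rng.standard_normal(N)
Ainv_new, ratio = sherman_morrison_row(Ainv, 0, r)
A_new = A.copy(); A_new[0] = r
print(np.allclose(Ainv_new, np.linalg.inv(A_new)))   # True
```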

  42. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.

  43. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo.

    PubMed

    McDaniel, T; D'Azevedo, E F; Li, Y W; Wong, K; Kent, P R C

    2017-11-07

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.

  44. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.

    2017-11-01

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.

  45. Monte Carlo simulation of x-ray spectra in diagnostic radiology and mammography using MCNP4C

    NASA Astrophysics Data System (ADS)

    Ay, M. R.; Shahriari, M.; Sarkar, S.; Adib, M.; Zaidi, H.

    2004-11-01

    The general purpose Monte Carlo N-particle radiation transport computer code (MCNP4C) was used for the simulation of x-ray spectra in diagnostic radiology and mammography. The electrons were transported until they slow down and stop in the target. Both bremsstrahlung and characteristic x-ray production were considered in this work. We focus on the simulation of various target/filter combinations to investigate the effect of tube voltage, target material and filter thickness on x-ray spectra in the diagnostic radiology and mammography energy ranges. The simulated x-ray spectra were compared with experimental measurements and with spectra calculated according to IPEM report number 78. In addition, the anode heel effect and off-axis x-ray spectra were assessed for different anode angles and target materials, and the results were compared with EGS4-based Monte Carlo simulations and measured data. Quantitative evaluation of the differences between our Monte Carlo simulated spectra and the comparison spectra was performed using Student's t-test statistical analysis. Generally, there is good agreement between the simulated and comparison spectra, although there are systematic differences, especially in the intensity of the K-characteristic x-rays. Nevertheless, no statistically significant differences have been observed between the IPEM spectra and the simulated spectra. It has been shown that the difference between the MCNP-simulated spectra and the IPEM spectra in the low energy range is the result of the overestimation of characteristic photons following the normalization procedure. The transmission curves produced by MCNP4C are in good agreement with the IPEM report, especially for tube voltages of 50 kV and 80 kV. The systematic discrepancy at higher tube voltages is the result of systematic differences between the corresponding spectra.

  46. Technical notes and correspondence: Stochastic robustness of linear time-invariant control systems

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.; Ray, Laura R.

    1991-01-01

    A simple numerical procedure for estimating the stochastic robustness of a linear time-invariant system is described. Monte Carlo evaluation of the system's eigenvalues allows the probability of instability and the related stochastic root locus to be estimated. This analysis approach treats not only Gaussian parameter uncertainties but also non-Gaussian cases, including uncertain-but-bounded variation. Confidence intervals for the scalar probability of instability address computational issues inherent in Monte Carlo simulation. Trivial extensions of the procedure admit consideration of alternate discriminants; thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions can also be estimated. Results are particularly amenable to graphical presentation.
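
    A minimal version of the procedure (the closed-loop plant and uncertainty model below are toy assumptions): sample the uncertain parameters, check the closed-loop eigenvalues, and wrap the estimated probability of instability in a binomial confidence interval.

```python
import numpy as np

rng = np.random.default_rng(11)

def closed_loop(k):
    # toy second-order closed-loop matrix with uncertain damping gain k;
    # characteristic polynomial s^2 + k s + 4, unstable exactly when k < 0
    return np.array([[0.0, 1.0], [-4.0, -k]])

n = 20000
k_samples = rng.normal(1.0, 0.6, n)               # Gaussian parameter uncertainty
unstable = np.array([np.max(np.linalg.eigvals(closed_loop(k)).real) > 0
                     for k in k_samples])
p = unstable.mean()
ci = 1.96 * np.sqrt(p * (1 - p) / n)              # normal-approximation interval
print(f"P(instability) = {p:.4f} +/- {ci:.4f}")
```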

  47. A probabilistic approach to composite micromechanics

    NASA Technical Reports Server (NTRS)

    Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.

    1988-01-01

    Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.

  48. Combined Monte Carlo/torsion-angle molecular dynamics for ensemble modeling of proteins, nucleic acids and carbohydrates.

    PubMed

    Zhang, Weihong; Howell, Steven C; Wright, David W; Heindel, Andrew; Qiu, Xiangyun; Chen, Jianhan; Curtis, Joseph E

    2017-05-01

    We describe a general method to use Monte Carlo simulation followed by torsion-angle molecular dynamics simulations to create ensembles of structures to model a wide variety of soft-matter biological systems. Our particular emphasis is focused on modeling low-resolution small-angle scattering and reflectivity structural data. We provide examples of this method applied to HIV-1 Gag protein and derived fragment proteins, TraI protein, linear B-DNA, a nucleosome core particle, and a glycosylated monoclonal antibody. This procedure will enable a large community of researchers to model low-resolution experimental data with greater accuracy by using robust physics based simulation and sampling methods which are a significant improvement over traditional methods used to interpret such data.

  49. A semi-grand canonical Monte Carlo simulation model for ion binding to ionizable surfaces: proton binding of carboxylated latex particles as a case study.

    PubMed

    Madurga, Sergio; Rey-Castro, Carlos; Pastor, Isabel; Vilaseca, Eudald; David, Calin; Garcés, Josep Lluís; Puy, Jaume; Mas, Francesc

    2011-11-14

    In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the electroneutrality of the system is required. Here, two approaches are used with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups.

  10. Chain pooling to minimize prediction error in subset regression. [Monte Carlo studies using population models]

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1974-01-01

    Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
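
    For illustration, the decision procedure described above can be mimicked in a few lines: a known population model generates noisy observations, the full model is fitted, small terms are deleted, and prediction error against the true population values is compared for the full and reduced models. This is a minimal sketch, not the report's code; the model, noise level and deletion threshold are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, sigma = 30, 0.5
    X = np.column_stack([np.ones(n), rng.uniform(-1, 1, (n, 3))])
    beta_true = np.array([2.0, 1.5, 0.0, 0.2])        # some terms negligible
    y = X @ beta_true + rng.normal(0.0, sigma, n)     # simulated experiment

    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    keep = np.abs(beta_full) > 0.3                    # crude deletion strategy
    beta_red = np.zeros_like(beta_full)
    beta_red[keep] = np.linalg.lstsq(X[:, keep], y, rcond=None)[0]

    # Prediction error against the true population values.
    err_full = np.mean((X @ beta_full - X @ beta_true) ** 2)
    err_red = np.mean((X @ beta_red - X @ beta_true) ** 2)
    print(f"full model: {err_full:.4f}, reduced model: {err_red:.4f}")
    ```

    Repeating this over many pseudo-random replicates and candidate deletion thresholds is what allows deletion strategies to be ranked by prediction error.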

  11. Probabilistic simulation of uncertainties in composite uniaxial strengths

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Stock, T. A.

    1990-01-01

    Probabilistic composite micromechanics methods are developed that simulate uncertainties in unidirectional fiber composite strengths. These methods are in the form of computational procedures using composite mechanics with Monte Carlo simulation. The variables for which uncertainties are accounted include constituent strengths and their respective scatter. A graphite/epoxy unidirectional composite (ply) is studied to illustrate the procedure and its effectiveness in formally estimating the probable scatter in the composite uniaxial strengths. The results show that ply longitudinal tensile and compressive, transverse compressive and intralaminar shear strengths are not sensitive to single fiber anomalies (breaks, interfacial disbonds, matrix microcracks); however, the ply transverse tensile strength is.

  12. Measuring Renyi entanglement entropy in quantum Monte Carlo simulations.

    PubMed

    Hastings, Matthew B; González, Iván; Kallin, Ann B; Melko, Roger G

    2010-04-16

    We develop a quantum Monte Carlo procedure, in the valence bond basis, to measure the Renyi entanglement entropy of a many-body ground state as the expectation value of a unitary Swap operator acting on two copies of the system. An improved estimator involving the ratio of Swap operators for different subregions enables convergence of the entropy in a simulation time polynomial in the system size. We demonstrate convergence of the Renyi entropy to exact results for a Heisenberg chain. Finally, we calculate the scaling of the Renyi entropy in the two-dimensional Heisenberg model and confirm that the Néel ground state obeys the expected area law for systems up to linear size L=32.

  13. Assessment of the dose distribution inside a cardiac cath lab using TLD measurements and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Baptista, M.; Teles, P.; Cardoso, G.; Vaz, P.

    2014-11-01

    Over the last decade, there was a substantial increase in the number of interventional cardiology procedures worldwide, and the corresponding ionizing radiation doses for both the medical staff and patients became a subject of concern. Interventional procedures in cardiology are normally very complex, resulting in long exposure times. Also, these interventions require the operator to work near the patient and, consequently, close to the primary X-ray beam. Moreover, due to the scattered radiation from the patient and the equipment, the medical staff is also exposed to a non-uniform radiation field that can lead to a significant exposure of sensitive body organs and tissues, such as the eye lens, the thyroid and the extremities. In order to better understand the spatial variation of the dose and dose rate distributions during an interventional cardiology procedure, the dose distribution around a C-arm fluoroscopic system, in operation in a cardiac cath lab at a Portuguese hospital, was estimated using both Monte Carlo (MC) simulations and dosimetric measurements. To model and simulate the cardiac cath lab, including the fluoroscopic equipment used to execute interventional procedures, the state-of-the-art MC radiation transport code MCNPX 2.7.0 was used. Subsequently, Thermo-Luminescent Detector (TLD) measurements were performed in order to validate and support the simulation results obtained for the cath lab model. The preliminary results presented in this study reveal that the cardiac cath lab model was successfully validated, given the good agreement between MC calculations and TLD measurements. The simulated isodose curves for the C-arm fluoroscopic system are also consistent with the dosimetric information provided by the equipment manufacturer (Siemens). The adequacy of the implemented computational model used to simulate complex procedures and map dose distributions around the operator and the medical staff is discussed in view of the optimization principle (and the associated ALARA objective), one of the pillars of the international system of radiological protection.

  14. On predicting contamination levels of HALOE optics aboard UARS using direct simulation Monte Carlo

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Rault, Didier F. G.

    1993-01-01

    A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flowfield and surface conditions and geometric orientations in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. Problems resolving species outgassing and vent flux rates that varied over many orders of magnitude were handled using species weighting factors. Results relating to contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface are presented, along with data related to code performance. Using procedures developed in standard contamination analyses, the cumulative level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated to be about 2700 Å.

  15. Conjuring the ghosts of missing children: a Monte Carlo simulation of reproductive restraint in Tokugawa Japan.

    PubMed

    Drixler, Fabian F

    2015-04-01

    This article quantifies the frequency of infanticide and abortion in one region of Japan by comparing observed fertility in a sample of 4.9 million person-years (1660-1872) with a Monte Carlo simulation of how many conceptions and births that population should have experienced. The simulation uses empirical values for the determinants of fertility from Eastern Japan itself as well as the best available studies of comparable populations. This procedure reveals that in several decades of the eighteenth century, at least 40% of pregnancies must have ended in either an induced abortion or an infanticide. In addition, the simulation results imply a rapid decline in the incidence of infanticide and abortion during the nineteenth century, when in a reverse fertility transition, this premodern family-planning regime gave way to a new age of large families.

  16. Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne

    We present a new approach to improving the convergence of Monte Carlo (MC) simulations of molecular systems with complex energy landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase-space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model of molten linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies can offer interesting physical insight into the particular systems simulated. This last aspect should provide a new tool for designing more efficient MC moves.
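
    The core reallocation idea can be sketched compactly: each move type accumulates an acceptance-based efficiency score, and the move frequencies are periodically re-normalised in proportion to those scores. This is a minimal single-process sketch with assumed per-move acceptance probabilities; the paper's evolutionary algorithm over parallel simulations and its specific sampling criteria are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    moves = ["translate", "rotate", "reptate"]      # move types (illustrative)
    freq = np.full(len(moves), 1.0 / len(moves))    # initial frequencies
    attempts = np.zeros(len(moves))
    accepts = np.zeros(len(moves))

    for sweep in range(1000):
        k = rng.choice(len(moves), p=freq)
        # Stand-in for a real MC move: assumed acceptance probabilities.
        accepted = rng.random() < (0.2, 0.5, 0.8)[k]
        attempts[k] += 1.0
        accepts[k] += float(accepted)
        if (sweep + 1) % 100 == 0:                  # periodic re-allocation
            eff = (accepts + 1e-3) / (attempts + 1.0)  # smoothed efficiency
            freq = eff / eff.sum()                  # keeps every move alive

    print(dict(zip(moves, freq.round(3))))
    ```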

  17. Modeling of body tissues for Monte Carlo simulation of radiotherapy treatments planned with conventional x-ray CT systems

    NASA Astrophysics Data System (ADS)

    Kanematsu, Nobuyuki; Inaniwa, Taku; Nakao, Minoru

    2016-07-01

    In the conventional procedure for accurate Monte Carlo simulation of radiotherapy, a CT number given to each pixel of a patient image is directly converted to mass density and elemental composition using their respective functions that have been calibrated specifically for the relevant x-ray CT system. We propose an alternative approach that is a conversion in two steps: the first from CT number to density and the second from density to composition. Based on the latest compilation of standard tissues for reference adult male and female phantoms, we sorted the standard tissues into groups by mass density and defined the representative tissues by averaging the material properties per group. With these representative tissues, we formulated polyline relations between mass density and each of the following: electron density, stopping-power ratio and elemental densities. We also revised a procedure of stoichiometric calibration for CT-number conversion and demonstrated the two-step conversion method for a theoretically emulated CT system with hypothetical 80 keV photons. For the standard tissues, high correlation was generally observed between mass density and the other densities, excluding those of C and O for the light spongiosa tissues between 1.0 g cm^-3 and 1.1 g cm^-3 occupying 1% of the human body mass. The polylines fitted to the dominant tissues were generally consistent with similar formulations in the literature. The two-step conversion procedure was demonstrated to be practical and will potentially facilitate Monte Carlo simulation for treatment planning and for retrospective analysis of treatment plans with little impact on the management of planning CT systems.
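
    A minimal sketch of the two-step conversion itself: CT number is first mapped to mass density through a scanner-specific calibration curve, and density is then mapped to, for example, relative electron density through a polyline (piecewise-linear) relation. All node values below are illustrative placeholders, not the paper's fitted polylines.

    ```python
    import numpy as np

    # Step 1: CT number (HU) to mass density (g/cm^3), calibrated per system.
    hu_nodes = np.array([-1000.0, 0.0, 100.0, 1600.0])
    rho_nodes = np.array([0.0012, 1.00, 1.10, 1.85])

    # Step 2: mass density to electron density relative to water (polyline).
    rho_pts = np.array([0.0012, 0.95, 1.05, 1.85])
    ed_pts = np.array([0.0011, 0.95, 1.04, 1.70])

    def ct_to_electron_density(hu):
        rho = np.interp(hu, hu_nodes, rho_nodes)   # CT number -> density
        return np.interp(rho, rho_pts, ed_pts)     # density -> composition

    print(ct_to_electron_density(np.array([-50.0, 40.0, 900.0])))
    ```

    Because only the first step depends on the CT system, recalibrating a scanner leaves the density-to-composition polylines untouched, which is the practical advantage claimed above.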

  18. Study on the measuring distance for blood glucose infrared spectral measuring by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Li, Xiang

    2016-10-01

    Blood glucose monitoring is of great importance for controlling the progression of diabetes and preventing its complications. At present, clinical blood glucose concentration measurement is invasive and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectral measurement, the measurement distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use Monte Carlo simulation to find the point where most photons emerge. There is a problem, however: the epidermal layer contains no arteries, veins or capillary vessels, so photons propagating and interacting with tissue in the epidermal layer acquire no glucose information. A new criterion for determining the optimal distance, named the effective path length in this paper, is therefore proposed. The path length of each photon travelling in the dermis is recorded while running the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons reaching each point is calculated, and the detector should be placed at the point with the largest total effective path length. The optimal measuring distance between the transmitting fiber and the detector is thereby determined.
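
    The effective-path-length criterion can be illustrated with a toy layered-slab random walk in which only the path travelled inside the dermis is credited to the radial exit position of each photon. The optical coefficient, layer thicknesses and isotropic scattering below are illustrative assumptions (real tissue scattering is strongly forward-peaked), not the paper's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    mu_t = 1.0                  # total interaction coefficient, 1/mm (assumed)
    epi, bottom = 0.1, 2.0      # epidermis thickness, slab depth, mm (assumed)
    n_photon, n_bin, dr = 20000, 30, 0.1
    tally = np.zeros(n_bin)     # dermis-only ("effective") path vs exit radius

    for _ in range(n_photon):
        pos = np.zeros(3)
        u = np.array([0.0, 0.0, 1.0])            # launched normally into skin
        eff_path = 0.0
        for _ in range(200):
            step = -np.log(rng.random()) / mu_t  # free path to next scattering
            if epi <= pos[2] <= bottom:          # crude: credit the whole step
                eff_path += step                 # to the layer where it began
            pos = pos + step * u
            if pos[2] < 0.0:                     # escaped through the surface
                b = min(int(np.hypot(pos[0], pos[1]) / dr), n_bin - 1)
                tally[b] += eff_path
                break
            if pos[2] > bottom:                  # lost below the slab
                break
            cost = 2.0 * rng.random() - 1.0      # isotropic scatter (assumed)
            phi = 2.0 * np.pi * rng.random()
            sint = np.sqrt(1.0 - cost * cost)
            u = np.array([sint * np.cos(phi), sint * np.sin(phi), cost])

    print("bin with the largest effective path length:", tally.argmax())
    ```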

  19. Monte Carlo modeling of time-resolved fluorescence for depth-selective interrogation of layered tissue.

    PubMed

    Pfefer, T Joshua; Wang, Quanzeng; Drezek, Rebekah A

    2011-11-01

    Computational approaches for simulation of light-tissue interactions have provided extensive insight into biophotonic procedures for diagnosis and therapy. However, few studies have addressed simulation of time-resolved fluorescence (TRF) in tissue and none have combined Monte Carlo simulations with standard TRF processing algorithms to elucidate approaches for cancer detection in layered biological tissue. In this study, we investigate how illumination-collection parameters (e.g., collection angle and source-detector separation) influence the ability to measure fluorophore lifetime and tissue layer thickness. Decay curves are simulated with a Monte Carlo TRF light propagation model. Multi-exponential iterative deconvolution is used to determine lifetimes and fractional signal contributions. The ability to detect changes in mucosal thickness is optimized by probes that selectively interrogate regions superficial to the mucosal-submucosal boundary. Optimal accuracy in simultaneous determination of lifetimes in both layers is achieved when each layer contributes 40-60% of the signal. These results indicate that depth-selective approaches to TRF have the potential to enhance disease detection in layered biological tissue and that modeling can play an important role in probe design optimization. Published by Elsevier Ireland Ltd.

  20. Monte Carlo simulation of a β-γ coincidence system using plastic scintillators in 4π geometry

    NASA Astrophysics Data System (ADS)

    Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.

    2007-09-01

    A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory in IPEN, São Paulo, Brazil, has been applied to simulating a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons were calculated previously with the Penelope code and applied as input data to Esquema. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the observed disintegration rate can be simulated as a function of the beta efficiency parameter. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without the need for conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented and shows results in good agreement with a conventional 4πβ(PC)-γ coincidence system based on a proportional counter.

  1. Adjoint acceleration of Monte Carlo simulations using TORT/MCNP coupling approach: a case study on the shielding improvement for the cyclotron room of the Buddhist Tzu Chi General Hospital.

    PubMed

    Sheu, R J; Sheu, R D; Jiang, S H; Kao, C H

    2005-01-01

    Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original inadequate maze design. Variance reduction techniques are indispensable in this study to facilitate the simulations needed to test a variety of shielding modification configurations. The TORT/MCNP manual coupling approach based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology has been used throughout this study. The CADIS methodology utilises source and transport biasing in a consistent manner. With this method, the computational efficiency was increased significantly, by more than two orders of magnitude, and the statistical convergence was also improved compared to the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations and the calculation results for the original and improved shielding designs. In order to verify the calculation results and seek additional accelerations, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted.

  2. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to reduce the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks with wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensemble runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
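
    The benefit of ensemble averaging is easy to demonstrate in isolation: averaging N independent noisy samples of the same unsteady field reduces the statistical scatter roughly as 1/sqrt(N), consistent with the 2.5-3.3x reduction reported above for 10 runs. The toy "run" below is an illustrative stand-in for a full DSMC computation, not the PDSC/DREAM code.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    x = np.linspace(0.0, 1.0, 50)
    true_field = np.tanh(10 * (x - 0.5))     # e.g. a developing shock profile

    def one_run():
        # Stand-in for one unsteady DSMC sample with statistical scatter.
        return true_field + rng.normal(0.0, 0.3, x.size)

    single = one_run()
    ensemble = np.mean([one_run() for _ in range(10)], axis=0)
    rms = lambda f: np.sqrt(np.mean((f - true_field) ** 2))
    print(f"rms scatter, single run : {rms(single):.3f}")
    print(f"rms scatter, 10-run mean: {rms(ensemble):.3f}")   # ~1/sqrt(10)
    ```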

  3. Effects of Average Signed Area Between Two Item Characteristic Curves and Test Purification Procedures on the DIF Detection via the Mantel-Haenszel Method

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Su, Ya-Hui

    2004-01-01

    In this study we investigated the effects of the average signed area (ASA) between the item characteristic curves of the reference and focal groups and three test purification procedures on the uniform differential item functioning (DIF) detection via the Mantel-Haenszel (M-H) method through Monte Carlo simulations. The results showed that ASA,…

  4. Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model

    ERIC Educational Resources Information Center

    Lamsal, Sunil

    2015-01-01

    Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…

  5. Management systems research study

    NASA Technical Reports Server (NTRS)

    Bruno, A. V.

    1975-01-01

    The development of a Monte Carlo simulation of procurement activities at the NASA Ames Research Center is described. Data cover: simulation of the procurement cycle; construction of a performance evaluation model; examination of employee development procedures; review of evaluation criteria for divisional and individual performance evaluation; determination of the influence and apparent impact of contract type and structure; and development of a management control system for planning and controlling manpower requirements.

  6. Modeling scintillator and WLS fiber signals for fast Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Sánchez, F. A.; Medina-Tanco, G.

    2010-08-01

    In this work we present a fast, robust and flexible procedure to simulate electronic signals of scintillator units: plastic scintillator material embedded with a wavelength shifter optical fiber coupled to a photo-multiplier tube which, in turn, is plugged into a front-end electronic board. The simple rationale behind the simulation chain allows the procedure to be adapted to a broad range of detectors based on such units. We show that, in order to produce realistic results, the simulation parameters can be properly calibrated against laboratory measurements and used thereafter as input to the simulations. Simulated signals of atmospheric background cosmic-ray muons are presented, and their main features are analyzed and validated using actual measured data. Conversely, for any given practical application, the present simulation scheme can be used to find an adequate combination of photo-multiplier tube and optical fiber at the prototyping stage.

  7. Bootstrapping Confidence Intervals for Robust Measures of Association.

    ERIC Educational Resources Information Center

    King, Jason E.

    A Monte Carlo simulation study was conducted to determine the bootstrap correction formula yielding the most accurate confidence intervals for robust measures of association. Confidence intervals were generated via the percentile, adjusted, BC, and BC(a) bootstrap procedures and applied to the Winsorized, percentage bend, and Pearson correlation…

  8. Can the prevalence of high blood drug concentrations in a population be estimated by analysing oral fluid? A study of tetrahydrocannabinol and amphetamine.

    PubMed

    Gjerde, Hallvard; Verstraete, Alain

    2010-02-25

    To study several methods for estimating the prevalence of high blood concentrations of tetrahydrocannabinol and amphetamine in a population of drug users by analysing oral fluid (saliva). Five methods were compared, including simple calculation procedures dividing the drug concentrations in oral fluid by average or median oral fluid/blood (OF/B) drug concentration ratios or linear regression coefficients, and more complex Monte Carlo simulations. Populations of 311 cannabis users and 197 amphetamine users from the Rosita-2 Project were studied. The results of a feasibility study suggested that the Monte Carlo simulations might give better accuracies than simple calculations if good data on OF/B ratios is available. If using only 20 randomly selected OF/B ratios, a Monte Carlo simulation gave the best accuracy but not the best precision. Dividing by the OF/B regression coefficient gave acceptable accuracy and precision, and was therefore the best method. None of the methods gave acceptable accuracy if the prevalence of high blood drug concentrations was less than 15%. Dividing the drug concentration in oral fluid by the OF/B regression coefficient gave an acceptable estimation of high blood drug concentrations in a population, and may therefore give valuable additional information on possible drug impairment, e.g. in roadside surveys of drugs and driving. If good data on the distribution of OF/B ratios are available, a Monte Carlo simulation may give better accuracy. 2009 Elsevier Ireland Ltd. All rights reserved.
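
    The two families of methods compared above can be sketched side by side: a simple estimate divides each oral-fluid concentration by the median OF/B ratio, while a Monte Carlo estimate samples repeatedly from an assumed OF/B distribution. The lognormal distributions and the cut-off below are illustrative assumptions, not the Rosita-2 data.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n = 300
    blood = rng.lognormal(0.0, 0.8, n)                       # true blood conc.
    ofb = rng.lognormal(np.log(4.0), 0.7, n)                 # OF/B ratios
    oral = blood * ofb                                       # measured oral fluid
    cut = 2.0
    true_prev = np.mean(blood > cut)

    # Simple method: divide by the median OF/B ratio.
    simple_prev = np.mean(oral / np.median(ofb) > cut)

    # Monte Carlo method: sample ratios from the assumed OF/B distribution.
    draws = rng.lognormal(np.log(4.0), 0.7, size=(200, n))
    mc_prev = np.mean(oral / draws > cut)

    print(f"true {true_prev:.3f}, simple {simple_prev:.3f}, MC {mc_prev:.3f}")
    ```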

  9. Analysis and modeling of localized heat generation by tumor-targeted nanoparticles (Monte Carlo methods)

    NASA Astrophysics Data System (ADS)

    Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan

    2016-04-01

    We investigated tumor-targeted nanoparticles that influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (mean cosine of the scattering angle), the heat generated in the tumor area reaches a critical level that can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, will be remarkably high. The light-matter interactions and the trajectories of incident photons in targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical technique by which photon trajectories can be accurately tracked through the simulation region.

  10. Computer simulation of surface and film processes

    NASA Technical Reports Server (NTRS)

    Tiller, W. A.

    1981-01-01

    A molecular dynamics technique based upon Lennard-Jones type pair interactions is used to investigate time-dependent as well as equilibrium properties. The case study deals with systems containing Si and O atoms. In this case a more involved potential energy function (PEF) is employed and the system is simulated via a Monte-Carlo procedure. This furnishes the equilibrium properties of the system at its interfaces and surfaces as well as in the bulk.

  11. Direct simulation Monte Carlo prediction of on-orbit contaminant deposit levels for HALOE

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Rault, Didier F. G.

    1994-01-01

    A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flow field and surface conditions and geometric orientations for the satellite in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. A detailed description of the adaptation of this solution method to the study of the satellite's environment is also presented. Results pertaining to the satellite's environment are presented regarding contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface, along with data related to code performance. Using procedures developed in standard contamination analyses, along with many worst-case assumptions, the cumulative upper-limit level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated at about 13,350 Å.

  12. A method to reproduce alpha-particle spectra measured with semiconductor detectors.

    PubMed

    Timón, A Fernández; Vargas, M Jurado; Sánchez, A Martín

    2010-01-01

    A method is proposed to reproduce alpha-particle spectra measured with silicon detectors, combining analytical and computer simulation techniques. The procedure includes the use of the Monte Carlo method to simulate the tracks of alpha-particles within the source and in the detector entrance window. The alpha-particle spectrum is finally obtained by the convolution of this simulated distribution and the theoretical distributions representing the contributions of the alpha-particle spectrometer to the spectrum. Experimental spectra from (233)U and (241)Am sources were compared with the predictions given by the proposed procedure, showing good agreement. The proposed method can be an important aid for the analysis and deconvolution of complex alpha-particle spectra. Copyright 2009 Elsevier Ltd. All rights reserved.

  13. Accuracy and convergence of coupled finite-volume/Monte Carlo codes for plasma edge simulations of nuclear fusion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.

    2016-10-01

    The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins-Monro. Also, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order of magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.

  14. Signal-Detection Analyses of Conditional Discrimination and Delayed Matching-to-Sample Performance

    ERIC Educational Resources Information Center

    Alsop, Brent

    2004-01-01

    Quantitative analyses of stimulus control and reinforcer control in conditional discriminations and delayed matching-to-sample procedures often encounter a problem; it is not clear how to analyze data when subjects have not made errors. The present article examines two common methods for overcoming this problem. Monte Carlo simulations of…

  15. Estimating Uncertainty in Annual Forest Inventory Estimates

    Treesearch

    Ronald E. McRoberts; Veronica C. Lessard

    1999-01-01

    The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...

  16. MCNP modelling of scintillation-detector gamma-ray spectra from natural radionuclides.

    PubMed

    Hendriks, P H G M; Maucec, M; de Meijer, R J

    2002-09-01

    Gamma-ray spectra of natural radionuclides are simulated for a BGO detector in a borehole geometry using the Monte Carlo code MCNP. All gamma-ray emissions of the decay of 40K and the series of 232Th and 238U are used to describe the source. A procedure is proposed which excludes the time-consuming electron tracking in less relevant areas of the geometry. The simulated gamma-ray spectra are benchmarked against laboratory data.

  17. Dynamics of Electronically Excited Species in Gaseous and Condensed Phase

    DTIC Science & Technology

    1989-12-01

    …heatbath models of condensed phase helium, (3) development of models of condensed phase hydrogen, and (4) development of simulation procedures for solution… [The remainder of this record is table-of-contents residue; recoverable section titles: Modelling and Computer Experiments; Monte Carlo Simulations of Helium Bubble States; Heatbath Models for Helium Bubble States; illustrations include He-He* potential energy curves and couplings for a two-state model, the cross section for He(1P) quenching to He(3S), and opacity data.]

  18. Conditional Monte Carlo randomization tests for regression models.

    PubMed

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
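
    A design-based Monte Carlo randomization test of this kind can be sketched as follows: the observed statistic (here a mean difference in residuals) is referred to its distribution over freshly generated randomization sequences from the same design, illustrated for a permuted-block design. The data, block size and statistic below are illustrative assumptions, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, block = 40, 4
    resid = rng.normal(size=n)            # residuals from some fitted model

    def permuted_block_assignment():
        # Equal allocation within each block of size 4, randomly ordered.
        seq = []
        for _ in range(n // block):
            b = [0] * (block // 2) + [1] * (block // 2)
            rng.shuffle(b)
            seq.extend(b)
        return np.array(seq)

    obs = permuted_block_assignment()
    stat = resid[obs == 1].mean() - resid[obs == 0].mean()

    # Monte Carlo reference distribution under the same randomization design.
    null = [(lambda a: resid[a == 1].mean() - resid[a == 0].mean())
            (permuted_block_assignment()) for _ in range(5000)]
    p = np.mean(np.abs(null) >= abs(stat))
    print("two-sided Monte Carlo p-value:", p)
    ```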

  19. Feature Screening for Ultrahigh Dimensional Categorical Data with Applications.

    PubMed

    Huang, Danyang; Li, Runze; Wang, Hansheng

    2014-01-01

    Ultrahigh dimensional data with both categorical responses and categorical covariates are frequently encountered in the analysis of big data, for which feature screening has become an indispensable statistical tool. We propose a Pearson chi-square based feature screening procedure for categorical response with ultrahigh dimensional categorical covariates. The proposed procedure can be directly applied for detection of important interaction effects. We further show that the proposed procedure possesses the screening consistency property in the terminology of Fan and Lv (2008). We investigate the finite sample performance of the proposed procedure by Monte Carlo simulation studies, and illustrate the proposed method with two empirical datasets.
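
    A minimal sketch of chi-square based screening: each categorical covariate is scored by its Pearson chi-square statistic against the categorical response, and the highest-scoring features are retained. The data dimensions and the planted informative feature below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 500, 1000
    y = rng.integers(0, 2, n)                      # binary response
    X = rng.integers(0, 3, (n, p))                 # categorical covariates
    X[:, 7] = (y + rng.integers(0, 2, n)) % 3      # plant one informative one

    def chi2_score(x, y):
        obs = np.zeros((x.max() + 1, y.max() + 1))
        for xi, yi in zip(x, y):
            obs[xi, yi] += 1.0
        exp = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
        return float(((obs - exp) ** 2 / np.where(exp > 0, exp, 1.0)).sum())

    scores = np.array([chi2_score(X[:, j], y) for j in range(p)])
    print("top-ranked features:", np.argsort(scores)[::-1][:5])
    ```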

  20. Costing the satellite power system

    NASA Technical Reports Server (NTRS)

    Hazelrigg, G. A., Jr.

    1978-01-01

    The paper presents a methodology for satellite power system costing, places approximate limits on the accuracy possible in cost estimates made at this time, and outlines the use of probabilistic cost information in support of the decision-making process. Reasons for using probabilistic costing or risk analysis procedures instead of standard deterministic costing procedures are considered. Components of cost, cost estimating relationships, grass-roots costing, and risk analysis are discussed. Risk analysis using a Monte Carlo simulation model is used to estimate future costs.

  1. The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions.

    PubMed

    MacKenzie, Scott B; Podsakoff, Philip M; Jarvis, Cheryl Burke

    2005-07-01

    The purpose of this study was to review the distinction between formative- and reflective-indicator measurement models, articulate a set of criteria for deciding whether measures are formative or reflective, illustrate some commonly researched constructs that have formative indicators, empirically test the effects of measurement model misspecification using a Monte Carlo simulation, and recommend new scale development procedures for latent constructs with formative indicators. Results of the Monte Carlo simulation indicated that measurement model misspecification can inflate unstandardized structural parameter estimates by as much as 400% or deflate them by as much as 80% and lead to Type I or Type II errors of inference, depending on whether the exogenous or the endogenous latent construct is misspecified. Implications of this research are discussed. Copyright 2005 APA, all rights reserved.

  2. Modeling the Hyperdistribution of Item Parameters To Improve the Accuracy of Recovery in Estimation Procedures.

    ERIC Educational Resources Information Center

    Matthews-Lopez, Joy L.; Hombo, Catherine M.

    The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…

  3. Strength validation and fire endurance of glued-laminated timber beams

    Treesearch

    E. L. Schaffer; C. M. Marx; D. A. Bender; F. E. Woeste

    A previous paper presented a reliability-based model to predict the strength of glued-laminated timber beams at both room temperature and during fire exposure. This Monte Carlo simulation procedure generates strength and fire endurance (time-to-failure, TTF) data for glued- laminated beams that allow assessment of mean strength and TTF as well as their variability....

  4. Reexamining the Impact of Nonnormality in Two-Group Comparison Procedures

    ERIC Educational Resources Information Center

    Kang, Yoonjeong; Harring, Jeffrey R.; Li, Ming

    2015-01-01

    The authors performed a Monte Carlo simulation to empirically investigate the robustness and power of 4 methods in testing mean differences for 2 independent groups under conditions in which 2 populations may not demonstrate the same pattern of nonnormality. The approaches considered were the t test, Wilcoxon rank-sum test, Welch-James test with…

  5. Probabilistic Fiber Composite Micromechanics

    NASA Technical Reports Server (NTRS)

    Stock, Thomas A.

    1996-01-01

    Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. The variables in which uncertainties are accounted for include constituent and void volume ratios, constituent elastic properties and strengths, and fiber misalignment. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material property variations induced by random changes expected at the material micro level. Regression results are presented to show the relative correlation between predictor and response variables in the study. These computational procedures make possible a formal description of anticipated random processes at the intra-ply level, and the related effects of these on composite properties.
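
    A minimal sketch of this style of probabilistic micromechanics: constituent properties are sampled from assumed distributions, a ply-level property is computed from a simple rule of mixtures, and a regression on standardized predictors indicates which input uncertainty dominates the response. The distribution parameters below are illustrative, not the report's graphite/epoxy data.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 2000
    Ef = rng.normal(230.0, 230.0 * 0.05, n)        # fiber modulus, GPa
    Em = rng.normal(3.5, 3.5 * 0.05, n)            # matrix modulus, GPa
    vf = np.clip(rng.normal(0.60, 0.02, n), 0, 1)  # fiber volume ratio

    E11 = vf * Ef + (1.0 - vf) * Em                # longitudinal ply modulus

    # Regression of the response on standardized predictors: the coefficient
    # magnitudes play the role of the "relative correlation" reported above.
    Z = np.column_stack([(v - v.mean()) / v.std() for v in (Ef, Em, vf)])
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Z]), E11, rcond=None)
    print("relative sensitivities (Ef, Em, vf):", coef[1:].round(3))
    ```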

  6. Ant colony algorithm implementation in electron and photon Monte Carlo transport: application to the commissioning of radiosurgery photon beams.

    PubMed

    García-Pareja, S; Galán, P; Manzano, F; Brualla, L; Lallena, A M

    2010-07-01

    In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within approximately 3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to examine simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
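
    Two of the variance-reduction techniques named above, splitting and Russian roulette, can be sketched as a weight-preserving population adjustment driven by per-region importance values (here simply assumed, whereas the paper derives the importance map with an ant colony algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def adjust_population(weights, imp_old, imp_new):
        """Split or roulette particle weights crossing importance regions."""
        out = []
        r = imp_new / imp_old
        for w in weights:
            if r >= 1.0:                           # splitting
                n = int(r) + (rng.random() < r - int(r))
                out.extend([w / r] * n)
            elif rng.random() < r:                 # Russian roulette survivor
                out.append(w / r)
        return out

    pop = [1.0] * 1000
    pop = adjust_population(pop, imp_old=1.0, imp_new=4.0)  # important region
    print(len(pop), "particles, total weight", round(sum(pop), 2))
    ```

    Both branches preserve the total statistical weight in expectation, which is why such games reduce variance without biasing the estimates.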

  7. Thermodynamics and simulation of hard-sphere fluid and solid: Kinetic Monte Carlo method versus standard Metropolis scheme

    NASA Astrophysics Data System (ADS)

    Ustinov, E. A.

    2017-01-01

    The paper aims at a comparison of techniques based on the kinetic Monte Carlo (kMC) and the conventional Metropolis Monte Carlo (MC) methods as applied to the hard-sphere (HS) fluid and solid. In the case of the kMC, an alternative representation of the chemical potential is explored [E. A. Ustinov and D. D. Do, J. Colloid Interface Sci. 366, 216 (2012)], which does not require any external procedure like the Widom test particle insertion method. A direct evaluation of the chemical potential of the fluid and solid without thermodynamic integration is achieved by molecular simulation in an elongated box with an external potential imposed on the system in order to reduce the particle density in the vicinity of the box ends. The existence of rarefied zones allows one to determine the chemical potential of the crystalline phase and substantially increases its accuracy for the disordered dense phase in the central zone of the simulation box. This method is applicable to both the Metropolis MC and the kMC, but in the latter case, the chemical potential is determined with higher accuracy at the same conditions and the number of MC steps. Thermodynamic functions of the disordered fluid and crystalline face-centered cubic (FCC) phase for the hard-sphere system have been evaluated with the kinetic MC and the standard MC coupled with the Widom procedure over a wide range of density. The melting transition parameters have been determined by the point of intersection of the pressure-chemical potential curves for the disordered HS fluid and FCC crystal using the Gibbs-Duhem equation as a constraint. A detailed thermodynamic analysis of the hard-sphere fluid has provided a rigorous verification of the approach, which can be extended to more complex systems.
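
    For reference, the standard Metropolis scheme for hard spheres reduces to overlap rejection, since the hard-core Boltzmann factor is either 0 or 1. Below is a minimal periodic-box sketch with illustrative size, density and step length; the paper's kMC variant and chemical-potential evaluation are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N, L, d = 64, 6.0, 1.0                    # particles, box edge, diameter
    g = np.arange(4) * (L / 4)                # 4x4x4 lattice start, no overlap
    pos = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T.astype(float)

    def overlaps(i, trial):
        dr = pos - trial
        dr -= L * np.round(dr / L)            # minimum-image convention
        r2 = np.einsum('ij,ij->i', dr, dr)
        r2[i] = np.inf                        # ignore the moved particle
        return bool(np.any(r2 < d * d))

    accepted, n_steps = 0, 20000
    for _ in range(n_steps):
        i = rng.integers(N)
        trial = (pos[i] + rng.uniform(-0.2, 0.2, 3)) % L
        if not overlaps(i, trial):            # hard cores: accept iff no overlap
            pos[i] = trial
            accepted += 1
    print("acceptance ratio:", accepted / n_steps)
    ```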

  8. High-efficiency wavefunction updates for large scale Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed

    Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
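
    The rank-1 Sherman-Morrison update that the proposed rank-k scheme generalizes can be written in a few lines: when row r of the matrix changes, the determinant ratio needed for the acceptance probability is a single dot product, and the inverse is refreshed in O(N^2) rather than recomputed in O(N^3). The matrix size and random trial row below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    N = 200
    A = rng.normal(size=(N, N))               # stand-in for a Slater matrix
    Ainv = np.linalg.inv(A)

    r = 17
    v = rng.normal(size=N)                    # proposed new row (one move)
    R = v @ Ainv[:, r]                        # determinant ratio det(A')/det(A)

    # Accept: Sherman-Morrison update of the inverse, O(N^2).
    u = Ainv[:, r].copy() / R
    w = v @ Ainv
    Ainv -= np.outer(u, w)
    Ainv[:, r] += u
    A[r] = v

    print(np.allclose(Ainv, np.linalg.inv(A)))   # verifies the update
    ```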

  9. Proton Linear Energy Transfer measurement using Emulsion Cloud Chamber

    NASA Astrophysics Data System (ADS)

    Shin, Jae-ik; Park, Seyjoon; Kim, Haksoo; Kim, Meyoung; Jeong, Chiyoung; Cho, Sungkoo; Lim, Young Kyung; Shin, Dongho; Lee, Se Byeong; Morishima, Kunihiro; Naganawa, Naotaka; Sato, Osamu; Kwak, Jungwon; Kim, Sung Hyun; Cho, Jung Sook; Ahn, Jung Keun; Kim, Ji Hyun; Yoon, Chun Sil; Incerti, Sebastien

    2015-04-01

    This study proposes to determine the correlation between the Volume Pulse Height (VPH) measured by nuclear emulsion and the Linear Energy Transfer (LET) calculated by Monte Carlo simulation based on Geant4. The nuclear emulsion was irradiated at the National Cancer Center (NCC) with a therapeutic proton beam and was installed at a distance of 5.2 m from the beam nozzle structure; water-equivalent material (PMMA) blocks of various thicknesses were used to position it at specific depths along the Bragg curve. After the beam exposure and development of the emulsion films, the films were scanned by the S-UTS system developed at Nagoya University. The proton tracks in the scanned films were reconstructed using the 'NETSCAN' method. Through this procedure, the VPH can be derived for each reconstructed proton track at each position along the Bragg curve. The VPH value indicates the magnitude of energy loss along a proton track. By comparison with the simulation results obtained using Geant4, we found the correlation between the LET calculated by Monte Carlo simulation and the VPH measured by the nuclear emulsion.

  10. Monte Carlo reference data sets for imaging research: Executive summary of the report of AAPM Research Committee Task Group 195.

    PubMed

    Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C

    2015-10-01

    The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.

  11. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, A; Zbijewski, W; Bolch, W

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work to replicate the simulation in detail. To aid in this process, AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and "sparse sampling" will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high-performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.

  12. A Monte Carlo model for the internal dosimetry of choroid plexuses in nuclear medicine procedures.

    PubMed

    Amato, Ernesto; Cicone, Francesco; Auditore, Lucrezia; Baldari, Sergio; Prior, John O; Gnesin, Silvano

    2018-05-01

    Choroid plexuses are vascular structures located in the brain ventricles, showing specific uptake of some diagnostic and therapeutic radiopharmaceuticals currently under clinical investigation, such as integrin-binding arginine-glycine-aspartic acid (RGD) peptides. No specific geometry for choroid plexuses has been implemented in commercially available software for internal dosimetry. The aims of the present study were to assess the dependence of the absorbed dose to the choroid plexuses on the organ geometry implemented in Monte Carlo simulations, and to propose an analytical model for the internal dosimetry of these structures for the (18)F, (64)Cu, (67)Cu, (68)Ga, (90)Y, (131)I and (177)Lu nuclides. A GAMOS Monte Carlo simulation based on direct organ segmentation was taken as the gold standard to validate a second simulation based on a simplified geometrical model of the choroid plexuses. Both simulations were compared with the OLINDA/EXM sphere model. The gold standard and the simplified geometrical model gave similar dosimetry results (dose difference < 3.5%), indicating that the latter can be considered a satisfactory approximation of the real geometry. In contrast, the sphere model systematically overestimated the absorbed dose compared to both Monte Carlo models (range: 4-50% dose difference), depending on the isotope energy and organ mass. Therefore, the simplified geometric model was adopted to introduce an analytical approach for choroid plexus dosimetry in the mass range 2-16 g. The proposed model enables the estimation of the choroid plexus dose by a simple bi-parametric function, once the organ mass and the residence time of the radiopharmaceutical under investigation are provided. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. Aerodynamic characterization of the jet of an arc wind tunnel

    NASA Astrophysics Data System (ADS)

    Zuppardi, Gennaro; Esposito, Antonio

    2016-11-01

    It is well known that, due to the very aggressive environment and the rather high rarefaction level of an arc wind tunnel jet, the measurement of fluid-dynamic parameters is difficult. For this reason, the aerodynamic characterization of the jet also relies on computer codes simulating the operation of the tunnel. The present authors have already used such a computing procedure successfully for tests in the SPES arc wind tunnel in Naples, Italy. In the present work an improved procedure is proposed. Like the former procedure, the present one relies on two codes working in tandem: 1) a one-dimensional code simulating the inviscid, thermally non-conducting flow field in the torch, in the mixing chamber and in the nozzle up to the position, along the nozzle axis, of the continuum breakdown; 2) a Direct Simulation Monte Carlo (DSMC) code simulating the flow field in the remaining part of the nozzle. In the present procedure, the DSMC simulation covers both the nozzle and the test chamber. An interesting problem considered in this paper by means of the present procedure is the simulation of the flow field around a Pitot tube and of the related measurement of the stagnation pressure. The measured stagnation pressure, under rarefied conditions, may be as much as four times the theoretical value, so a substantial correction has to be applied to the measured pressure. In the present paper a correction factor for the stagnation pressure measured in SPES is proposed. The analysis relies on twelve tests made in SPES.

  14. Monte Carlo Assessments of Absorbed Doses to the Hands of Radiopharmaceutical Workers Due to Photon Emitters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Dan; Eckerman, Keith F; Karagiannis, Harriet

    This paper describes the characterization of radiation doses to the hands of nuclear medicine technicians resulting from the handling of radiopharmaceuticals. Radiation monitoring using ring dosimeters indicates that finger dosimeters that are used to show compliance with applicable regulations may overestimate or underestimate radiation doses to the skin depending on the nature of the particular procedure and the radionuclide being handled. To better understand the parameters governing the absorbed dose distributions, a detailed model of the hands was created and used in Monte Carlo simulations of selected nuclear medicine procedures. Simulations of realistic configurations typical for workers handling radiopharmaceuticals were performed for a range of energies of the source photons. The lack of charged-particle equilibrium necessitated full photon-electron coupled transport calculations. The results show that the dose to different regions of the fingers can differ substantially from dosimeter readings when dosimeters are located at the base of the finger. We tried to identify consistent patterns that relate the actual dose to the dosimeter readings. These patterns depend on the specific work conditions and can be used to better assess the absorbed dose to different regions of the exposed skin.

  15. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
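
    As a concrete illustration of the bootstrap idea mentioned above, the short Python sketch below computes a percentile bootstrap confidence interval for a sample mean; the data, resample count and confidence level are arbitrary choices for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.exponential(scale=2.0, size=50)     # toy sample of 50 observations

    # Resample with replacement B times, collecting the statistic of interest
    B = 10_000
    boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                           for _ in range(B)])

    # Percentile bootstrap 95% confidence interval for the mean
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean = {data.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
    ```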

  16. Simulation of beta radiator handling procedures in nuclear medicine by means of a movable hand phantom.

    PubMed

    Blunck, Ch; Becker, F; Urban, M

    2011-03-01

    In nuclear medicine therapies, people working with beta radiators such as (90)Y may be exposed to non-negligible partial body doses. For radiation protection, it is important to know the characteristics of the radiation field and possible dose exposures at relevant positions in the working area. Besides extensive measurements, simulations can provide these data. For this purpose, a movable hand phantom for Monte Carlo simulations was developed. Specific beta radiator handling scenarios can be modelled interactively with forward kinematics or automatically with an inverse kinematics procedure. As a first investigation, the dose distribution on a medical doctor's hand injecting a (90)Y solution was measured and simulated with the phantom. Modelling was done with the interactive method based on five consecutive frames from a video recorded during the injection. Owing to the use of only one camera, not every detail of the radiation scenario is visible in the video. In spite of these systematic uncertainties, the measured and simulated dose values are in good agreement.

  17. A Monte Carlo risk assessment model for acrylamide formation in French fries.

    PubMed

    Cummins, Enda; Butler, Francis; Gormley, Ronan; Brunton, Nigel

    2009-10-01

    The objective of this study is to estimate, by developing a quantitative risk assessment model using Monte Carlo simulation techniques, the likely exposure of Irish consumers to the group 2a carcinogen acrylamide from French fries. Various stages in the French-fry-making process were modeled, from initial potato harvest through storage and processing procedures. The model was developed in Microsoft Excel with the @Risk add-on package. The model was run for 10,000 iterations using Latin hypercube sampling. The simulated mean acrylamide level in French fries was calculated to be 317 microg/kg. It was found that females are exposed to lower levels of acrylamide than males (mean exposure of 0.20 microg/kg bw/day and 0.27 microg/kg bw/day, respectively). Although the carcinogenic potency of acrylamide is not well known, the simulated probability of exceeding the average chronic human dietary intake of 1 microg/kg bw/day (as suggested by WHO) was 0.054 and 0.029 for males and females, respectively. A sensitivity analysis highlighted the importance of selecting appropriate cultivars with known low reducing sugar levels for French fry production. Strict control of cooking conditions (correlation coefficients of 0.42 and 0.35 for frying time and temperature, respectively) and blanching procedures (correlation coefficient -0.25) were also found to be important in ensuring minimal acrylamide formation.
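
    The record does not give the fitted input distributions, so the sketch below only illustrates the general structure of such an exposure model, with invented distributions and plain random sampling standing in for the Latin hypercube scheme used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000                                 # iterations, as in the study

    # Invented placeholder input distributions (the paper's fitted ones are not given)
    conc_ug_per_kg = rng.lognormal(mean=np.log(300.0), sigma=0.6, size=n)  # in fries
    intake_g_day   = rng.gamma(shape=2.0, scale=25.0, size=n)              # fries eaten
    bw_kg          = rng.normal(loc=75.0, scale=10.0, size=n)              # body weight

    # Exposure in ug/kg bw/day; the division by 1000 converts intake from g to kg
    exposure = conc_ug_per_kg * intake_g_day / 1000.0 / bw_kg
    print("mean exposure:", exposure.mean())
    print("P(exposure > 1 ug/kg bw/day):", (exposure > 1.0).mean())
    ```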

  18. The development, validation and application of a multi-detector CT (MDCT) scanner model for assessing organ doses to the pregnant patient and the fetus using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Gu, J.; Bednarz, B.; Caracappa, P. F.; Xu, X. G.

    2009-05-01

    The latest multiple-detector technologies have further increased the popularity of x-ray CT as a diagnostic imaging modality. There is a continuing need to assess the potential radiation risk associated with such rapidly evolving multi-detector CT (MDCT) modalities and scanning protocols. This need can be met by the use of CT source models that are integrated with patient computational phantoms for organ dose calculations. To this end, this work developed and validated an MDCT scanner model using the Monte Carlo method and integrated pregnant patient phantoms into it for assessment of the dose to the fetus as well as doses to the organs and tissues of the pregnant patient. A Monte Carlo code, MCNPX, was used to simulate the x-ray source including the energy spectrum, filter and scan trajectory. Detailed CT scanner components were specified using an iterative trial-and-error procedure for a GE LightSpeed CT scanner. The scanner model was validated by comparing simulated results against measured CTDI values and dose profiles reported in the literature. The source movement along the helical trajectory was simulated using pitches of 0.9375 and 1.375. The validated scanner model was then integrated with phantoms of a pregnant patient in three different gestational periods to calculate organ doses. It was found that the dose to the fetus of the 3 month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scan, respectively. For the chest scan of the 6 month and 9 month patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. The paper also discusses how these fetal dose values can be used to evaluate imaging procedures and to assess risk using recommendations of the report from AAPM Task Group 36. This work demonstrates the feasibility of modeling and validating an MDCT scanner by the Monte Carlo method, as well as of assessing fetal and organ doses by combining the MDCT scanner model and the pregnant patient phantom.

  19. Simulation of radiation damping in rings, using stepwise ray-tracing methods

    DOE PAGES

    Meot, F.

    2015-06-26

    The ray-tracing code Zgoubi computes particle trajectories in arbitrary magnetic and/or electric field maps or analytical field models. It includes a built-in fitting procedure, spin tracking, and many Monte Carlo processes. The accuracy of the integration method makes it an efficient tool for multi-turn tracking in periodic machines. Energy loss by synchrotron radiation, based on Monte Carlo techniques, was introduced in Zgoubi in the early 2000s for studies regarding the linear collider beam delivery system. However, only recently has this Monte Carlo tool been used for systematic beam dynamics and spin diffusion studies in rings, including the eRHIC electron-ion collider project at the Brookhaven National Laboratory. Some beam dynamics aspects of this recent use of Zgoubi capabilities, including considerations of accuracy as well as further benchmarking in the presence of synchrotron radiation in rings, are reported here.

  20. The difference between LSMC and replicating portfolio in insurance liability modeling.

    PubMed

    Pelsser, Antoon; Schweizer, Janina

    2016-01-01

    Solvency II requires insurers to calculate the 1-year value at risk of their balance sheet. This involves the valuation of the balance sheet in 1 year's time. As closed-form solutions for the value of insurance liabilities are generally not available, insurers turn to estimation procedures. While pure Monte Carlo simulation set-ups are theoretically sound, they are often infeasible in practice. Therefore, approximation methods are exploited. Among these, least squares Monte Carlo (LSMC) and portfolio replication are prominent and widely applied in practice. In this paper, we show that, while both are variants of regression-based Monte Carlo methods, they differ in one significant aspect. While the replicating portfolio approach only contains an approximation error, which converges to zero in the limit, in LSMC a projection error is additionally present, which cannot be eliminated. It is revealed that the replicating portfolio technique enjoys numerous advantages and is therefore an attractive model choice.
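
    To make the projection-error point concrete, here is a minimal sketch of the least-squares regression step at the core of LSMC, with invented dynamics: noisy inner-simulation values are projected onto a finite polynomial basis, and whatever part of the true value function lies outside the span of that basis remains as an irreducible projection error.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 5_000                                  # outer scenarios

    # One risk factor X at the 1-year horizon; one noisy inner valuation Y per path
    x = rng.normal(size=n)
    true_value = np.exp(-0.5 * x)              # unknown liability value (invented)
    y = true_value + rng.normal(scale=0.5, size=n)

    # LSMC step: least-squares projection of Y onto a finite polynomial basis of X
    basis = np.vander(x, N=4, increasing=True)         # 1, x, x^2, x^3
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    fitted = basis @ coef

    # The part of the true value function outside the basis span cannot be recovered
    # by more simulation -- this residual is the projection error discussed above
    print("max |fit - true| on sample:", np.abs(fitted - true_value).max())
    ```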

  1. Transient thermal modeling of the nonscanning ERBE detector

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.

    1983-01-01

    A numerical model to predict the transient thermal response of the ERBE nonscanning wide field of view total radiometer channel was developed. The model, which uses Monte Carlo techniques to characterize the radiative component of heat transfer, is described and a listing of the computer program is provided. Application of the model to simulate the actual blackbody calibration procedure is discussed. The use of the model to establish a real time flight data interpretation strategy is recommended. Modification of the model to include a simulated Earth radiation source field and a filter dome is indicated.

  2. Monte Carlo simulation of the radiant field produced by a multiple-lamp quartz heating system

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    1991-01-01

    A method is developed for predicting the radiant heat flux distribution produced by a reflected bank of tungsten-filament tubular-quartz radiant heaters. The method is correlated with experimental results from two cases, one consisting of a single lamp and a flat reflector and the other consisting of a single lamp and a parabolic reflector. The simulation methodology, computer implementation, and experimental procedures are discussed. Analytical refinements necessary for comparison with experiment are discussed and applied to a multilamp, common reflector heating system.

  3. Calculation of Electronic Structure and Field Induced Magnetic Collapse in Ferroic Materials

    NASA Astrophysics Data System (ADS)

    Entel, Peter; Arróyave, Raymundo; Singh, Navdeep; Sokolovskiy, Vladimir V.; Buchelnikov, Vasiliy D.

    We have performed ab initio electronic structure calculations and Monte Carlo simulations of FeRh, Mn3GaC and Heusler intermetallic alloys such as Ni-Co-Cr-Mn-(Ga, In, Sn), which are of interest for solid-state refrigeration and energy systems, an emerging technology involving such solid-solid systems. The calculations reveal that the magnetic phase diagrams of these alloys, which show the magnetic collapse and allow predictions of the related magnetocaloric effect (MCE) they exhibit at finite temperatures, can be obtained by ab initio and Monte Carlo computations in qualitatively good agreement with experimental data. This is a one-step procedure from theory to alloy design of ferroic functional devices.

  4. Exploring theory space with Monte Carlo reweighting

    DOE PAGES

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  5. The Effects of Type I Error Rate and Power of the ANCOVA F-Test and Selected Alternatives under Non-Normality and Variance Heterogeneity.

    ERIC Educational Resources Information Center

    Rheinheimer, David C.; Penfield, Douglas A.

    The performance of analysis of covariance (ANCOVA) and six selected competitors was examined under varying experimental conditions through Monte Carlo simulations. The six alternatives were: (1) Quade's procedure (D. Quade, 1967); (2) Puri and Sen's solution (M. Puri and P. Sen, 1969); (3) Burnett and Barr's rank difference scores (T. Burnett and…

  6. LHCf: A LHC detector for astroparticle physics

    DOE PAGES

    D'Alessandro, R.; Adriani, O.; Bonechi, L.; ...

    2007-03-01

    A number of extremely high energy cosmic ray events have been observed by various collaborations. The existence of such events above the Greisen-Zatsepin-Kuzmin (GZK) cut-off has to be explained by a top-down scenario involving exotic physics. Yet the results reported depend heavily on Monte Carlo procedures. The LHCf experiment will provide important data to calibrate the codes used in air shower simulations.

  7. The Case for Use of Simple Difference Scores to Test the Significance of Differences in Mean Rates of Change in Controlled Repeated Measurements Designs

    ERIC Educational Resources Information Center

    Overall, John E.; Tonidandel, Scott

    2010-01-01

    A previous Monte Carlo study examined the relative powers of several simple and more complex procedures for testing the significance of difference in mean rates of change in a controlled, longitudinal, treatment evaluation study. Results revealed that the relative powers depended on the correlation structure of the simulated repeated measurements.…

  8. Monte Carlo investigation of backscatter factors for skin dose determination in interventional neuroradiology procedures

    NASA Astrophysics Data System (ADS)

    Omar, Artur; Benmakhlouf, Hamza; Marteinsdottir, Maria; Bujila, Robert; Nowik, Patrik; Andreo, Pedro

    2014-03-01

    Complex interventional and diagnostic x-ray angiographic (XA) procedures may yield patient skin doses exceeding the threshold for radiation-induced skin injuries. Skin dose is conventionally determined by converting the incident air kerma free-in-air into entrance surface air kerma, a process that requires the use of backscatter factors. Subsequently, the entrance surface air kerma is converted into skin kerma using mass energy-absorption coefficient ratios tissue-to-air, which for the photon energies used in XA is identical to the skin dose. The purpose of this work was to investigate how cranial bone affects backscatter factors for the dosimetry of interventional neuroradiology procedures. The PENELOPE Monte Carlo system was used to calculate backscatter factors at the entrance surface of a spherical and a cubic water phantom that include a cranial bone layer. The simulations were performed for different clinical x-ray spectra, field sizes, and thicknesses of the bone layer. The results show a reduction of up to 15% when a cranial bone layer is included in the simulations, compared with conventional backscatter factors calculated for a homogeneous water phantom. The reduction increases for thicker bone layers, softer incident beam qualities, and larger field sizes, indicating that, due to the increased photoelectric cross-section of cranial bone compared to water, the bone layer acts primarily as an absorber of low-energy photons. For neurointerventional radiology procedures, backscatter factors calculated at the entrance surface of a water phantom containing a cranial bone layer increase the accuracy of the skin dose determination.
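
    The conversion chain described above reduces to a one-line calculation. The sketch below uses purely illustrative numbers, with the reported bone-layer effect mimicked by reducing the water-phantom backscatter factor by 15%.

    ```python
    def skin_dose(incident_air_kerma_mgy, backscatter_factor, mu_en_ratio_tissue_air):
        """Entrance skin dose (mGy) via the conversion chain described above:
        entrance-surface air kerma = incident air kerma free-in-air * B, then
        skin dose = entrance-surface air kerma * (mu_en/rho) tissue-to-air ratio."""
        return incident_air_kerma_mgy * backscatter_factor * mu_en_ratio_tissue_air

    # Illustrative numbers only (not from the paper)
    print(skin_dose(10.0, 1.35, 1.06))          # homogeneous water-phantom B
    print(skin_dose(10.0, 1.35 * 0.85, 1.06))   # B reduced 15% by a cranial bone layer
    ```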

  9. Analysis of Naval Ammunition Stock Positioning

    DTIC Science & Technology

    2015-12-01

    A Monte Carlo simulation model was developed to simulate the expected cost and delivery performance of repositioning naval ammunition stockpiles at coastal Navy facilities, with the simulation determining assigned probabilities for site-to-site locations. Keywords: supply chain management, Monte Carlo simulation, risk, delivery performance, stock positioning.

  10. All-Particle Multiscale Computation of Hypersonic Rarefied Flow

    NASA Astrophysics Data System (ADS)

    Jun, E.; Burt, J. M.; Boyd, I. D.

    2011-05-01

    This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh that leads to a significant reduction in computation time.

  11. Predicting a future lifetime through Box-Cox transformation.

    PubMed

    Yang, Z

    1999-09-01

    In predicting a future lifetime based on a sample of past lifetimes, the Box-Cox transformation method provides a simple and unified procedure that is shown in this article to meet or often outperform the corresponding frequentist solution in terms of coverage probability and average length of prediction intervals. Kullback-Leibler information and second-order asymptotic expansion are used to justify the Box-Cox procedure. Extensive Monte Carlo simulations are also performed to evaluate the small sample behavior of the procedure. Certain popular lifetime distributions, such as the Weibull, inverse Gaussian and Birnbaum-Saunders, serve as illustrative examples. One important advantage of the Box-Cox procedure lies in its easy extension to linear model predictions, where exact frequentist solutions are often not available.
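
    A minimal sketch of the prediction recipe using SciPy, with synthetic data: transform the sample to near-normality, form a normal-theory prediction interval on the transformed scale, and back-transform the endpoints. For simplicity the sketch ignores the estimation uncertainty in the transformation parameter, which the article's asymptotic analysis accounts for.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(3)
    lifetimes = rng.weibull(a=1.5, size=30) * 100.0    # synthetic past lifetimes

    # 1) Estimate the Box-Cox transform that makes the sample most nearly normal
    z, lam = stats.boxcox(lifetimes)

    # 2) Normal-theory 95% prediction interval on the transformed scale
    n, m, s = z.size, z.mean(), z.std(ddof=1)
    half = stats.t.ppf(0.975, df=n - 1) * s * np.sqrt(1.0 + 1.0 / n)

    # 3) Back-transform the interval endpoints to the original lifetime scale
    print("95% prediction interval:",
          (inv_boxcox(m - half, lam), inv_boxcox(m + half, lam)))
    ```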

  12. Backscatter factors and mass energy-absorption coefficient ratios for diagnostic radiology dosimetry

    NASA Astrophysics Data System (ADS)

    Benmakhlouf, Hamza; Bouchard, Hugo; Fransson, Annette; Andreo, Pedro

    2011-11-01

    Backscatter factors, B, and mass energy-absorption coefficient ratios, (μen/ρ)w,air, for the determination of the surface dose in diagnostic radiology were calculated using Monte Carlo simulations. The main purpose was to extend the range of available data to qualities used in modern x-ray techniques, particularly for interventional radiology. A comprehensive database for mono-energetic photons between 4 and 150 keV and different field sizes was created for a 15 cm thick water phantom. Backscattered spectra were calculated with the PENELOPE Monte Carlo system, scoring track-length fluence differential in energy with negligible statistical uncertainty; using the Monte Carlo computed spectra, B factors and (μen/ρ)w,air were then calculated numerically for each energy. Weighted averaging procedures were subsequently used to convolve incident clinical spectra with mono-energetic data. The method was benchmarked against full Monte Carlo calculations of incident clinical spectra, obtaining differences within 0.3-0.6%. The technique used enables the calculation of B and (μen/ρ)w,air for any incident spectrum without further time-consuming Monte Carlo simulations. The adequacy of the extended dosimetry data for a broader range of clinical qualities than those currently available, while keeping consistency with existing data, was confirmed through detailed comparisons. Mono-energetic and spectra-averaged values were compared with published data, including those in ICRU Report 74 and IAEA TRS-457, finding average differences of 0.6%. Results are provided in comprehensive tables appropriate for clinical use. Additional qualities can easily be calculated using a designed GUI interface in conjunction with software to generate incident photon spectra.
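
    The convolution of mono-energetic data with an incident spectrum can be written compactly as a weighted average. The sketch below assumes an air-kerma weighting (fluence times an air-kerma-per-fluence conversion); the three-bin numbers are invented placeholders rather than values from the published tables.

    ```python
    import numpy as np

    def spectrum_averaged_B(fluence, kerma_per_fluence, B_mono):
        """Air-kerma-weighted average of mono-energetic backscatter factors.

        fluence           -- incident fluence spectrum, differential in energy
        kerma_per_fluence -- air-kerma-per-fluence conversion in each energy bin
        B_mono            -- pre-computed mono-energetic backscatter factors
        """
        w = fluence * kerma_per_fluence        # air-kerma contribution of each bin
        return np.sum(w * B_mono) / np.sum(w)

    # Toy three-bin spectrum; all numbers invented
    print(spectrum_averaged_B(np.array([1.0, 2.0, 0.5]),
                              np.array([0.60, 0.30, 0.25]),
                              np.array([1.25, 1.40, 1.45])))
    ```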

  13. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.

  14. Probabilistic micromechanics for metal matrix composites

    NASA Astrophysics Data System (ADS)

    Engelstad, S. P.; Reddy, J. N.; Hopkins, Dale A.

    A probabilistic micromechanics-based nonlinear analysis procedure is developed to predict and quantify the variability in the properties of high temperature metal matrix composites. Monte Carlo simulation is used to model the probabilistic distributions of the constituent level properties including fiber, matrix, and interphase properties, volume and void ratios, strengths, fiber misalignment, and nonlinear empirical parameters. The procedure predicts the resultant ply properties and quantifies their statistical scatter. Graphite/copper and silicon carbide/titanium aluminide (SCS-6/Ti-15) unidirectional plies are considered to demonstrate the predictive capabilities. The procedure is believed to have a high potential for use in material characterization and selection to precede and assist in experimental studies of new high temperature metal matrix composites.

  15. Numerical integration of detector response functions via Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
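
    The quoted speed-up comes from replacing a fresh simulation with a fold of a model spectrum through a pre-computed response. The sketch below shows that generic folding step only; the response matrix here is random filler, not the Chi-Nu response, and the paper's uncertainty treatment is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_true, n_meas = 100, 120

    # Response matrix R[i, j]: probability that a particle emitted in true-energy
    # bin j is recorded in measured-energy bin i (random filler standing in for
    # columns that would each be built once from Monte Carlo simulations)
    R = rng.random((n_meas, n_true))
    R /= R.sum(axis=0)                         # normalize each true-energy column

    def fold(true_spectrum, R):
        """Predicted detector spectrum: fold a physics model through the response."""
        return R @ true_spectrum

    model = np.exp(-np.linspace(0.1, 10.0, n_true))   # toy emission spectrum
    predicted = fold(model, R)                 # one matrix-vector product per model,
    print(predicted[:5])                       # instead of a full new MC simulation
    ```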

  16. Numerical integration of detector response functions via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.

    2017-09-01

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  17. Numerical integration of detector response functions via Monte Carlo simulations

    DOE PAGES

    Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.; ...

    2017-06-13

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  18. Standard and goodness-of-fit parameter estimation methods for the three-parameter lognormal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1982-01-01

    A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics for the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.

  19. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given. A listing of the computer program written to implement these techniques is given. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. Results of matrices from the mapping effort of the San Juan National Forest are given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.

  20. Assessing the impact of copy number variants on miRNA genes in autism by Monte Carlo simulation.

    PubMed

    Marrale, Maurizio; Albanese, Nadia Ninfa; Calì, Francesco; Romano, Valentino

    2014-01-01

    Autism Spectrum Disorders (ASDs) are childhood neurodevelopmental disorders with complex genetic origins. Previous studies have investigated the role of de novo Copy Number Variants (CNVs) and microRNAs as important but distinct etiological factors in ASD. We developed a novel computational procedure to assess the potential pathogenic role of microRNA genes overlapping de novo CNVs in ASD patients. Here we show that for chromosomes 1, 2 and 22 the actual number of miRNA loci affected by de novo CNVs in patients was significantly higher than that estimated by Monte Carlo simulation of random CNV events. Out of the 24 miRNA genes over-represented in CNVs from these three chromosomes, only hsa-mir-4436b-1 and hsa-mir-4436b-2 have not been detected in CNVs from non-autistic subjects as reported in the Database of Genomic Variants. Altogether, the results reported in this study represent a first step towards a full understanding of how dysregulated expression of these 24 miRNA genes affects neurodevelopment in autism. We also propose that the procedure used in this study can be effectively applied to CNV/miRNA gene association data in other genomic disorders beyond autism.
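
    A minimal sketch of this style of randomization test, with every number (chromosome length, locus counts, CNV sizes) an invented placeholder: scatter the observed number of CNVs uniformly along a chromosome many times, count the miRNA loci hit in each trial, and compare the observed overlap count against the resulting null distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    CHROM_LEN = 243_000_000                    # chromosome length in bp (placeholder)
    MIRNA_LEN = 100                            # nominal miRNA locus length (placeholder)
    mirna_starts = rng.integers(0, CHROM_LEN, size=800)   # toy miRNA annotation

    def n_mirnas_hit(cnv_starts, cnv_len):
        """Count miRNA loci overlapping at least one CNV (interval-overlap test)."""
        overlap = ((cnv_starts[None, :] < mirna_starts[:, None] + MIRNA_LEN) &
                   (mirna_starts[:, None] < cnv_starts[None, :] + cnv_len))
        return int(overlap.any(axis=1).sum())

    # Null distribution: drop the observed number of CNVs at random positions
    n_cnvs, cnv_len, trials = 40, 500_000, 1_000
    null = np.array([n_mirnas_hit(rng.integers(0, CHROM_LEN, size=n_cnvs), cnv_len)
                     for _ in range(trials)])

    observed_hits = 12                         # placeholder observed overlap count
    print("empirical p-value:", (null >= observed_hits).mean())
    ```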

  1. Online gamma-camera imaging of 103Pd seeds (OGIPS) for permanent breast seed implantation

    NASA Astrophysics Data System (ADS)

    Ravi, Ananth; Caldwell, Curtis B.; Keller, Brian M.; Reznik, Alla; Pignol, Jean-Philippe

    2007-09-01

    Permanent brachytherapy seed implantation is being investigated as a mode of accelerated partial breast irradiation for early stage breast cancer patients. Currently, the seeds are poorly visualized during the procedure, making it difficult to perform a real-time correction of the implantation if required. The objective was to determine whether a customized gamma-camera can accurately localize the seeds during implantation. Monte Carlo simulations of a CZT based gamma-camera were used to assess whether images of suitable quality could be derived by detecting the 21 keV photons emitted from 74 MBq 103Pd brachytherapy seeds. A hexagonal parallel hole collimator with a hole length of 38 mm, hole diameter of 1.2 mm and 0.2 mm septa was modeled. The design of the gamma-camera was evaluated on a realistic model of the breast and three layers of the seed distribution (55 seeds) based on a pre-implantation CT treatment plan. The Monte Carlo simulations showed that the gamma-camera was able to localize the seeds with a maximum error of 2.0 mm, using only two views and 20 s of imaging. A gamma-camera can potentially be used as an intra-procedural image guidance system for quality assurance for permanent breast seed implantation.

  2. Pediatric personalized CT-dosimetry Monte Carlo simulations, using computational phantoms

    NASA Astrophysics Data System (ADS)

    Papadimitroulas, P.; Kagadis, G. C.; Ploussi, A.; Kordolaimi, S.; Papamichail, D.; Karavasilis, E.; Syrgiamiotis, V.; Loudos, G.

    2015-09-01

    For the last 40 years, Monte Carlo (MC) simulations have served as a “gold standard” tool for a wide range of applications in the field of medical physics and tend to be essential in daily clinical practice. Regarding diagnostic imaging applications, such as computed tomography (CT), the assessment of deposited energy is of high interest, so as to better analyze the risks and the benefits of the procedure. In recent years a large effort has been made towards personalized dosimetry, especially in pediatric applications. In the present study the GATE toolkit was used and computational pediatric phantoms were modeled for the assessment of CT examination dosimetry. The pediatric models used come from the XCAT and IT'IS series. The X-ray spectrum of a Brightspeed CT scanner was simulated and validated with experimental data. Specifically, a DCT-10 ionization chamber was irradiated twice using 120 kVp with 100 mAs and 200 mAs, for 1 s in 1 central axial slice (thickness = 10 mm). The absorbed dose was measured in air, resulting in differences lower than 4% between the experimental and simulated data. The simulations were acquired using ~10^10 primaries in order to achieve low statistical uncertainties. Dose maps were also saved for quantification of the absorbed dose in several critical organs of children during CT acquisition.

  3. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry.

    PubMed

    Bostani, Maryam; Mueller, Jonathon W; McMillan, Kyle; Cody, Dianna D; Cagnon, Chris H; DeMarco, John J; McNitt-Gray, Michael F

    2015-02-01

    The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. The calculated mean percent difference between TLD measurements and Monte Carlo simulations was -4.9% with standard deviation of 8.7% and a range of -22.7% to 5.7%. The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  4. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  5. Uncertainty in Measurement: A Review of Monte Carlo Simulation Using Microsoft Excel for the Calculation of Uncertainties Through Functional Relationships, Including Uncertainties in Empirically Derived Constants

    PubMed Central

    Farrance, Ian; Frenkel, Robert

    2014-01-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835

  6. Uncertainty in measurement: a review of monte carlo simulation using microsoft excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants.

    PubMed

    Farrance, Ian; Frenkel, Robert

    2014-02-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand.
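
    The spreadsheet recipe described above is equally easy to mirror in a few lines of Python. The functional relationship y = a·x² and all means and standard uncertainties below are invented for illustration; the empirically derived 'constant' a carries its own uncertainty, exactly as discussed.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N = 100_000                               # number of Monte Carlo trials

    # Input quantities with standard uncertainties (invented values), assumed
    # normal as is typical when the estimates come from IQC data
    a = rng.normal(1.98, 0.02, N)             # empirically derived 'constant'
    x = rng.normal(5.30, 0.11, N)             # measured input quantity

    y = a * x ** 2                            # the functional relationship (illustrative)

    print(f"y = {y.mean():.2f}, standard uncertainty u(y) = {y.std(ddof=1):.2f}")
    print("95% coverage interval:", np.percentile(y, [2.5, 97.5]).round(2))
    ```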

  7. Analytical Assessment of Simultaneous Parallel Approach Feasibility from Total System Error

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.

    2014-01-01

    In a simultaneous paired approach to closely-spaced parallel runways, a pair of aircraft flies in close proximity on parallel approach paths. The aircraft pair must maintain a longitudinal separation within a range that avoids wake encounters and, if one of the aircraft blunders, avoids collision. Wake avoidance defines the rear gate of the longitudinal separation. The lead aircraft generates a wake vortex that, with the aid of crosswinds, can travel laterally onto the path of the trail aircraft. As runway separation decreases, the wake has less distance to traverse to reach the path of the trail aircraft. The total system error of each aircraft further reduces this distance. The total system error is often modeled as a probability distribution function. Therefore, Monte-Carlo simulations are a favored tool for assessing a "safe" rear-gate. However, safety for paired approaches typically requires that a catastrophic wake encounter be a rare one-in-a-billion event during normal operation. Using a Monte-Carlo simulation to assert this event rarity with confidence requires a massive number of runs. Such large runs do not lend themselves to rapid turn-around during the early stages of investigation when the goal is to eliminate the infeasible regions of the solution space and to perform trades among the independent variables in the operational concept. One can employ statistical analysis using simplified models more efficiently to narrow the solution space and identify promising trades for more in-depth investigation using Monte-Carlo simulations. These simple, analytical models not only have to address the uncertainty of the total system error but also the uncertainty in navigation sources used to alert an abort of the procedure. This paper presents a method for integrating total system error, procedure abort rates, avionics failures, and surveillance errors into a statistical analysis that identifies the likely feasible runway separations for simultaneous paired approaches.

  8. [Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].

    PubMed

    Furuta, Takuya

    2017-01-01

    Gel dosimeters are a three-dimensional imaging tool for dose distributions induced by radiation. They can be used to check the accuracy of Monte Carlo simulation in particle therapy. An application is reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated by a carbon beam. The recorded dose distribution in the gel dosimeter reflected the inhomogeneity of the biological sample. Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image. The accuracy of the particle transport by Monte Carlo simulation was checked by comparing the dose distribution in the gel dosimeter between simulation and experiment.

  9. Size and habit evolution of PETN crystals - a lattice Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zepeda-Ruiz, L A; Maiti, A; Gee, R

    2006-02-28

    Starting from an accurate inter-atomic potential we develop a simple scheme of generating an "on-lattice" molecular potential of short range, which is then incorporated into a lattice Monte Carlo code for simulating size and shape evolution of nanocrystallites. As a specific example, we test such a procedure on the morphological evolution of a molecular crystal of interest to us, e.g., Pentaerythritol Tetranitrate, or PETN, and obtain realistic facetted structures in excellent agreement with experimental morphologies. We investigate several interesting effects including the evolution of the initial shape of a "seed" to an equilibrium configuration, and the variation of growth morphology as a function of the rate of particle addition relative to diffusion.

  10. Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2013-04-01

    We present preliminary results of a population statistics study of normal pulsars (NPs) from the Galactic disk using Markov Chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement. The advantage of this method is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log of the maximum likelihood obtained from comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics. The advantage of this method is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.

  11. Direct simulation of high-vorticity gas flows

    NASA Technical Reports Server (NTRS)

    Bird, G. A.

    1987-01-01

    The computational limitations associated with the molecular dynamics (MD) method and the direct simulation Monte Carlo (DSMC) method are reviewed in the context of the computation of dilute gas flows with high vorticity. It is concluded that the MD method is generally limited to the dense gas case in which the molecular diameter is one-tenth or more of the mean free path. It is shown that the cell size in DSMC calculations should be small in comparison with the mean free path, and that this may be facilitated by a new subcell procedure for the selection of collision partners.

  12. Survival estimation and the effects of dependency among animals

    USGS Publications Warehouse

    Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.

    1995-01-01

    Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
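
    A minimal sketch of this kind of simulation, assuming (as one possible mechanism, not necessarily the authors') that dependency within pairs is induced through a Gaussian copula on a shared latent variable. The inflation of the empirical variance of the survival estimate over the binomial variance under independence mirrors the effect reported above.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(9)
    n_pairs, p, rho, reps = 168, 0.85, 0.6, 5_000   # placeholder values

    thresh = norm.ppf(1 - p)                  # latent cutoff giving survival prob p
    cov = [[1.0, rho], [rho, 1.0]]            # dependency within each pair

    est = np.empty(reps)
    for r in range(reps):
        z = rng.multivariate_normal([0.0, 0.0], cov, size=n_pairs)
        survived = z > thresh                 # correlated Bernoulli fates within pairs
        est[r] = survived.mean()              # naive survival-rate estimate

    emp_var = est.var(ddof=1)
    theo_var = p * (1 - p) / (2 * n_pairs)    # variance if all 2*n_pairs were independent
    print("variance inflation factor:", emp_var / theo_var)
    ```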

  13. Monte Carlo simulations in X-ray imaging

    NASA Astrophysics Data System (ADS)

    Giersch, Jürgen; Durst, Jürgen

    2008-06-01

    Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.

  14. Derivation of flood frequency curves in poorly gauged Mediterranean catchments using a simple stochastic hydrological rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Aronica, G. T.; Candela, A.

    2007-12-01

    In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one, with a limited number of parameters, and requires practically no calibration, resulting in a robust tool for catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two component extreme value (TCEV) distribution whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing the classical iso-frequency assumption between rainfall and peak flow to be relaxed. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained from model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with AMC (antecedent moisture conditions). The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
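
    A minimal sketch of the Monte Carlo core of such a procedure, with a Gumbel placeholder for the regional TCEV rainfall model, an invented stochastic curve-number distribution for the prior-to-storm conditions, and the IUH routing to peak flow omitted (the sketch stops at runoff depth):

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    n = 5_000                                  # simulated flood events, as in the paper

    # Placeholder storm-depth model; the paper uses a regional TCEV distribution
    P = rng.gumbel(loc=60.0, scale=20.0, size=n)       # storm rainfall depth (mm)

    # Prior-to-storm conditions: curve number varied stochastically (invented spread)
    CN = np.clip(rng.normal(loc=75.0, scale=8.0, size=n), 40.0, 98.0)
    S = 25400.0 / CN - 254.0                   # potential retention (mm)

    # SCS-CN runoff equation with initial abstraction Ia = 0.2 S
    Q = np.where(P > 0.2 * S, (P - 0.2 * S) ** 2 / (P + 0.8 * S), 0.0)

    # Empirical flood frequency curve read off the ranked simulated runoff depths
    T = np.array([2, 5, 10, 50, 100])          # return periods (years)
    print(dict(zip(T.tolist(), np.quantile(Q, 1.0 - 1.0 / T).round(1))))
    ```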

  15. Simulations of singlet exciton diffusion in organic semiconductors: a review

    DOE PAGES

    Bjorgaard, Josiah A.; Kose, Muhammet Erkan

    2014-12-22

    Our review describes the various aspects of simulation strategies for exciton diffusion in condensed phase thin films of organic semiconductors. Several methods for calculating energy transfer rate constants are discussed along with procedures for how to account for energetic disorder. Exciton diffusion can be modelled by using kinetic Monte-Carlo methods or master equations. Recent literature on simulation efforts for estimating exciton diffusion lengths of various conjugated polymers and small molecules is introduced. Moreover, these studies are discussed in the context of the effects of morphology on exciton diffusion and the necessity of accurate treatment of disorder for comparison of simulation results with those of experiment.

  16. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

    With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and fewer computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.

  17. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

    2008-02-01

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.

  18. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.

    Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  19. Validation of the Monte Carlo simulator GATE for indium-111 imaging.

    PubMed

    Assié, K; Gardin, I; Véra, P; Buvat, I

    2005-07-07

    Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.

  20. A new cubic phantom for PET/CT dosimetry: Experimental and Monte Carlo characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belinato, Walmir; Silva, Rogerio M.V.; Souza, Divanizia N.

    In recent years, positron emission tomography (PET) associated with multidetector computed tomography (MDCT) has become a widely disseminated diagnostic technique for evaluating various malignant tumors and other diseases. However, during PET/CT examinations, the doses of ionizing radiation experienced by the internal organs of patients may be substantial. To study the doses involved in PET/CT procedures, a new cubic phantom of overlapping acrylic plates was developed and characterized. This phantom has a deposit for the placement of the fluorine-18 fluoro-2-deoxy-D-glucose (18F-FDG) solution. There are also small holes near the faces for the insertion of optically stimulated luminescence dosimeters (OSLD). The holes for OSLD are positioned at different distances from the 18F-FDG deposit. The experimental results were obtained in two PET/CT devices operating with different parameters. Differences in the absorbed doses were observed in OSLD measurements due to the non-orthogonal positioning of the detectors inside the phantom. This phantom was also evaluated using Monte Carlo simulations with the MCNPX code. The phantom and the geometrical characteristics of the equipment were carefully modeled in the MCNPX code, in order to develop a new methodology for the comparison of experimental and simulated results, as well as to allow the characterization of PET/CT equipment in Monte Carlo simulations. All results showed good agreement, proving that this new phantom may be applied in these experiments. (authors)

  1. Development and computer implementation of design/analysis techniques for multilayered composite structures. Probabilistic fiber composite micromechanics. M.S. Thesis, Mar. 1987 Final Report, 1 Sep. 1984 - 1 Oct. 1990

    NASA Technical Reports Server (NTRS)

    Stock, Thomas A.

    1995-01-01

    Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. The variables in which uncertainties are accounted for include constituent and void volume ratios, constituent elastic properties and strengths, and fiber misalignment. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material property variations induced by random changes expected at the material micro level. Regression results are presented to show the relative correlation between predictor and response variables in the study. These computational procedures make possible a formal description of anticipated random processes at the intraply level, and the related effects of these on composite properties.

  2. A Deterministic Computational Procedure for Space Environment Electron Transport

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.

    2010-01-01

    A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing the numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made, indicating that accuracy is not compromised in exchange for computational speed.

  3. A SAS(®) macro implementation of a multiple comparison post hoc test for a Kruskal-Wallis analysis.

    PubMed

    Elliott, Alan C; Hynan, Linda S

    2011-04-01

    The Kruskal-Wallis (KW) nonparametric analysis of variance is often used instead of a standard one-way ANOVA when data are from a suspected non-normal population. The KW omnibus procedure tests for some differences between groups, but provides no specific post hoc pairwise comparisons. This paper provides a SAS(®) macro implementation of a multiple comparison test based on significant Kruskal-Wallis results from the SAS NPAR1WAY procedure. The implementation is designed for up to 20 groups at a user-specified alpha significance level. A Monte Carlo simulation compared this nonparametric procedure to commonly used parametric multiple comparison tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
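
    The SAS macro itself is not reproduced in the abstract; as an illustration of the kind of rank-based post hoc comparison involved, here is a minimal Python sketch of Dunn's test (one common Kruskal-Wallis follow-up; tie corrections omitted, Bonferroni-adjusted alpha):

        import itertools
        import numpy as np
        from scipy import stats

        def dunn_posthoc(groups, alpha=0.05):
            # Pairwise z tests on mean ranks after a significant KW result.
            data = np.concatenate(groups)
            ranks = stats.rankdata(data)
            n = len(data)
            bounds = np.cumsum([0] + [len(g) for g in groups])
            mean_rank = [ranks[bounds[i]:bounds[i + 1]].mean()
                         for i in range(len(groups))]
            n_pairs = len(groups) * (len(groups) - 1) // 2
            for i, j in itertools.combinations(range(len(groups)), 2):
                se = np.sqrt(n * (n + 1) / 12.0
                             * (1.0 / len(groups[i]) + 1.0 / len(groups[j])))
                z = abs(mean_rank[i] - mean_rank[j]) / se
                p = 2.0 * stats.norm.sf(z)  # two-sided p-value
                print(f"groups {i}/{j}: z={z:.2f}, "
                      f"significant={p < alpha / n_pairs}")

        a, b, c = (np.random.gamma(k, 2.0, 30) for k in (2, 3, 5))
        if stats.kruskal(a, b, c).pvalue < 0.05:  # omnibus test first
            dunn_posthoc([a, b, c])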

  4. Simulation of unsteady flows by the DSMC macroscopic chemistry method

    NASA Astrophysics Data System (ADS)

    Goldsworthy, Mark; Macrossan, Michael; Abdel-jawad, Madhat

    2009-03-01

    In the Direct Simulation Monte Carlo (DSMC) method, a combination of statistical and deterministic procedures applied to a finite number of 'simulator' particles is used to model rarefied gas-kinetic processes. In the macroscopic chemistry method (MCM) for DSMC, chemical reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell, not just those selected for collisions, is used to determine a reaction rate coefficient for that cell. Unlike collision-based methods, MCM can be used with any viscosity or non-reacting collision models and any non-reacting energy exchange models. It can be used to implement any reaction rate formulation, whether from experimental or theoretical studies. MCM has previously been validated for steady-flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation. Close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature, density and species mole fractions, as well as for the accumulated number of net reactions per cell.

  5. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    PubMed

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, under the initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard that initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in the DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution for understanding the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
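
    The idea of propagating both input-variable and response-measurement uncertainty through a model fit can be sketched as follows (the two-factor linear model, noise levels and names are illustrative assumptions, not the study's actual nasal spray model):

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical two-factor design and 'true' model y = 1 + 2*x1 - 0.5*x2.
        x = rng.uniform(-1.0, 1.0, (12, 2))
        y = 1.0 + 2.0 * x[:, 0] - 0.5 * x[:, 1]

        def coefficient_uncertainty(n_trials=5000, sx=0.02, sy=0.05):
            # Perturb the input settings (sx) and the measured responses (sy),
            # refit the model each time, and collect the coefficient spread.
            betas = np.empty((n_trials, 3))
            for k in range(n_trials):
                xk = x + rng.normal(0.0, sx, x.shape)
                yk = y + rng.normal(0.0, sy, y.shape)
                design = np.column_stack([np.ones(len(xk)), xk])
                betas[k] = np.linalg.lstsq(design, yk, rcond=None)[0]
            return betas

        samples = coefficient_uncertainty()
        print("Monte Carlo coefficient standard deviations:", samples.std(axis=0))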

  6. Autocalibration of a one-dimensional hydrodynamic-ecological model (DYRESM 4.0-CAEDYM 3.1) using a Monte Carlo approach: simulations of hypoxic events in a polymictic lake

    NASA Astrophysics Data System (ADS)

    Luo, Liancong; Hamilton, David; Lan, Jia; McBride, Chris; Trolle, Dennis

    2018-03-01

    Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook autocalibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimizing the root-mean-square error (RMSE) and maximizing the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10 000 simulation iterations. The "optimal" temperature calibration produced an RMSE of 0.54 °C, Nr value of 0.99, and r value of 0.98 through the whole water column based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr value was 0.75, and the r value was 0.87. The autocalibrated model was further tested on an independent data set by simulating bottom-water hypoxia events from 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, the Nr value 0.62, and the r value 0.81, based on the available data set of 738 days. The autocalibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimization than the traditional manual calibration which has been standard practice for similar complex water quality models.
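
    For reference, the three objective functions used in the calibration have standard definitions, sketched below on synthetic data:

        import numpy as np

        def fit_statistics(obs, sim):
            # Root-mean-square error, Pearson correlation r, and
            # Nash-Sutcliffe efficiency Nr between observations and simulations.
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            rmse = np.sqrt(np.mean((sim - obs) ** 2))
            r = np.corrcoef(obs, sim)[0, 1]
            nr = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
            return rmse, r, nr

        obs = np.array([12.1, 14.3, 18.9, 21.4, 19.8, 15.2])  # synthetic observations
        sim = np.array([12.6, 13.9, 18.1, 22.0, 19.1, 15.9])  # synthetic model output
        print("RMSE=%.2f  r=%.3f  Nr=%.3f" % fit_statistics(obs, sim))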

  7. Auto-calibration of a one-dimensional hydrodynamic-ecological model using a Monte Carlo approach: simulation of hypoxic events in a polymictic lake

    NASA Astrophysics Data System (ADS)

    Luo, L.

    2011-12-01

    Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook auto-calibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimising the root-mean-square error (RMSE) and maximising the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10,000 simulation iterations. The 'optimal' temperature calibration produced an RMSE of 0.54 °C, Nr-value of 0.99 and r-value of 0.98 through the whole water column based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr-value was 0.75 and the r-value was 0.87. The autocalibrated model was further tested on an independent data set by simulating bottom-water hypoxia events for the period 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, the Nr-value 0.62 and the r-value 0.81, based on the available data set of 738 days. The auto-calibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimisation than the traditional manual calibration which has been standard practice for similar complex water quality models.

  8. Theoretical modeling of a portable x-ray tube based KXRF system to measure lead in bone

    PubMed Central

    Specht, Aaron J; Weisskopf, Marc G; Nie, Linda Huiling

    2017-01-01

    Objective: K-shell x-ray fluorescence (KXRF) techniques have been used to identify health effects resulting from exposure to metals for decades, but the equipment is bulky and requires significant maintenance and licensing procedures. A portable x-ray fluorescence (XRF) device was developed to overcome these disadvantages, but introduced a measurement dependency on soft tissue thickness. With recent advances in detector technology, an XRF device utilizing the advantages of both systems should be feasible. Approach: In this study, we used Monte Carlo simulations to test the feasibility of an XRF device with a high-energy x-ray tube and a detector operable at room temperature. Main Results: We first validated the use of the Monte Carlo N-particle transport code (MCNP) for x-ray tube simulations, and found good agreement between experimental and simulated results. Then, we optimized the x-ray tube settings and found the detection limit of the high-energy x-ray tube based XRF device for bone lead measurements to be 6.91 μg g−1 bone mineral using a cadmium zinc telluride detector. Significance: In conclusion, this study validated the use of MCNP in simulations of x-ray tube physics and XRF applications, and demonstrated the feasibility of a high-energy x-ray tube based XRF device for metal exposure assessment. PMID:28169835

  9. Theoretical modeling of a portable x-ray tube based KXRF system to measure lead in bone.

    PubMed

    Specht, Aaron J; Weisskopf, Marc G; Nie, Linda Huiling

    2017-03-01

    K-shell x-ray fluorescence (KXRF) techniques have been used to identify health effects resulting from exposure to metals for decades, but the equipment is bulky and requires significant maintenance and licensing procedures. A portable x-ray fluorescence (XRF) device was developed to overcome these disadvantages, but introduced a measurement dependency on soft tissue thickness. With recent advances in detector technology, an XRF device utilizing the advantages of both systems should be feasible. In this study, we used Monte Carlo simulations to test the feasibility of an XRF device with a high-energy x-ray tube and a detector operable at room temperature. We first validated the use of the Monte Carlo N-particle transport code (MCNP) for x-ray tube simulations, and found good agreement between experimental and simulated results. Then, we optimized the x-ray tube settings and found the detection limit of the high-energy x-ray tube based XRF device for bone lead measurements to be 6.91 µg g−1 bone mineral using a cadmium zinc telluride detector. In conclusion, this study validated the use of MCNP in simulations of x-ray tube physics and XRF applications, and demonstrated the feasibility of a high-energy x-ray tube based XRF device for metal exposure assessment.

  10. An operational GLS model for hydrologic regression

    USGS Publications Warehouse

    Tasker, Gary D.; Stedinger, J.R.

    1989-01-01

    Recent Monte Carlo studies have documented the value of generalized least squares (GLS) procedures to estimate empirical relationships between streamflow statistics and physiographic basin characteristics. This paper presents a number of extensions of the GLS method that deal with realities and complexities of regional hydrologic data sets that were not addressed in the simulation studies. These extensions include: (1) a more realistic model of the underlying model errors; (2) smoothed estimates of cross correlation of flows; (3) procedures for including historical flow data; (4) diagnostic statistics describing leverage and influence for GLS regression; and (5) the formulation of a mathematical program for evaluating future gaging activities. © 1989.

  11. Comparison of different hydrological similarity measures to estimate flow quantiles

    NASA Astrophysics Data System (ADS)

    Rianna, M.; Ridolfi, E.; Napolitano, F.

    2017-07-01

    This paper aims to evaluate the influence of hydrological similarity measures on the definition of homogeneous regions. To this end, several attribute sets have been analyzed in the context of the Region of Influence (ROI) procedure. Several combinations of geomorphological, climatological, and geographical characteristics are also used to cluster potentially homogeneous regions. To verify the goodness of the resulting pooled sites, homogeneity tests are carried out. Through a Monte Carlo simulation and a jack-knife procedure, flow quantiles are estimated for the regions that effectively prove homogeneous. The analyses are performed in both the so-called gauged and ungauged scenarios to assess the effect of hydrological similarity measures on flow quantile estimation.

  12. Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
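
    The paper's exact launch scheme is not given in the abstract; one common way to reproduce a Gaussian waist in a photon-transport Monte Carlo is to aim each photon from a Gaussian-weighted point on the lens plane at a Gaussian-weighted point in the focal plane, as in this illustrative sketch (all parameter values and names are assumptions):

        import numpy as np

        rng = np.random.default_rng(1)

        def launch_focused_photon(beam_radius=1.0e-3, waist=2.0e-6,
                                  focal_depth=1.0e-3):
            # Start on the lens plane (z = 0) and aim at a point drawn from
            # the diffraction-limited focal spot at z = focal_depth, so the
            # photon ensemble converges to a Gaussian waist, not a point.
            x0, y0 = rng.normal(0.0, beam_radius / 2.0, 2)  # sigma = w/2 for
            xf, yf = rng.normal(0.0, waist / 2.0, 2)        # a 1/e^2 radius w
            direction = np.array([xf - x0, yf - y0, focal_depth])
            position = np.array([x0, y0, 0.0])
            return position, direction / np.linalg.norm(direction)

        pos, direc = launch_focused_photon()  # one photon launch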

  13. Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas

    NASA Astrophysics Data System (ADS)

    Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.

    In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters, and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties. Possible sources of uncertainty are the model structure, model parameters and input data. Thus, after the model validation (comparison of the simulated flood extent to the observed extent, taken from airborne pictures), the uncertainty of the essential input data set (digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties in the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum DEM resolution required for flood simulation and the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high-resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainty. In particular, the socio-economic information and monetary transfer functions required for a damage risk analysis show high uncertainty. Therefore this study helps to identify the weak points of the flood risk and damage risk assessment procedure.

  14. A baseline-free procedure for transformation models under interval censorship.

    PubMed

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures so far available involve estimating the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.

  15. Application of dynamic Monte Carlo technique in proton beam radiotherapy using Geant4 simulation toolkit

    NASA Astrophysics Data System (ADS)

    Guan, Fada

    The Monte Carlo method has been successfully applied to particle transport problems. Most Monte Carlo simulation tools are static, however, and can only perform simulations of problems with fixed physics and geometry settings, whereas proton therapy is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled in this research. One important application of the Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplification, a mathematical model of a human body is usually used as the target, but only the average dose over the whole organ or tissue can be obtained rather than the accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. The data analysis tool ROOT was used to score the simulation results during a Geant4 simulation and to analyze the data and plot results after simulation. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body after treating a patient with prostate cancer using proton therapy.

  16. SU-F-T-619: Dose Evaluation of Specific Patient Plans Based On Monte Carlo Algorithm for a CyberKnife Stereotactic Radiosurgery System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piao, J; PLA 302 Hospital, Beijing; Xu, S

    2016-06-15

    Purpose: This study uses Monte Carlo methods to simulate the CyberKnife system and intends to develop a third-party tool to evaluate the dose verification of specific patient plans in the TPS. Methods: By simulating the treatment head using the BEAMnrc and DOSXYZnrc software, calculated and measured data were compared to determine the beam parameters. The dose distributions calculated by the Ray-tracing and Monte Carlo algorithms of the TPS (Multiplan Ver. 4.0.2) and by the in-house Monte Carlo simulation method were analyzed for 30 patient plans (10 head, 10 lung and 10 liver cases). A γ analysis with combined 3 mm/3% criteria was introduced to quantitatively evaluate the difference in accuracy between the three algorithms. Results: More than 90% of the global error points were below 2% in the comparison of the PDD and OAR curves after determining the mean energy and FWHM, establishing a relatively ideal Monte Carlo beam model. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the passing rates of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88 ± 9.67% for head, 98.83 ± 1.05% for liver, 98.26 ± 1.87% for lung). The passing rates of the PTV in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93 ± 3.12% and 99.84 ± 0.33%, respectively). However, the difference in DVHs in lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the γ passing rate (51.263 ± 38.964%) was not good. It is feasible to use Monte Carlo simulation for verifying the dose distribution of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can be used as a reference third-party tool, which plays an important role in dose verification of patient plans. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). Thanks for the support from Accuray Corp.
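
    For context, the γ analysis used for the quantitative comparison can be sketched in one dimension as follows (a simplified, globally normalized implementation of the same 3 mm/3% criteria; the profiles are synthetic):

        import numpy as np

        def gamma_pass_rate(dose_ref, dose_eval, coords, dta=3.0, dd=0.03):
            # 1D gamma analysis: for every reference point, take the minimum
            # gamma over all evaluated points; pass if gamma <= 1.
            d_max = dose_ref.max()  # global normalization dose
            passed = 0
            for xr, dr in zip(coords, dose_ref):
                dist2 = ((coords - xr) / dta) ** 2
                dose2 = ((dose_eval - dr) / (dd * d_max)) ** 2
                if np.sqrt((dist2 + dose2).min()) <= 1.0:
                    passed += 1
            return 100.0 * passed / len(dose_ref)

        x = np.linspace(-10, 10, 201)            # positions in mm
        ref = np.exp(-x**2 / 50.0)               # synthetic reference profile
        ev = np.exp(-(x - 0.5)**2 / 50.0)        # slightly shifted evaluation
        print(f"gamma passing rate: {gamma_pass_rate(ref, ev, x):.1f}%")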

  17. Monte Carlo Modeling of the Initial Radiation Emitted by a Nuclear Device in the National Capital Region

    DTIC Science & Technology

    2013-07-01

    Data was derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle). MCNP is a general-purpose code designed to simulate neutron and photon transport.

  18. Slice sampling technique in Bayesian extreme of gold price modelling

    NASA Astrophysics Data System (ADS)

    Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham

    2013-09-01

    In this paper, a simulation study of Bayesian extreme values using Markov chain Monte Carlo via the slice sampling algorithm is implemented. We compared the accuracy of slice sampling with other methods for a Gumbel model. This study revealed that the slice sampling algorithm offers more accurate and closer estimates, with lower RMSE, than the other methods. Finally, we successfully employed this procedure to estimate the parameters of Malaysian extreme gold prices from 2000 to 2011.
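
    As an illustration of the algorithm under comparison, here is a minimal univariate slice sampler with stepping out (in the style of Neal, 2003), applied to a standard Gumbel log-density; the step width and iteration caps are arbitrary choices:

        import math
        import random

        def slice_sample(log_f, x0, n_samples, w=1.0, max_steps=50):
            # Univariate slice sampler: log_f is the unnormalized log-density.
            samples, x = [], x0
            for _ in range(n_samples):
                # Vertical level; 1 - random() is in (0, 1], avoiding log(0).
                log_y = log_f(x) + math.log(1.0 - random.random())
                left = x - w * random.random()   # randomly placed window
                right = left + w
                for _ in range(max_steps):       # step out until outside slice
                    if log_f(left) < log_y:
                        break
                    left -= w
                for _ in range(max_steps):
                    if log_f(right) < log_y:
                        break
                    right += w
                while True:                      # shrinkage sampling
                    x_new = random.uniform(left, right)
                    if log_f(x_new) >= log_y:
                        x = x_new
                        break
                    if x_new < x:
                        left = x_new
                    else:
                        right = x_new
                samples.append(x)
            return samples

        # Example: sample from a standard Gumbel(mu=0, beta=1) density.
        log_gumbel = lambda x: -x - math.exp(-x)
        draws = slice_sample(log_gumbel, x0=0.0, n_samples=5000)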

  19. A general approach to the electronic spin relaxation of Gd(III) complexes in solutions. Monte Carlo simulations beyond the Redfield limit

    NASA Astrophysics Data System (ADS)

    Rast, S.; Fries, P. H.; Belorizky, E.; Borel, A.; Helm, L.; Merbach, A. E.

    2001-10-01

    The time correlation functions of the electronic spin components of a metal ion without orbital degeneracy in solution are computed. The approach is based on the numerical solution of the time-dependent Schrödinger equation for a stochastic perturbing Hamiltonian which is simulated by a Monte Carlo algorithm using discrete time steps. The perturbing Hamiltonian is quite general, including the superposition of both the static mean crystal field contribution in the molecular frame and the usual transient ligand field term. The Hamiltonian of the static crystal field can involve the terms of all orders, which are invariant under the local group of the average geometry of the complex. In the laboratory frame, the random rotation of the complex is the only source of modulation of this Hamiltonian, whereas an additional Ornstein-Uhlenbeck process is needed to describe the time fluctuations of the Hamiltonian of the transient crystal field. A numerical procedure for computing the electronic paramagnetic resonance (EPR) spectra is proposed and discussed. For the [Gd(H2O)8]3+ octa-aqua ion and the [Gd(DOTA)(H2O)]- complex [DOTA=1,4,7,10-tetrakis(carboxymethyl)-1,4,7,10-tetraazacyclododecane] in water, the predictions of the Redfield relaxation theory are compared with those of the Monte Carlo approach. The Redfield approximation is shown to be accurate for all temperatures and for electronic resonance frequencies at and above X-band, justifying the previous interpretations of EPR spectra. At lower frequencies the transverse and longitudinal relaxation functions derived from the Redfield approximation display significantly faster decays than the corresponding simulated functions. The practical interest of this simulation approach is underlined.

  20. Generating Multivariate Ordinal Data via Entropy Principles.

    PubMed

    Lee, Yen; Kaplan, David

    2018-03-01

    When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.

  1. Comparison of different tree sap flow up-scaling procedures using Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Tatarinov, Fyodor; Preisler, Yakir; Roahtyn, Shani; Yakir, Dan

    2015-04-01

    An important task in determining forest ecosystem water balance is the estimation of stand transpiration, allowing evapotranspiration to be separated into transpiration and soil evaporation. This can be based on up-scaling measurements of sap flow in representative trees (SF), which can be done by different mathematical algorithms. The aim of the present study was to evaluate the error associated with different up-scaling algorithms under different conditions. Other types of errors (such as measurement error, within-tree SF variability, choice of sample plot, etc.) were not considered here. A set of simulation experiments using the Monte Carlo technique was carried out and three up-scaling procedures were tested: (1) multiplying mean stand sap flux density per unit sapwood cross-section area (SFD) by total sapwood area (Klein et al., 2014); (2) deriving a linear dependence of tree sap flow on tree DBH and calculating SFstand using the SF predicted by DBH classes and the stand DBH distribution (Cermak et al., 2004); (3) the same as method 2 but using a non-linear dependence. Simulations were performed under different SFD(DBH) slopes (bs: positive, negative, zero), different DBH and SFD standard deviations (Δd and Δs, respectively) and DBH class sizes. It was assumed that all trees in a unit area are measured, and the total SF of all trees in the experimental plot was taken as the reference SFstand value. Under negative bs all models tend to overestimate SFstand and the error increases exponentially with decreasing bs. Under bs > 0 all models tend to underestimate SFstand, but the error is much smaller than under negative bs.
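
    A minimal sketch of the three up-scaling procedures compared above (the function names and inputs are illustrative; procedure 3 simply raises the polynomial degree of procedure 2):

        import numpy as np

        def upscale_mean_sfd(sfd_sampled, sapwood_area_stand):
            # Procedure 1: mean sap flux density per unit sapwood area
            # multiplied by the total stand sapwood area.
            return np.mean(sfd_sampled) * sapwood_area_stand

        def upscale_dbh_regression(dbh_sampled, sf_sampled,
                                   dbh_classes, n_trees_per_class, degree=1):
            # Procedures 2 and 3: fit SF as a (non-)linear function of DBH on
            # the sampled trees, then sum predictions over the stand DBH
            # distribution.
            coeffs = np.polyfit(dbh_sampled, sf_sampled, degree)
            return sum(n * np.polyval(coeffs, d)
                       for d, n in zip(dbh_classes, n_trees_per_class))

        dbh = np.array([12.0, 18.0, 25.0, 31.0, 40.0])  # sampled trees [cm]
        sf = np.array([5.0, 9.0, 14.0, 18.0, 25.0])     # their sap flow [kg/day]
        stand_sf = upscale_dbh_regression(dbh, sf,
                                          dbh_classes=[10, 20, 30, 40],
                                          n_trees_per_class=[50, 80, 40, 10])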

  2. The two-dimensional Monte Carlo: a new methodologic paradigm for dose reconstruction for epidemiological studies.

    PubMed

    Simon, Steven L; Hoffman, F Owen; Hofer, Eduard

    2015-01-01

    Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace and there is now an increased understanding about the needs and what is required for models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties of model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error), and uncertainty of doses arising from a lack of understanding and knowledge about parameter values that are shared to varying degrees by subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available. The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, due to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than a single set that emerges when each individual's dose is estimated independently of other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, is defined by the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
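
    The essence of the 2DMC procedure can be sketched with a toy multiplicative dose model (the lognormal parameters and the model itself are illustrative assumptions): shared uncertain quantities are drawn once per outer realization and subject-specific errors once per subject, yielding alternative cohort dose vectors instead of independent per-subject distributions:

        import numpy as np

        rng = np.random.default_rng(7)

        def two_dimensional_mc(n_vectors=500, n_subjects=200):
            # Outer loop: one draw of the cohort-wide (shared) parameters.
            # Inner draw: subject-specific (unshared) errors.
            dose_vectors = np.empty((n_vectors, n_subjects))
            intake = rng.lognormal(2.0, 0.5, n_subjects)       # subject exposures
            for v in range(n_vectors):
                shared_cf = rng.lognormal(0.0, 0.3)            # shared dose coefficient
                unshared = rng.lognormal(0.0, 0.2, n_subjects) # subject-specific error
                dose_vectors[v] = intake * shared_cf * unshared
            return dose_vectors

        vectors = two_dimensional_mc()
        # Spread of each subject's possibly true dose across realizations:
        print(vectors.std(axis=0)[:5])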

  3. Test equality between two binary screening tests with a confirmatory procedure restricted on screen positives.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2015-01-01

    In studies of screening accuracy, we commonly encounter data in which, for ethical reasons, a confirmatory procedure is administered only to subjects who screen positive. We focus our discussion on simultaneously testing equality of sensitivity and specificity between two binary screening tests when only subjects with screen positives receive the confirmatory procedure. We develop four asymptotic test procedures and one exact test procedure. We derive a sample size calculation formula for a desired power of detecting a difference at a given nominal [Formula: see text]-level. We employ Monte Carlo simulation to evaluate the performance of these test procedures and the accuracy of the sample size calculation formula developed here in a variety of situations. Finally, we use data obtained from a study of the prostate-specific-antigen test and digital rectal examination test on 949 Black men to illustrate the practical use of these test procedures and the sample size calculation formula.

  4. Monte Carlo simulations of time-of-flight PET with double-ended readout: calibration, coincidence resolving times and statistical lower bounds

    PubMed Central

    Derenzo, Stephen E

    2017-01-01

    This paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Another Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm3 scintillators: Lu2SiO5:Ce,Ca (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. The examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values. PMID:28327464

  5. Monte Carlo simulations of time-of-flight PET with double-ended readout: calibration, coincidence resolving times and statistical lower bounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derenzo, Stephen E.

    Here, this paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Another Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm3 scintillators: Lu2SiO5:Ce,Ca (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. Lastly, the examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values.

  6. Monte Carlo simulations of time-of-flight PET with double-ended readout: calibration, coincidence resolving times and statistical lower bounds

    DOE PAGES

    Derenzo, Stephen E.

    2017-04-11

    Here, this paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Another Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm3 scintillators: Lu2SiO5:Ce,Ca (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. Lastly, the examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values.

  7. Verification of recursive probabilistic integration (RPI) method for fatigue life management using non-destructive inspections

    NASA Astrophysics Data System (ADS)

    Chen, Tzikang J.; Shiao, Michael

    2016-04-01

    This paper verified a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods,1,2 and a probabilistic algorithm, RPI (recursive probability integration),3-9 considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties, such as the variability in material properties including crack growth rate, initial flaw size, repair quality, random process modeling of flight loads for failure analysis, and inspection reliability represented by probability of detection (POD). In addition, unlike traditional Monte Carlo simulations (MCS), which require a rerun of the MCS when the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion trials were conducted for various flight conditions, material properties, inspection schedules, PODs and repair/replacement strategies. Since MC simulations are time-consuming, they were conducted in parallel on DoD High Performance Computers (HPC) using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulations.

  8. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
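
    The paper's exact merging scheme is not given in the abstract; one plausible sketch of a 'low-weight merging' step combines the two lowest-weight simulation particles into one that conserves total statistical weight and mass, freeing a slot for a freshly nucleated particle at constant particle number:

        import numpy as np

        def merge_lowest_weight(weights, volumes):
            # Combine the two lowest-weight particles; the merged particle
            # keeps the total weight and the weight-averaged volume, so
            # total mass (sum of weight * volume) is conserved.
            i, j = np.argsort(weights)[:2]
            w_new = weights[i] + weights[j]
            v_new = (weights[i] * volumes[i] + weights[j] * volumes[j]) / w_new
            keep = np.ones(len(weights), bool)
            keep[[i, j]] = False
            return (np.append(weights[keep], w_new),
                    np.append(volumes[keep], v_new))

        w = np.array([1.0, 0.05, 0.8, 0.02])  # statistical weights
        v = np.array([1.0, 2.0, 3.0, 4.0])    # particle volumes
        w2, v2 = merge_lowest_weight(w, v)    # 4 -> 3 particles, mass conserved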

  9. Study the sensitivity of dose calculation in prism treatment planning system using Monte Carlo simulation of 6 MeV electron beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardiansyah, D.; Haryanto, F.; Male, S.

    2014-09-30

    Prism is a non-commercial radiotherapy treatment planning system (RTPS) developed by Ira J. Kalet at the University of Washington. An inhomogeneity factor is included in the Prism TPS dose calculation. The aim of this study is to investigate the sensitivity of dose calculation in Prism using Monte Carlo simulation. A phase space source from the head of the linear accelerator (LINAC) is implemented for the Monte Carlo simulation. To achieve this aim, Prism dose calculations are compared with EGSnrc Monte Carlo simulations. Percentage depth dose (PDD) and R50 from both calculations are observed. BEAMnrc simulates electron transport in the LINAC head and produces a phase space file. This file is used as DOSXYZnrc input to simulate electron transport in the phantom. The study starts with a commissioning process in a water phantom, in which the Monte Carlo simulation is adjusted to match the Prism RTPS, using R50 and the PDD with practical range (Rp) as references. The commissioning result is then used for the study of inhomogeneous phantoms. The physical parameters of the inhomogeneous phantom varied in this study are the density, location and thickness of the tissue. The commissioning shows that the optimum energy of the Monte Carlo simulation for the 6 MeV electron beam is 6.8 MeV. From the inhomogeneity study, the average deviation for all cases in the region of interest is below 5%. Based on ICRU recommendations, Prism has a good ability to calculate the radiation dose in inhomogeneous tissue.

  10. Fitting a three-parameter lognormal distribution with applications to hydrogeochemical data from the National Uranium Resource Evaluation Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1979-10-01

    The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters of a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order-statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets are analyzed, including geochemical data from the National Uranium Resource Evaluation Program.
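
    One member of the goodness-of-fit estimator class described above can be sketched by profiling the threshold parameter against the Shapiro-Wilk statistic of the shifted logarithms (the grid search, ranges and names are illustrative):

        import numpy as np
        from scipy import stats

        def fit_lognormal3_gof(x, n_grid=200):
            # Choose the threshold tau maximizing the Shapiro-Wilk W of
            # log(x - tau); mu and sigma then follow from the shifted logs.
            x = np.sort(np.asarray(x, float))
            span = x[-1] - x[0]
            taus = np.linspace(x[0] - span, x[0] - 1e-6 * span, n_grid)
            w = [stats.shapiro(np.log(x - t))[0] for t in taus]
            tau = taus[int(np.argmax(w))]
            logs = np.log(x - tau)
            return tau, logs.mean(), logs.std(ddof=1)

        sample = 5.0 + np.random.lognormal(mean=1.0, sigma=0.6, size=300)  # true tau = 5
        print("tau, mu, sigma =", fit_lognormal3_gof(sample))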

  11. Self-evolving atomistic kinetic Monte Carlo simulations of defects in materials

    DOE PAGES

    Xu, Haixuan; Beland, Laurent K.; Stoller, Roger E.; ...

    2015-01-29

    The recent development of on-the-fly atomistic kinetic Monte Carlo methods has led to an increased amount of attention on these methods and their corresponding capabilities and applications. In this review, the framework and current status of Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC) are discussed. SEAKMC particularly focuses on defect interaction and evolution with atomistic details, without assuming potential defect migration/interaction mechanisms and energies. The strengths and limitations of using an active volume, the key concept introduced in SEAKMC, are discussed. Potential criteria for characterizing an active volume are discussed and the influence of active volume size on saddle point energies is illustrated. A procedure starting with a small active volume followed by larger active volumes was found to possess higher efficiency. Applications of SEAKMC, ranging from point defect diffusion, to complex interstitial cluster evolution, to helium interaction with tungsten surfaces, are summarized. A comparison of SEAKMC with molecular dynamics and conventional object kinetic Monte Carlo is demonstrated. Overall, SEAKMC is found to be complementary to conventional molecular dynamics, especially when the harmonic approximation of transition state theory is accurate. However, it is capable of reaching longer time scales than molecular dynamics, and it can be used to systematically increase the accuracy of other methods such as object kinetic Monte Carlo. Furthermore, the challenges and potential development directions are also outlined.

  12. Real-time, ray casting-based scatter dose estimation for c-arm x-ray system.

    PubMed

    Alnewaini, Zaid; Langer, Eric; Schaber, Philipp; David, Matthias; Kretz, Dominik; Steil, Volker; Hesser, Jürgen

    2017-03-01

    Dosimetric control of staff exposure during interventional procedures under fluoroscopy is of high relevance. In this paper, a novel ray casting approximation of radiation transport is presented, and its potential and limitations versus full Monte Carlo transport and dose measurements are discussed. The x-ray source of a Siemens Axiom Artis C-arm is modeled by a virtual source model using a single Gaussian-shaped source. A Geant4-based Monte Carlo simulation determines the radiation transport from the source to compute scatter from the patient, the table, the ceiling, and the floor. A phase space around these scatterers stores all photon information. Only those photons that hit a surface of the phantom representing medical staff in the treatment room are traced; no indirect scattering is considered, and the complete dose deposition on the surface is calculated. To evaluate the accuracy of the approximation, both experimental measurements using thermoluminescent dosimeters (TLDs) and a Geant4-based Monte Carlo simulation of dose deposition were realized for different tube angulations of the C-arm, from cranial-caudal angle 0° and from LAO (Left Anterior Oblique) 0°-90°. Since the measurements were performed on both sides of the table, the symmetry of the setup made RAO (Right Anterior Oblique) measurements unnecessary. The Geant4 Monte Carlo simulation agreed within 3% with the measured data, which is within the accuracy of measurement and simulation. The ray casting approximation was compared to TLD measurements; the percentage difference was -7% for tube angulations 45°-90° and -29% for tube angulations 0°-45° on the side of the x-ray source, whereas on the opposite side the differences were -83.8% and -75%, respectively. The ray casting approximation for LAO 90° only was also compared to a Monte Carlo simulation: percentage differences were between 0.5-3% on the side of the x-ray source, where the highest doses were detected and arose mainly from primary scattering (photons), and between 2.8-20% on the side opposite the x-ray source, where the lowest doses were detected. The dose calculation time of our approach was 0.85 seconds. The proposed approach yields a fast scatter dose estimation: the Monte Carlo simulation needs to be run only once per x-ray tube angulation to generate Phase Space Files (PSF), which are then used by the ray casting approach to calculate the dose from only those photons that hit a movable, elliptical-cylinder-shaped phantom, and to output the hit positions for visualizing the scatter dose propagation on the phantom surface. With dose calculation times of less than one second, much time is saved compared to running a Monte Carlo simulation instead. With our approach, larger deviations occur only in regions with very low doses, whereas it provides high precision in high-dose regions. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  13. TU-H-CAMPUS-IeP1-03: Comparison of Monte Carlo Simulation and Conversion Factor Based Method On Estimation of Effective Dose in Pediatric Patients Undergoing Interventional Cardiac Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung, K; Wong, M; Ng, Y

    Purpose: Interventional cardiac procedures utilize frequent fluoroscopy and cineangiography, which impose considerable radiation risk on patients, especially pediatric patients. Accurate calculation of effective dose is important in order to estimate cancer risk over the rest of their lifetime. This study evaluates the difference between effective dose calculated by Monte Carlo simulation and that estimated by locally derived conversion factors (CF-local) and by commonly quoted conversion factors from Karambatsakidou et al (CF-K). Methods: The effective dose (E) of 12 pediatric patients, aged 2.5-19 years, who had undergone interventional cardiac procedures, was calculated using PCXMC-2.0 software. The tube spectrum, irradiation geometry, exposure parameters and dose-area product (DAP) of each projection were included in the software calculation. The effective dose for each patient was also estimated by two methods: 1) CF-local: a conversion factor derived locally by generalizing the results of the 12 patients, multiplied by the DAP of each patient, gives E-local. 2) CF-K: the selected factor from the above-mentioned literature, multiplied by the DAP of each patient, gives E-K. Results: The means of E, E-local and E-K were 16.01 mSv, 16.80 mSv and 22.25 mSv, respectively. A deviation of -29.35% to +34.85% between E and E-local, and a greater deviation of -28.96% to +60.86% between E and E-K, were observed. E-K overestimated the effective dose for patients aged 7.5-19. Conclusion: Effective dose obtained by conversion factors is a simple and quick way to estimate the radiation risk of pediatric patients. This study showed that estimation by CF-local may bear an error of 35% when compared with the Monte Carlo calculation. Using conversion factors derived in other studies may result in an even greater error, of up to 60%, due to factors not catered for in the estimation, including patient size, projection angles, exposure parameters, tube filtration, etc. Users must be aware of these potential inaccuracies when a simple conversion method is employed.
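
    The conversion-factor approach described here is arithmetically simple: E is approximated as a conversion factor times the dose-area product. A toy sketch follows; the conversion factor and DAP values are invented placeholders, not the locally derived (CF-local) or literature (CF-K) values of the study.

```python
# Toy illustration of conversion-factor dose estimation: E = CF * DAP.
# All numbers below are hypothetical, for illustration only.
dap_gy_cm2 = [12.0, 35.5, 8.2]          # per-patient dose-area products
cf_msv_per_gy_cm2 = 0.45                # hypothetical conversion factor

for dap in dap_gy_cm2:
    e_msv = cf_msv_per_gy_cm2 * dap
    print(f"DAP = {dap:6.1f} Gy*cm^2 -> E ~ {e_msv:5.2f} mSv")
```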

  14. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
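
    A minimal sketch of the Monte Carlo TRE idea used above: perturb landmark positions with an assumed localisation error, redo a rigid (Kabsch) registration each time, and report the 90th-percentile target registration error. The landmark layout, target position, and error magnitude are invented assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    h = (src - cs).T @ (dst - cd)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

landmarks = rng.uniform(-40, 40, size=(4, 3))   # hypothetical landmarks (mm)
target = np.array([60.0, 10.0, 5.0])            # hypothetical target (mm)
sigma = 3.0                                     # assumed localisation error SD (mm)

tres = []
for _ in range(5000):                           # Monte Carlo over localisation error
    noisy = landmarks + rng.normal(0.0, sigma, size=landmarks.shape)
    r, t = rigid_register(landmarks, noisy)     # registration driven by noisy landmarks
    tres.append(np.linalg.norm((r @ target + t) - target))

print(f"90th-percentile TRE ~ {np.percentile(tres, 90):.2f} mm")
```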

  15. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    PubMed

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.

  16. Monte Carlo simulations in multi-detector CT (MDCT) for two PET/CT scanner models using MASH and FASH adult phantoms

    NASA Astrophysics Data System (ADS)

    Belinato, W.; Santos, W. S.; Paschoal, C. M. M.; Souza, D. N.

    2015-06-01

    The combination of positron emission tomography (PET) and computed tomography (CT) has been extensively used in oncology for diagnosis and staging of tumors, radiotherapy planning and follow-up of patients with cancer, as well as in cardiology and neurology. This study determines, by the Monte Carlo method, the internal organ dose deposition in computational phantoms for the multidetector CT (MDCT) beams of two PET/CT devices operating with different parameters. The differences in MDCT beam parameters were largely related to the total filtration, which changes the beam energy inside the gantry. This parameter was determined experimentally with the Accu-Gold Radcal measurement system. The experimental values of the total filtration were included in the simulations of two MCNPX code scenarios. The absorbed organ doses obtained in the MASH and FASH phantoms indicate that the bowtie filter geometry and the energy of the X-ray beam have a significant influence on the results, although this influence can be compensated by adjusting other variables such as the tube current-time product (mAs) and pitch during PET/CT procedures.

  17. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    PubMed

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio, described jointly by probability and possibility, is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event considering both the fuzziness and the randomness of the failure criterion, design parameters and measured data. A credibility distribution function is introduced as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty and is suitable as an index value.
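
    As a deliberately simplified sketch of a Monte Carlo instability-risk estimate of this kind: random loads are sampled, a fuzzy friction coefficient is approximated here by a triangular distribution standing in for its credibility distribution, and the fraction of samples with a sliding safety factor below one is counted. All quantities are invented placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Simplified stand-ins (not the paper's model): random thrust and uplift,
# and a fuzzy friction coefficient approximated by a triangular density.
w = 5.0e5                                        # dam self-weight (kN), fixed
p = rng.normal(2.5e5, 5.0e4, n)                  # horizontal water thrust (kN)
u = rng.normal(6.0e4, 8.0e3, n)                  # uplift (kN)
f = rng.triangular(0.55, 0.70, 0.85, n)          # fuzzy-ish friction coefficient
c_a = rng.normal(4.0e4, 6.0e3, n)                # cohesion * contact area (kN)

k = (f * (w - u) + c_a) / p                      # sliding safety factor
risk = np.mean(k < 1.0)                          # failure criterion K < 1
print(f"estimated instability risk ~ {risk:.2e}")
```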

  18. Accurate acceleration of kinetic Monte Carlo simulations through the modification of rate constants.

    PubMed

    Chatterjee, Abhijit; Voter, Arthur F

    2010-05-21

    We present a novel computational algorithm called the accelerated superbasin kinetic Monte Carlo (AS-KMC) method that enables a more efficient study of rare-event dynamics than the standard KMC method while maintaining control over the error. In AS-KMC, the rate constants for processes that are observed many times are lowered during the course of a simulation. As a result, rare processes are observed more frequently than in KMC and the time progresses faster. We first derive error estimates for AS-KMC when the rate constants are modified. These error estimates are next employed to develop a procedure for lowering process rates with control over the maximum error. Finally, numerical calculations are performed to demonstrate that the AS-KMC method captures the correct dynamics, while providing significant CPU savings over KMC in most cases. We show that the AS-KMC method can be employed with any KMC model, even when no time scale separation is present (although in such cases no computational speed-up is observed), without requiring the knowledge of various time scales present in the system.
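
    A minimal sketch of the rate-lowering idea behind AS-KMC follows: processes executed many times have their rates scaled down so rare events surface sooner. The two-process system, the observation threshold, and the scaling factor are arbitrary illustration parameters; the full method's error control is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-process toy system: a fast "in-basin" hop and a rare escape.
base_rates = np.array([1.0e6, 1.0])    # s^-1: fast process, rare process
rates = base_rates.copy()
counts = np.zeros(2, dtype=int)
N_OBS, ALPHA = 50, 0.1                 # demo threshold and scaling, not AS-KMC's calibrated values

t, steps = 0.0, 0
while counts[1] < 10:                  # run until 10 rare events occur
    total = rates.sum()
    t += rng.exponential(1.0 / total)             # KMC time increment
    i = rng.choice(2, p=rates / total)            # pick process by rate
    counts[i] += 1
    steps += 1
    if counts[0] >= N_OBS:                        # frequently seen -> lower its rate
        rates[0] *= ALPHA
        counts[0] = 0

print(f"{steps} KMC steps, simulated time ~ {t:.3e} s (accelerated clock)")
```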

  19. Coarse-Grained and Atomistic Modeling of Polyimides

    NASA Technical Reports Server (NTRS)

    Clancy, Thomas C.; Hinkley, Jeffrey A.

    2004-01-01

    A coarse-grained model for a set of three polyimide isomers is developed. Each polyimide is composed of BPDA (3,3′,4,4′-biphenyltetracarboxylic dianhydride) and one of three APB isomers: 1,3-bis(4-aminophenoxy)benzene, 1,4-bis(4-aminophenoxy)benzene or 1,3-bis(3-aminophenoxy)benzene. The coarse-grained model is constructed as a series of linked vectors following the contour of the polymer backbone. Beads located at the midpoint of each vector define centers for the long-range interaction energy between monomer subunits. A bulk simulation of each coarse-grained polyimide model is performed with a dynamic Monte Carlo procedure. These coarse-grained models are then reverse-mapped to fully atomistic models. The coarse-grained models show the expected trend of decreasing chain dimensions with increasing meta linkage in the APB section of the repeat unit, although these differences were minor due to the relatively short chains simulated here. Considerable differences are seen among the dynamic Monte Carlo properties of the three polyimide isomers, with relaxation times decreasing as meta linkage in the APB section of the repeat unit increases.

  20. Decision making on fitness landscapes

    NASA Astrophysics Data System (ADS)

    Arthur, R.; Sibani, P.

    2017-04-01

    We discuss fitness landscapes and how they can be modified to account for co-evolution. We are interested in using the landscape as a way to model rational decision making in a toy economic system. We develop a model very similar to the Tangled Nature Model of Christensen et al. that we call the Tangled Decision Model. This is a natural setting for our discussion of co-evolutionary fitness landscapes. We use a Monte Carlo step to simulate decision making and investigate two different decision making procedures.

  1. Multivariate stochastic simulation with subjective multivariate normal distributions

    Treesearch

    P. J. Ince; J. Buongiorno

    1991-01-01

    In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
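
    The standard way to draw correlated normal inputs in a Monte Carlo study is through the Cholesky factor of the covariance matrix. A minimal sketch follows; the means, standard deviations, and correlation are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented example: two correlated inputs (e.g., price and demand growth).
mean = np.array([100.0, 0.03])
sd = np.array([15.0, 0.01])
corr = np.array([[1.0, -0.6],
                 [-0.6, 1.0]])
cov = np.outer(sd, sd) * corr

l = np.linalg.cholesky(cov)                    # cov = L @ L.T
z = rng.standard_normal((10_000, 2))           # independent standard normals
samples = mean + z @ l.T                       # correlated draws

print("sample correlation:\n", np.corrcoef(samples.T).round(3))
```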

  2. Lens of the eye dose calculation for neuro-interventional procedures and CBCT scans of the head

    NASA Astrophysics Data System (ADS)

    Xiong, Zhenyu; Vijayan, Sarath; Rana, Vijay; Jain, Amit; Rudin, Stephen; Bednarek, Daniel R.

    2016-03-01

    The aim of this work is to develop a method to calculate the lens dose for fluoroscopically-guided neuro-interventional procedures and for CBCT scans of the head. EGSnrc Monte Carlo software is used to determine the dose to the lens of the eye for the projection geometry and exposure parameters used in these procedures. This information is provided by a digital CAN bus on the Toshiba Infinix C-arm system and is saved in a log file by the real-time skin-dose tracking system (DTS) we previously developed. The x-ray beam spectra of this machine were simulated using BEAMnrc. These spectra were compared to those determined by SpekCalc and validated through measured percent-depth-dose (PDD) curves and half-value-layer (HVL) measurements. We simulated CBCT procedures in DOSXYZnrc for a CTDI head phantom, comparing the surface dose distribution with that measured with Gafchromic film, and for an SK150 head phantom, comparing the lens dose with that measured with an ionization chamber. Both methods demonstrated good agreement. The organ dose calculated for a simulated neuro-interventional procedure using DOSXYZnrc with the Zubal CT voxel phantom agreed within 10% with that calculated by the PCXMC code for most organs. To calculate the lens dose in a neuro-interventional procedure, we developed a library of normalized lens dose values for different projection angles and kVp values. The total lens dose is then calculated by summing the values over all beam projections and can be included on the DTS report at the end of the procedure.

  3. The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Chen, Jundong

    2018-03-01

    Molecular dynamics is an integrated technology that combines physics, mathematics and chemistry. The molecular dynamics method is a computer simulation technique and a powerful tool for studying condensed matter systems. The technique yields not only the trajectories of the atoms but also the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure, the motion of the particles, and their macroscopic relationship to the material, and we can also study the relationship between the interactions and macroscopic properties more conveniently. The Monte Carlo simulation, similar to molecular dynamics, is a tool for studying the nature of micro-molecules and particles. In this paper, the theoretical background of computer numerical simulation is introduced, and specific methods of numerical integration are summarized, including the Verlet method, the Leap-frog method and the Velocity Verlet method. At the same time, the method and principle of Monte Carlo simulation are introduced. Finally, the similarities and differences between Monte Carlo simulation and molecular dynamics simulation are discussed.

  4. Gray: a ray tracing-based Monte Carlo simulator for PET

    NASA Astrophysics Data System (ADS)

    Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.

    2018-05-01

    Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.

  5. Simulation Analysis of Computer-Controlled pressurization for Mixture Ratio Control

    NASA Technical Reports Server (NTRS)

    Alexander, Leslie A.; Bishop-Behel, Karen; Benfield, Michael P. J.; Kelley, Anthony; Woodcock, Gordon R.

    2005-01-01

    A procedural code (C++) simulation was developed to investigate the potential for mixture ratio control of pressure-fed spacecraft rocket propulsion systems by measuring propellant flows, tank liquid quantities, or both, and using feedback from these measurements to adjust propellant tank pressures to set the correct operating mixture ratio for minimum propellant residuals. The pressurization system eliminated mechanical regulators in favor of a computer-controlled, servo-driven throttling valve. We found that a quasi-steady-state simulation (pressure and flow transients in the pressurization system resulting from changes in flow control valve position are ignored) is adequate for this purpose. Monte Carlo methods are used to obtain simulated statistics on propellant depletion. Mixture ratio control algorithms based on proportional-integral-differential (PID) controller methods were developed. These algorithms actually set target tank pressures; the tank pressures are controlled by another PID controller. Simulation indicates this approach can provide reductions in residual propellants.

  6. Hit rates and radiation doses to nuclei of bone lining cells from alpha-particle-emitting radionuclides

    NASA Technical Reports Server (NTRS)

    Polig, E.; Jee, W. S.; Kruglikov, I. L.

    1992-01-01

    Factors relating the local concentration of a bone-seeking alpha-particle emitter to the mean hit rate have been determined for nuclei of bone lining cells using a Monte Carlo procedure. Cell nuclei were approximated by oblate spheroids with dimensions and location taken from a previous histomorphometric study. The Monte Carlo simulation is applicable for planar and diffuse labels at plane or cylindrical bone surfaces. Additionally, the mean nuclear dose per hit, the dose mean per hit, the mean track segment length and its second moment, the percentage of stoppers, and the frequency distribution of the dose have been determined. Some basic features of the hit statistics for bone lining cells have been outlined, and the consequences of existing standards of radiation protection with regard to the hit frequency to cell nuclei are discussed.

  7. Local Linear Regression for Data with AR Errors.

    PubMed

    Li, Runze; Li, Yan

    2009-07-01

    In many statistical applications, data are collected over time and are likely correlated. In this paper, we investigate how to incorporate the correlation information into local linear regression. Under the assumption that the error process is an autoregressive process, a new estimation procedure is proposed for nonparametric regression by using the local linear regression method and profile least squares techniques. We further propose the SCAD-penalized profile least squares method to determine the order of the autoregressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure and to compare it with the existing one. In our empirical studies, the newly proposed procedures dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology by an analysis of a real data set.

  8. Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias

    2015-01-01

    A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained in our framework. We also find that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.

  9. The use of Monte Carlo simulations for accurate dose determination with thermoluminescence dosemeters in radiation therapy beams.

    PubMed

    Mobit, P

    2002-01-01

    The energy responses of LiF-TLDs irradiated in megavoltage electron and photon beams have been determined experimentally by many investigators over the past 35 years, but the results vary considerably. General cavity theory has been used to model some of the experimental findings, but the predictions of these cavity theories differ from each other and from measurements by more than 13%. Recently, two groups of investigators using Monte Carlo simulations and careful experimental techniques showed that the energy response of 1 mm or 2 mm thick LiF-TLDs irradiated by megavoltage photon and electron beams is not more than 5% less than unity for low-Z phantom materials like water or Perspex. However, when the depth of irradiation is significantly different from dmax and the TLD size is more than 5 mm, the energy response is up to 12% less than unity for incident electron beams. Monte Carlo simulations of some of the experiments reported in the literature showed that some of the contradictory experimental results are reproducible with Monte Carlo simulations. Monte Carlo simulations show that the energy response of LiF-TLDs in electron beams depends on the size of the detector used, the depth of irradiation and the incident electron energy. Other differences can be attributed to absolute dose determination and the precision of the TL technique. Monte Carlo simulations have also been used to evaluate some of the published general cavity theories. The results show that some of the parameters used to evaluate Burlin's general cavity theory are wrong by a factor of 3. Despite this, the estimate of the energy response for most clinical situations using Burlin's cavity equation agrees with Monte Carlo simulations within 1%.

  10. Monte Carlo simulation of aorta autofluorescence

    NASA Astrophysics Data System (ADS)

    Kuznetsova, A. A.; Pushkareva, A. E.

    2016-08-01

    Results of the numerical simulation of aorta autofluorescence by the Monte Carlo method are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about its optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results, indicating the adequacy of the developed model of aorta autofluorescence.

  11. Physical Principle for Generation of Randomness

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2009-01-01

    A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)

  12. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets and compared them to one another in terms of efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
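
    As a sketch of the simulated-annealing half of such an inversion, the code below anneals a two-parameter toy forward model against noisy synthetic data. The forward model, noise level, step sizes, and cooling schedule are invented illustration choices, not the paper's dike/fault parameterization.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy forward model standing in for a deformation source: two parameters.
def forward(p, x):
    depth, strength = p
    return strength * depth / (x**2 + depth**2) ** 1.5   # point-source-like decay

x_obs = np.linspace(-20.0, 20.0, 41)
p_true = np.array([5.0, 300.0])
d_obs = forward(p_true, x_obs) + rng.normal(0.0, 0.05, x_obs.size)

def misfit(p):
    return np.sum((forward(p, x_obs) - d_obs) ** 2)

p = np.array([10.0, 100.0])            # starting guess
e = misfit(p)
temp = 1.0
for k in range(20_000):                # simulated annealing loop
    trial = p + rng.normal(0.0, [0.2, 5.0])
    if trial[0] <= 0:                  # keep depth physical
        continue
    e_t = misfit(trial)
    if e_t < e or rng.random() < np.exp((e - e_t) / temp):
        p, e = trial, e_t              # accept downhill, or uphill with Boltzmann prob.
    temp *= 0.9997                     # geometric cooling schedule

print("recovered parameters:", p.round(2), " misfit:", round(e, 4))
```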

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, H; Ding, H; Ziemer, B

    Purpose: To investigate the feasibility of energy calibration and energy response characterization of a photon counting detector using x-ray fluorescence. Methods: A comprehensive Monte Carlo simulation study was done to investigate the influence of various geometric components on the x-ray fluorescence measurement. Different materials, sizes, and detection angles were simulated using the Geant4 Application for Tomographic Emission (GATE) Monte Carlo package. Simulations were conducted using 100 kVp tungsten-anode spectra with a 2 mm Al filter for a single-pixel cadmium telluride (CdTe) detector with a 3 × 3 mm² detection area. The fluorescence material was placed 300 mm away from both the x-ray source and the detector. For the angular dependence measurement, the distance was decreased to 30 mm to reduce the simulation time. Compound materials containing silver, barium, gadolinium, hafnium, and gold, in cylindrical shape, were simulated. The object size varied from 5 to 100 mm in diameter. The angular dependence of fluorescence and scatter was simulated from 20° to 170° in incremental steps of 10° to optimize the fluorescence-to-scatter ratio. Furthermore, the angular dependence was also experimentally measured using a spectrometer (X-123CdTe, Amptek Inc., MA) to validate the simulation results. Results: Detection angles between 120° and 160° resulted in a more optimal x-ray fluorescence-to-scatter ratio. At a detection angle of 120°, the object size did not have a significant effect on the fluorescence-to-scatter ratio. The experimental results of the fluorescence angular dependence are in good agreement with the simulation results. The Kα and Kβ peaks of the five materials could be identified. Conclusion: The simulation results show that the x-ray fluorescence procedure has the potential to be used for detector energy calibration and characterization of the detector response by using the optimal system geometry.

  14. Mixing of Isotactic and Syndiotactic Polypropylenes in the Melt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CLANCY,THOMAS C.; PUTZ,MATHIAS; WEINHOLD,JEFFREY D.

    2000-07-14

    The miscibility of polypropylene (PP) melts in which the chains differ only in stereochemical composition has been investigated by two different procedures. One approach uses detailed local information from a Monte Carlo simulation of a single chain, and the other takes this information from a rotational isomeric state model devised decades ago for another purpose. The first approach uses PRISM theory to deduce the intermolecular packing in the polymer blend, while the second uses a Monte Carlo simulation of a coarse-grained representation of independent chains, expressed on a high-coordination lattice. Both approaches find a positive energy change upon mixing isotactic PP (iPP) and syndiotactic polypropylene (sPP) chains in the melt. This conclusion is qualitatively consistent with observations published recently by Muelhaupt and coworkers. The energy change on mixing is smaller in the MC/PRISM approach than in the RIS/MC simulation, with the smaller energy change being in better agreement with the experiment. The RIS/MC simulation finds no demixing for iPP and atactic polypropylene (aPP) in the melt, consistent with several experimental observations in the literature. The demixing of the iPP/sPP blend may arise from attractive interactions in the sPP melt that are disrupted when the sPP chains are diluted with aPP or iPP chains.

  15. Monte Carlo simulations of backscattering process in dislocation-containing SrTiO3 single crystal

    NASA Astrophysics Data System (ADS)

    Jozwik, P.; Sathish, N.; Nowicki, L.; Jagielski, J.; Turos, A.; Kovarik, L.; Arey, B.

    2014-05-01

    Studies of defect formation in crystals are of obvious importance in electronics, nuclear engineering and other disciplines where materials are exposed to different forms of irradiation. Rutherford Backscattering/Channeling (RBS/C) and Monte Carlo (MC) simulations are the most convenient tools for this purpose, as they allow one to determine several features of lattice defects: their type, concentration and damage accumulation kinetics. Moreover, various irradiation conditions can be efficiently modeled by the ion irradiation method without making the sample radioactive. The combination of ion irradiation with channeling experiments and MC simulations thus appears to be a most versatile method for studying radiation damage in materials. This paper presents the results of such a study performed on SrTiO3 (STO) single crystals irradiated with 320 keV Ar ions. The samples were also analyzed using HRTEM as a complementary method, which enables the measurement of the geometrical parameters of crystal lattice deformation in the vicinity of dislocations. Once these parameters and their variations within a distance of several lattice constants from the dislocation core are known, they may be used in MC simulations for the quantitative determination of dislocation depth distribution profiles. The final outcome of the deconvolution procedure is the cross-section values calculated for the two types of defects observed (RDA and dislocations).

  16. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)

  17. The Monte Carlo Method. Popular Lectures in Mathematics.

    ERIC Educational Resources Information Center

    Sobol', I. M.

    The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
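
    For a self-contained flavour of the method the booklet describes, the sketch below estimates π by sampling random points in the unit square; the statistical error decays as N^(-1/2), the characteristic Monte Carlo rate.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
pts = rng.random((n, 2))                       # uniform points in the unit square
inside = (pts ** 2).sum(axis=1) <= 1.0         # inside the quarter circle?
pi_hat = 4.0 * inside.mean()
err = 4.0 * inside.std() / np.sqrt(n)          # ~ N^(-1/2) statistical error
print(f"pi ~ {pi_hat:.4f} +/- {err:.4f}")
```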

  18. Fixed forced detection for fast SPECT Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Cajgfinger, T.; Rit, S.; Létang, J. M.; Halty, A.; Sarrut, D.

    2018-03-01

    Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.

  19. Fixed forced detection for fast SPECT Monte-Carlo simulation.

    PubMed

    Cajgfinger, T; Rit, S; Létang, J M; Halty, A; Sarrut, D

    2018-03-02

    Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.
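
    A deliberately crude 2-D toy of the forced-detection weighting idea in the two records above: instead of waiting for a photon to reach a pixel by chance, each emission deposits in every pixel a weight equal to the emission probability toward it times the attenuation along the path. The geometry, attenuation coefficient, and per-pixel weighting here are invented simplifications; the real method's detector response and scatter handling are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy geometry: emission points in a 2-D region, a 1-D "detector" of pixels.
pix_x = np.linspace(-10.0, 10.0, 21)          # detector pixel centres (cm)
det_y = 15.0                                  # detector plane distance (cm)
mu = 0.15                                     # assumed uniform attenuation (cm^-1)

image = np.zeros_like(pix_x)
for _ in range(10_000):                       # Monte Carlo over emission points
    src = np.array([rng.uniform(-5, 5), rng.uniform(-5, 5)])
    d = np.hypot(pix_x - src[0], det_y - src[1])   # path length to every pixel
    # Forced detection: deposit in *each* pixel a weight equal to the
    # isotropic emission density toward it times the transmission.
    emit = 1.0 / (2.0 * np.pi * d)            # 2-D isotropic emission density
    image += emit * np.exp(-mu * d)

print("pixel weights (centre of detector):", image[8:13].round(2))
```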

  20. Monte Carlo simulation: Its status and future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murtha, J.A.

    1997-04-01

    Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision [tree] analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.

  1. Direct Simulation Monte Carlo Calculations in Support of the Columbia Shuttle Orbiter Accident Investigation

    NASA Technical Reports Server (NTRS)

    Gallis, Michael A.; LeBeau, Gerald J.; Boyles, Katie A.

    2003-01-01

    The Direct Simulation Monte Carlo method was used to provide 3-D simulations of the early entry phase of the Shuttle Orbiter. Undamaged and damaged scenarios were modeled to provide calibration points for engineering "bridging function" analysis. Currently, the simulation technology (software and hardware) is mature enough to allow realistic simulations of three-dimensional vehicles.

  2. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed Supplement 1 to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly, which implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate the parameters of the input distributions. We illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach is compared to the standard GUM approach for finite samples using simple non-linear measurement equations, and performance is investigated in terms of the coverage probabilities of the derived confidence intervals.
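
    A minimal sketch of the two-stage idea, assuming normal inputs and a Bayesian-flavoured stand-in for the paper's procedure: the outer stage draws (mean, sd) pairs consistent with the finite samples, the inner stage propagates each pair through a non-linear measurement equation. The measurand y = a/b and all sample values are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Small observed samples of two inputs to a non-linear measurand y = a / b.
a_obs = rng.normal(10.0, 0.5, size=8)
b_obs = rng.normal(2.0, 0.1, size=5)

def stage1(obs, n_draw):
    """Draw plausible (mean, sd) pairs consistent with a finite sample."""
    n, m, s = len(obs), obs.mean(), obs.std(ddof=1)
    sd = s * np.sqrt((n - 1) / rng.chisquare(n - 1, n_draw))  # scaled inverse-chi^2
    mu = rng.normal(m, sd / np.sqrt(n))                       # mean given sd
    return mu, sd

outer, inner = 2000, 500
mu_a, sd_a = stage1(a_obs, outer)
mu_b, sd_b = stage1(b_obs, outer)
y = []
for i in range(outer):                      # stage 2: propagate each plausible pair
    a = rng.normal(mu_a[i], sd_a[i], inner)
    b = rng.normal(mu_b[i], sd_b[i], inner)
    y.append(a / b)
y = np.concatenate(y)
print(f"y = {y.mean():.3f}, 95% interval [{np.percentile(y, 2.5):.3f}, "
      f"{np.percentile(y, 97.5):.3f}]")
```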

  3. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    PubMed

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
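
    The physical clock this record concerns is simple to state for a continuous-time Markov chain: each KMC step advances time by an exponential draw with the total rate. A minimal sketch of rejection-free (BKL) event selection with this clock follows; the three rates are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(9)

rates = np.array([2.0, 0.5, 0.1])     # per-process rates k_i (s^-1), illustrative
r_tot = rates.sum()

t, counts = 0.0, np.zeros(3, dtype=int)
for _ in range(100_000):
    i = rng.choice(3, p=rates / r_tot)        # rejection-free (BKL) event selection
    counts[i] += 1
    t += rng.exponential(1.0 / r_tot)         # physical clock: dt ~ Exp(R_total)

print("event fractions:", (counts / counts.sum()).round(3),
      " mean dt:", t / counts.sum())
```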

  4. Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave

    NASA Astrophysics Data System (ADS)

    Yasuda, Shugo

    2017-02-01

    A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of the bacteria, is coupled with a finite volume method that calculates the macroscopic transport of the chemical cues in the environment. The simulation method successfully reproduces the traveling population wave of bacteria observed experimentally and reveals the microscopic dynamics of the bacteria coupled with the macroscopic transport of the chemical cues and the bacterial population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, defined as the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic regime of small Knudsen numbers is numerically verified.
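
    A minimal sketch of the run-and-tumble Monte Carlo step follows, with the tumbling rate biased by a fixed attractant gradient; the coupled finite-volume chemical solver of the actual method is omitted, and all rates and sensitivities are invented illustration values.

```python
import numpy as np

rng = np.random.default_rng(10)

# 1-D run-and-tumble with a tumbling rate biased by a fixed attractant
# gradient (the coupled finite-volume chemical solver is omitted here).
n, dt, v = 2000, 0.01, 1.0
x = np.zeros(n)                          # bacteria positions
s = rng.choice([-1.0, 1.0], n)           # run directions
lam0, chi = 1.0, 0.5                     # base tumble rate, gradient sensitivity

def grad_c(x):                           # fixed attractant gradient, increasing in x
    return np.full_like(x, 0.2)

for _ in range(5000):
    x += v * s * dt                                       # run phase
    lam = (lam0 * (1.0 - chi * s * grad_c(x))).clip(0.1)  # up-gradient runs tumble less
    tumble = rng.random(n) < lam * dt                     # tumble with prob lambda*dt
    s[tumble] = rng.choice([-1.0, 1.0], tumble.sum())     # pick a fresh direction

print(f"mean drift after t = {5000*dt:.0f} s: {x.mean():.2f} (up-gradient)")
```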

  5. SU-F-T-50: Evaluation of Monte Carlo Simulations Performance for Pediatric Brachytherapy Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatzipapas, C; Kagadis, G; Papadimitroulas, P

    Purpose: Pediatric tumors are generally treated with multi-modal procedures. Brachytherapy can be used with pediatric tumors, especially given that in this patient population low toxicity to normal tissues is critical, as is suppressing the probability of late malignancies. Our goal is to validate the GATE toolkit on realistic brachytherapy applications and to evaluate brachytherapy plans on pediatric patients for accurate dosimetry of sensitive and critical organs of interest. Methods: The GATE Monte Carlo (MC) toolkit was used. Two High Dose Rate (HDR) 192Ir brachytherapy sources were simulated (Nucletron mHDR-v1 and Varian VS2000) and fully validated using the AAPM and ESTRO protocols. A realistic brachytherapy plan was also simulated using the XCAT anthropomorphic computational model, and the simulated data were compared to the clinical dose points. Finally, a 14-year-old girl with vaginal rhabdomyosarcoma was modelled based on clinical procedures for the calculation of the absorbed dose per organ. Results: The MC simulations resulted in accurate dosimetry in terms of the dose rate constant (Λ), radial dose gL(r) and anisotropy function F(r,θ) for both sources. The simulations were executed using ∼10^10 primaries, resulting in statistical uncertainties lower than 2%. The differences between the theoretical values and the simulated ones ranged from 0.01% up to 3.3%, with the largest discrepancy (6%) being observed in the dose rate constant calculation. The simulated DVH using an adult female XCAT model was also compared to a clinical one, resulting in differences smaller than 5%. Finally, a realistic pediatric brachytherapy simulation was performed to evaluate the absorbed dose per organ and to calculate the DVH with respect to the heterogeneities of the human anatomy. Conclusion: GATE is a reliable tool for brachytherapy simulations, both for source modeling and for dosimetry in anthropomorphic voxelized models. Our project aims to evaluate a variety of pediatric brachytherapy schemes using a population of pediatric phantoms for several pathological cases. This study is part of a project that has received funding from the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 691203. The results published in this study reflect only the authors' view, and the Research Executive Agency (REA) and the European Commission are not responsible for any use that may be made of the information it contains.

  6. Solar Proton Transport within an ICRU Sphere Surrounded by a Complex Shield: Combinatorial Geometry

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    The 3DHZETRN code, with improved neutron and light ion (Z ≤ 2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency.

  7. Comparison of Fatigue Life Estimation Using Equivalent Linearization and Time Domain Simulation Methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Dhainaut, Jean-Michel

    2000-01-01

    The Monte Carlo simulation method in conjunction with the finite element large deflection modal formulation are used to estimate fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with bandwidth 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of PSD with Dirlik's approach is employed to estimate the panel fatigue life.

  8. Electron transport in furfural: dependence of the electron ranges on the cross sections and the energy loss distribution functions

    NASA Astrophysics Data System (ADS)

    Ellis-Gibbings, L.; Krupa, K.; Colmenares, R.; Blanco, F.; Muñoz, A.; Mendes, M.; Ferreira da Silva, F.; Limão-Vieira, P.; Jones, D. B.; Brunger, M. J.; García, G.

    2016-09-01

    Recent theoretical and experimental studies have provided a complete set of differential and integral electron scattering cross section data for furfural over a broad energy range. The energy loss distribution functions have been determined in this study by averaging electron energy loss spectra for different incident energies and scattering angles. All these data have been used as input parameters for an event-by-event Monte Carlo simulation procedure to obtain the electron energy deposition patterns and electron ranges in liquid furfural. The dependence of these results on the input cross sections is then analysed to determine the uncertainty of the simulated values.

  9. A statistical methodology for estimating transport parameters: Theory and applications to one-dimensional advective-dispersive systems

    USGS Publications Warehouse

    Wagner, Brian J.; Gorelick, Steven M.

    1986-01-01

    A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for the dispersion coefficient and the zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
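
    A minimal sketch of the nonlinear weighted least squares step follows, fitting the analytic 1-D advection-dispersion pulse solution C(x,t) = exp(-(x - vt)^2 / (4Dt)) / sqrt(4 pi D t) to noisy synthetic data as a stand-in for the paper's finite-difference-plus-regression machinery; all values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Analytic 1-D advection-dispersion pulse (unit mass per area), used here as a
# stand-in for the paper's finite-difference transport simulator.
def conc(x, v, d, t=50.0):
    return np.exp(-(x - v * t) ** 2 / (4.0 * d * t)) / np.sqrt(4.0 * np.pi * d * t)

rng = np.random.default_rng(11)
x = np.linspace(0.0, 100.0, 60)                      # spatial sampling points
true_v, true_d = 1.0, 2.0
data = conc(x, true_v, true_d) + rng.normal(0.0, 0.002, x.size)

# Weighted least squares via curve_fit; sigma gives the measurement-error weights.
popt, pcov = curve_fit(conc, x, data, p0=[0.5, 1.0], sigma=np.full(x.size, 0.002))
se = np.sqrt(np.diag(pcov))                          # parameter standard errors
print(f"v = {popt[0]:.3f} +/- {se[0]:.3f}, D = {popt[1]:.3f} +/- {se[1]:.3f}")
```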

  10. Filtered Mass Density Function for Design Simulation of High Speed Airbreathing Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Drozda, T. G.; Sheikhi, R. M.; Givi, Peyman

    2001-01-01

    The objective of this research is to develop and implement new methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed two years of Phase I of this research. This annual report provides a brief and up-to-date summary of our activities during the period September 1, 2000 through August 31, 2001. In the work within the past year, a methodology termed "velocity-scalar filtered density function" (VSFDF) is developed and implemented for large eddy simulation of turbulent flows. In this methodology the effects of the unresolved subgrid scales (SGS) are taken into account by considering the joint probability density function (PDF) of all of the components of the velocity and scalar vectors. An exact transport equation is derived for the VSFDF in which the effects of the unresolved SGS convection, SGS velocity-scalar source, and SGS scalar-scalar source terms appear in closed form. The remaining unclosed terms in this equation are modeled. A system of stochastic differential equations (SDEs) which yields statistically equivalent results to the modeled VSFDF transport equation is constructed. These SDEs are solved numerically by a Lagrangian Monte Carlo procedure. The consistency of the proposed SDEs and the convergence of the Monte Carlo solution are assessed by comparison with results obtained by an Eulerian LES procedure in which the corresponding transport equations for the first two SGS moments are solved. The unclosed SGS convection, SGS velocity-scalar source, and SGS scalar-scalar source terms in the Eulerian LES are replaced by the corresponding terms from the VSFDF equation. The consistency of the results is then analyzed for the case of a two-dimensional mixing layer.

  11. Effect of battery longevity on costs and health outcomes associated with cardiac implantable electronic devices: a Markov model-based Monte Carlo simulation.

    PubMed

    Schmier, Jordana K; Lau, Edmund C; Patel, Jasmine D; Klenk, Juergen A; Greenspon, Arnold J

    2017-11-01

    The effects of device and patient characteristics on health and economic outcomes in patients with cardiac implantable electronic devices (CIEDs) are unclear. Modeling can estimate costs and outcomes for patients with CIEDs under a variety of scenarios, varying battery longevity, comorbidities, and care settings. The objective of this analysis was to compare changes in patient outcomes and payer costs attributable to increases in battery life of implantable cardiac defibrillators (ICDs) and cardiac resynchronization therapy defibrillators (CRT-D). We developed a Monte Carlo Markov model simulation to follow patients through primary implant, postoperative maintenance, generator replacement, and revision states. Patients were simulated in 3-month increments for 15 years or until death. Key variables included Charlson Comorbidity Index, CIED type, legacy versus extended battery longevity, mortality rates (procedure and all-cause), infection and non-infectious complication rates, and care settings. Costs included procedure-related (facility and professional), maintenance, and infections and non-infectious complications, all derived from Medicare data (2004-2014, 5% sample). Outcomes included counts of battery replacements, revisions, infections and non-infectious complications, and discounted (3%) costs and life years. An increase in battery longevity in ICDs yielded reductions in numbers of revisions (by 23%), battery changes (by 44%), infections (by 23%), non-infectious complications (by 10%), and total costs per patient (by 9%). Analogous reductions for CRT-Ds were 23% (revisions), 32% (battery changes), 22% (infections), 8% (complications), and 10% (costs). Based on modeling results, as battery longevity increases, patients experience fewer adverse outcomes and healthcare costs are reduced. Understanding the magnitude of the cost benefit of extended battery life can inform budgeting and planning decisions by healthcare providers and insurers.
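
    A minimal sketch of a Markov Monte Carlo cohort simulation of this kind follows: each simulated patient walks through maintenance, replacement, revision, and death states in 3-month cycles while discounted costs and life-years accrue. The states mirror those named above, but all transition probabilities and costs are invented placeholders, not the study's Medicare-derived inputs.

```python
import numpy as np

rng = np.random.default_rng(12)

# Invented 3-month transition probabilities and per-cycle costs.
STATES = ["maintenance", "replacement", "revision", "dead"]
P = np.array([[0.955, 0.020, 0.015, 0.010],     # from maintenance
              [0.960, 0.000, 0.030, 0.010],     # from replacement
              [0.970, 0.000, 0.020, 0.010],     # from revision
              [0.000, 0.000, 0.000, 1.000]])    # dead is absorbing
COST = np.array([300.0, 15000.0, 20000.0, 0.0]) # cost incurred in each state
DISC = 0.03 / 4                                 # 3% annual discount, quarterly

def simulate_patient(cycles=60):                # 60 quarters = 15 years
    state, cost, life = 0, 0.0, 0.0
    for k in range(cycles):
        cost += COST[state] / (1.0 + DISC) ** k # discounted cost this cycle
        if state != 3:
            life += 0.25                        # life-years accrued per quarter
        state = rng.choice(4, p=P[state])       # Markov transition
    return cost, life

results = np.array([simulate_patient() for _ in range(5000)])
print(f"mean discounted cost ~ ${results[:, 0].mean():,.0f}, "
      f"mean life-years ~ {results[:, 1].mean():.2f}")
```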

  12. Feature Screening in Ultrahigh Dimensional Cox's Model.

    PubMed

    Yang, Guangren; Yu, Ye; Li, Runze; Buu, Anne

    Survival data with ultrahigh dimensional covariates such as genetic markers have been collected in medical studies and other fields. In this work, we propose a feature screening procedure for the Cox model with ultrahigh dimensional covariates. The proposed procedure is distinguished from the existing sure independence screening (SIS) procedures (Fan, Feng and Wu, 2010, Zhao and Li, 2012) in that the proposed procedure is based on joint likelihood of potential active predictors, and therefore is not a marginal screening procedure. The proposed procedure can effectively identify active predictors that are jointly dependent but marginally independent of the response without performing an iterative procedure. We develop a computationally effective algorithm to carry out the proposed procedure and establish the ascent property of the proposed algorithm. We further prove that the proposed procedure possesses the sure screening property. That is, with the probability tending to one, the selected variable set includes the actual active predictors. We conduct Monte Carlo simulation to evaluate the finite sample performance of the proposed procedure and further compare the proposed procedure and existing SIS procedures. The proposed methodology is also demonstrated through an empirical analysis of a real data example.

  13. SU-E-T-481: Dosimetric Effects of Tissue Heterogeneity in Proton Therapy: Monte Carlo Simulation and Experimental Study Using Animal Tissue Phantoms.

    PubMed

    Liu, Y; Zheng, Y

    2012-06-01

    Accurate determination of the dosimetric effect of tissue heterogeneity is critical in proton therapy: proton beams have a finite range, so heterogeneity plays an especially critical role. The purpose of this study is to investigate the tissue heterogeneity effect in proton dosimetry through anatomy-based Monte Carlo simulation using animal tissues. Animal tissues including a pig head and bulk beef were used in this study. Both the pig head and the beef were scanned using a GE CT scanner with 1.25 mm slice thickness. A treatment plan was created using the CMS XiO treatment planning system (TPS) with a single proton spread-out Bragg peak (SOBP) beam. Radiochromic films were placed at the distal falloff region. Image guidance was used to align the phantom before proton beams were delivered according to the treatment plan. The same two CT sets were converted to a Monte Carlo simulation model. The Monte Carlo dose calculations with and without tissue composition were compared to TPS calculations and measurements. Based on the preliminary comparison, at the center of the SOBP plane the Monte Carlo dose without tissue composition agreed generally well with the TPS calculation. In the distal falloff region the dose difference was large, and an isodose line shift of about 2 mm was observed when tissue composition was taken into account. A detailed comparison of dose distributions between Monte Carlo simulation, TPS calculations, and measurements is underway. Accurate proton dose calculations are challenging in proton treatment planning for heterogeneous tissues. Tissue heterogeneity and tissue composition may lead to isodose line shifts of up to a few millimeters in the distal falloff region. By simulating detailed particle transport and energy deposition, Monte Carlo simulations provide a verification method for proton dose calculation where inhomogeneous tissues are present. © 2012 American Association of Physicists in Medicine.

  14. Self-learning Monte Carlo method

    DOE PAGES

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...

    2017-01-04

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10–20 times speedup.

  15. Brownian dynamics and dynamic Monte Carlo simulations of isotropic and liquid crystal phases of anisotropic colloidal particles: a comparative study.

    PubMed

    Patti, Alessandro; Cuetos, Alejandro

    2012-07-01

    We report on the diffusion of purely repulsive and freely rotating colloidal rods in the isotropic, nematic, and smectic liquid crystal phases to probe the agreement between Brownian and Monte Carlo dynamics under the most general conditions. By properly rescaling the Monte Carlo time step, which is related to each elementary move via the corresponding self-diffusion coefficient, by the acceptance rate of simultaneous trial displacements and rotations, we demonstrate the existence of a unique Monte Carlo time scale that allows for a direct comparison between Monte Carlo and Brownian dynamics simulations. To estimate the validity of our theoretical approach, we compare the mean-square displacement of rods, their orientational autocorrelation function, and the self-intermediate scattering function, as obtained from Brownian dynamics and Monte Carlo simulations. The agreement between the results of these two approaches, even under the condition of heterogeneous dynamics generally observed in liquid crystalline phases, is excellent.

  16. Planck 2015 results. VI. LFI mapmaking

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chary, R.-R.; Christensen, P. R.; Colombi, S.; Colombo, L. P. L.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Novikov, D.; Novikov, I.; Paci, F.; Pagano, L.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Pierpaoli, E.; Pietrobon, D.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vassallo, T.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    This paper describes the mapmaking procedure applied to Planck Low Frequency Instrument (LFI) data. The mapmaking step takes as input the calibrated timelines and pointing information. The main products are sky maps of I, Q, and U Stokes components. For the first time, we present polarization maps at LFI frequencies. The mapmaking algorithm is based on a destriping technique, which is enhanced with a noise prior. The Galactic region is masked to reduce errors arising from bandpass mismatch and high signal gradients. We apply horn-uniform radiometer weights to reduce the effects of beam-shape mismatch. The algorithm is the same as used for the 2013 release, apart from small changes in parameter settings. We validate the procedure through simulations. Special emphasis is put on the control of systematics, which is particularly important for accurate polarization analysis. We also produce low-resolution versions of the maps and corresponding noise covariance matrices. These serve as input in later analysis steps and parameter estimation. The noise covariance matrices are validated through noise Monte Carlo simulations. The residual noise in the map products is characterized through analysis of half-ring maps, noise covariance matrices, and simulations.

  17. Extension of the quantum-kinetic model to lunar and Mars return physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liechty, D. S.; Lewis, M. J.

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high-mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. A recently introduced molecular-level chemistry model, the quantum-kinetic, or Q-K, model, which predicts reaction rates for gases in thermal equilibrium and non-equilibrium using only kinetic theory and fundamental molecular properties, is extended in the current work to include electronic energy level transitions and reactions involving charged particles. Like the Q-K procedures for neutral species chemical reactions, these new models are phenomenological procedures that aim to reproduce the reaction/transition rates but do not necessarily capture the exact physics. These engineering models are necessarily efficient due to the requirement to compute billions of simulated collisions in direct simulation Monte Carlo (DSMC) simulations. The new models are shown to generally agree, within the spread of reported transition and reaction rates from the literature, for near-equilibrium conditions.

  18. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
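
    As a flavor of the basic random-sampling topic the notes begin with, here is a minimal, illustrative inverse-transform sampler for an exponential free-flight distance; it is a generic textbook sketch, not material taken from the lecture notes.

        import math
        import random

        def sample_path_length(sigma_t, rng=random):
            """Inverse-transform sampling of an exponential free-flight distance.

            The density p(s) = sigma_t * exp(-sigma_t * s) has CDF
            F(s) = 1 - exp(-sigma_t * s), so s = -ln(1 - xi) / sigma_t
            for xi uniform on (0, 1).
            """
            xi = rng.random()
            return -math.log(1.0 - xi) / sigma_t

        # Check the sample mean against the analytic mean free path 1/sigma_t.
        sigma_t = 0.5  # macroscopic total cross section, 1/cm (illustrative)
        samples = [sample_path_length(sigma_t) for _ in range(100000)]
        print(sum(samples) / len(samples), "vs", 1.0 / sigma_t)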

  19. A Bayesian Multinomial Probit MODEL FOR THE ANALYSIS OF PANEL CHOICE DATA.

    PubMed

    Fong, Duncan K H; Kim, Sunghoon; Chen, Zhe; DeSarbo, Wayne S

    2016-03-01

    A new Bayesian multinomial probit model is proposed for the analysis of panel choice data. Using a parameter expansion technique, we are able to devise a Markov chain Monte Carlo algorithm that computes our Bayesian estimates efficiently. We also show that the proposed procedure enables the estimation of individual-level coefficients for the single-period multinomial probit model even when the available prior information is vague. We apply our new procedure to consumer purchase data and reanalyze a well-known scanner panel dataset, revealing new substantive insights. In addition, we delineate a number of advantageous features of our proposed procedure over several benchmark models. Finally, through a simulation analysis employing a fractional factorial design, we demonstrate that the results from our proposed model are quite robust with respect to differing factors across various conditions.

  20. Monte Carlo Simulation of Microscopic Stock Market Models

    NASA Astrophysics Data System (ADS)

    Stauffer, Dietrich

    Computer simulations with random numbers, that is, Monte Carlo methods, have been applied extensively in recent years to model the fluctuations of stock market or currency exchange rates. Here we concentrate on the percolation model of Cont and Bouchaud, to simulate, not to predict, the market behavior.
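
    A minimal sketch of the Cont-Bouchaud idea follows: traders occupy lattice sites with probability p, percolation clusters act as herds that buy or sell as a unit, and the return is proportional to the net demand. For brevity the clusters are formed once rather than regrown each step, and all parameter values are illustrative.

        import random
        from collections import deque

        def percolation_clusters(L, p, rng=random):
            """Label site-percolation clusters on an L x L lattice via BFS;
            return the list of cluster sizes."""
            occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
            label = [[-1] * L for _ in range(L)]
            sizes = []
            for i in range(L):
                for j in range(L):
                    if occ[i][j] and label[i][j] < 0:
                        size, q = 0, deque([(i, j)])
                        label[i][j] = len(sizes)
                        while q:
                            x, y = q.popleft()
                            size += 1
                            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                                nx, ny = x + dx, y + dy
                                if (0 <= nx < L and 0 <= ny < L
                                        and occ[nx][ny] and label[nx][ny] < 0):
                                    label[nx][ny] = len(sizes)
                                    q.append((nx, ny))
                        sizes.append(size)
            return sizes

        def market_step(clusters, activity=0.05, rng=random):
            """Each trader cluster buys (+1) or sells (-1) with probability
            `activity`; the return is proportional to the net demand."""
            demand = 0
            for size in clusters:
                r = rng.random()
                if r < activity:
                    demand += size
                elif r < 2 * activity:
                    demand -= size
            return demand

        clusters = percolation_clusters(L=50, p=0.55)
        returns = [market_step(clusters) for _ in range(1000)]
        print("sample return series head:", returns[:5])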

  1. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware, as a proof of principle.
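
    The cost observation can be reproduced with a toy billing model in which each of n machines is billed per started hour; the billing scheme and the 12 h job below are hypothetical choices for illustration, not parameters from the paper.

        import math

        def relative_cost(total_hours, n):
            """Cost of a total_hours job on n machines billed per started hour."""
            wall_time = total_hours / n          # completion time scales as 1/n
            return n * math.ceil(wall_time)      # each machine bills whole hours

        total = 12  # total serial simulation time in hours (illustrative)
        for n in (3, 4, 5, 6, 8, 12):
            print(n, relative_cost(total, n))
        # Cost is lowest (equal to `total`) whenever n divides the total hours;
        # otherwise the final partial hour on every machine is billed in full.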

  2. Gray: a ray tracing-based Monte Carlo simulator for PET.

    PubMed

    Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S

    2018-05-21

    Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.

  3. Reconstruction of organ dose for external radiotherapy patients in retrospective epidemiologic studies

    NASA Astrophysics Data System (ADS)

    Lee, Choonik; Jung, Jae Won; Pelletier, Christopher; Pyakuryal, Anil; Lamart, Stephanie; Kim, Jong Oh; Lee, Choonsik

    2015-03-01

    Organ dose estimation for retrospective epidemiological studies of late effects in radiotherapy patients involves two challenges: radiological images to represent patient anatomy are not usually available for patient cohorts treated years ago, and efficient dose reconstruction methods for large-scale patient cohorts are not well established. In the current study, we developed methods to reconstruct organ doses for radiotherapy patients by using a series of computational human phantoms coupled with a commercial treatment planning system (TPS) and a radiotherapy-dedicated Monte Carlo transport code, and performed illustrative dose calculations. First, we developed methods to convert the anatomy and organ contours of the pediatric and adult hybrid computational phantom series to Digital Imaging and Communications in Medicine (DICOM)-image and DICOM-structure files, respectively. The resulting DICOM files were imported into a commercial TPS for simulating radiotherapy and calculating dose to in-field organs. The conversion process was validated by comparing electron densities relative to water and organ volumes between the hybrid phantoms and the DICOM files imported in the TPS, which showed agreements within 0.1 and 2%, respectively. Second, we developed a procedure to transfer DICOM-RT files generated from the TPS directly to a Monte Carlo transport code, x-ray Voxel Monte Carlo (XVMC), for more accurate dose calculations. Third, to illustrate the performance of the established methods, we simulated a whole-brain treatment for the 10-year-old male phantom and a prostate treatment for the adult male phantom. Radiation doses to selected organs were calculated using the TPS and XVMC, and compared to each other. Organ average doses from the two methods matched within 7%, whereas maximum and minimum point doses differed by up to 45%. The dosimetry methods and procedures established in this study will be useful for the reconstruction of organ dose to support retrospective epidemiological studies of late effects in radiotherapy patients.

  4. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    PubMed

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing the analysis of complicated geometrical structures. Monte Carlo simulations are, however, time-consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithm and trigonometric functions as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight, and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm allows the Monte Carlo simulations to be sped up by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
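
    The flavor of the optimization can be illustrated with a range-reduced polynomial approximation of the natural logarithm, the function used when sampling photon step lengths. The cubic coefficients below come from a generic least-squares fit on the mantissa interval and are not those of the paper; accuracy in relative terms degrades where ln(x) is near zero.

        import math

        # Cubic fit of ln(m) on [1, 2) -- illustrative coefficients only.
        C = (-1.49278, 2.11263, -0.729104, 0.10969)

        def fast_log(x):
            """Approximate ln(x) via range reduction: x = m * 2**e."""
            m, e = math.frexp(x)      # x = m * 2**e, with 0.5 <= m < 1
            m *= 2.0                  # shift mantissa into [1, 2)
            e -= 1
            poly = C[0] + m * (C[1] + m * (C[2] + m * C[3]))
            return poly + e * math.log(2.0)

        # Relative error check over inputs typical of random-number streams.
        worst = max(abs(fast_log(x) - math.log(x)) / abs(math.log(x))
                    for x in (0.001, 0.01, 0.1, 0.3, 0.7, 0.99))
        print("worst relative error:", worst)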

  5. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    PubMed

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10⁷ primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.

  6. Building Process Improvement Business Cases Using Bayesian Belief Networks and Monte Carlo Simulation

    DTIC Science & Technology

    2009-07-01

    simulation. The pilot described in this paper used this two-step approach within a Define, Measure, Analyze, Improve, and Control (DMAIC) framework to… Keywords: Bayesian belief networks, BBN, Monte Carlo simulation, DMAIC, Six Sigma, business case.

  7. Simulation of a complete X-ray digital radiographic system for industrial applications.

    PubMed

    Nazemi, E; Rokrok, B; Movafeghi, A; Choopan Dastjerdi, M H

    2018-05-19

    Simulating X-ray images is of great importance in industry and medicine. Such simulation permits the optimization of parameters that affect image quality without the limitations of an experimental procedure. This study presents a novel methodology to simulate a complete industrial X-ray digital radiographic system, composed of an X-ray tube and a computed radiography (CR) image plate, using the Monte Carlo N-Particle eXtended (MCNPX) code. An industrial X-ray tube with a maximum voltage of 300 kV and a current of 5 mA was simulated. A 3-layer uniform plate comprising a polymer overcoat layer, a phosphor layer, and a polycarbonate backing layer was also defined and simulated as the CR imaging plate. To model image formation in the image plate, the absorbed dose was first calculated in each pixel inside the phosphor layer of the CR imaging plate using the mesh tally in the MCNPX code and then converted to a gray value using a mathematical relationship determined in a separate procedure. To validate the simulation results, an experimental setup was designed, and images of two step wedges made of aluminum and steel were acquired experimentally and compared with the simulations. The results show that the simulated images are in good agreement with the experimental ones, demonstrating the ability of the proposed methodology to simulate an industrial X-ray imaging system. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Discrete Fractional Component Monte Carlo Simulation Study of Dilute Nonionic Surfactants at the Air-Water Interface.

    PubMed

    Yoo, Brian; Marin-Rimoldi, Eliseo; Mullen, Ryan Gotchy; Jusufi, Arben; Maginn, Edward J

    2017-09-26

    We present a newly developed Monte Carlo scheme to predict bulk surfactant concentrations and surface tensions at the air-water interface for various surfactant interfacial coverages. Since the concentration regimes of these systems of interest are typically very dilute (≪10⁻⁵ mole fraction), Monte Carlo simulations with the use of insertion/deletion moves can provide the ability to overcome finite system size limitations that often prohibit the use of modern molecular simulation techniques. In performing these simulations, we use the discrete fractional component Monte Carlo (DFCMC) method in the Gibbs ensemble framework, which allows us to separate the bulk and air-water interface into two separate boxes and efficiently swap tetraethylene glycol surfactants (C₁₀E₄) between boxes. Combining this move with preferential translations, volume biased insertions, and Wang-Landau biasing vastly enhances sampling and helps overcome the classical "insertion problem", often encountered in non-lattice Monte Carlo simulations. We demonstrate that this methodology is both consistent with the original molecular thermodynamic theory (MTT) of Blankschtein and co-workers, as well as their recently modified theory (MD/MTT), which incorporates the results of surfactant infinite dilution transfer free energies and surface tension calculations obtained from molecular dynamics simulations.

  9. Free energy and phase equilibria for the restricted primitive model of ionic fluids from Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.

    1994-07-01

    In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T*_c = 0.053 and ρ*_c = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.

  10. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    PubMed Central

    Pratx, Guillem; Xing, Lei

    2011-01-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
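
    The map/reduce split described here can be sketched locally with Python's multiprocessing standing in for Hadoop; the toy absorb-or-escape photon history below uses made-up probabilities and is not the MC321 physics.

        import random
        from multiprocessing import Pool

        def map_photons(args):
            """Map task: simulate a batch of photon histories, return a tally."""
            n_photons, seed = args
            rng = random.Random(seed)            # independent stream per task
            p_absorb, p_escape = 0.01, 0.05      # illustrative probabilities
            absorbed = 0
            for _ in range(n_photons):
                while True:
                    if rng.random() < p_absorb:  # photon absorbed: score it
                        absorbed += 1
                        break
                    if rng.random() < p_escape:  # photon leaves the medium
                        break
            return absorbed

        if __name__ == "__main__":
            tasks = [(25000, seed) for seed in range(8)]
            with Pool(processes=8) as pool:
                partials = pool.map(map_photons, tasks)  # map phase, parallel
            print("absorbed photons:", sum(partials))    # reduce phase: sum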

  11. Reconstruction of Orion Engineering Development Unit (EDU) Parachute Inflation Loads

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2013-01-01

    The process of reconstructing inflation loads for the Capsule Parachute Assembly System (CPAS) has been updated as the program transitioned to testing Engineering Development Unit (EDU) hardware. The equations used to reduce the test data have been re-derived based on the same physical assumptions made by the simulations. Due to instrumentation challenges, individual parachute loads are determined from complementary accelerometer and load cell measurements. Cluster inflations are now simulated by modeling each parachute individually to better represent different inflation times and non-synchronous disreefing. The reconstruction procedure is tailored to either infinite-mass or finite-mass events based on measurable characteristics of the test data. Inflation parameters are determined from an automated optimization routine to reduce subjectivity. Infinite-mass inflation parameters have been re-defined to avoid unrealistic interactions in Monte Carlo simulations. Sample cases demonstrate how best-fit inflation parameters are used to generate simulated drag areas and loads which agree favorably with test data.

  12. Diffusion of a Concentrated Lattice Gas in a Regular Comb Structure

    NASA Astrophysics Data System (ADS)

    Garcia, Paul; Wentworth, Christopher

    2008-10-01

    Understanding diffusion in constrained geometries is of interest in contexts as varied as mass transport in disordered solids, such as a percolation cluster, and intercellular transport of water molecules in biological tissue. In this investigation we explore diffusion in a very simple constrained geometry: a comb-like structure consisting of a one-dimensional backbone of lattice sites with regularly spaced teeth of fixed length. The model assumes a fixed concentration of diffusing particles that can hop to nearest-neighbor sites only and do not interact with each other, except that double occupancy is not allowed. The system is simulated using a Monte Carlo procedure, and the mean-square displacement of a tagged particle is calculated from the simulation as a function of time. The simulation shows normal diffusive behavior after a period of anomalous diffusion that lengthens as the tooth size increases.
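
    A single tagged walker on the comb (dropping the finite-concentration exclusion for brevity) already shows the anomalous-then-normal crossover; the sketch below is illustrative and all parameters are arbitrary.

        import random

        def comb_walk(n_steps, tooth_len, rng=random):
            """Random walk on a comb: a 1D backbone (y == 0) with a tooth of
            length tooth_len attached at every backbone site. Returns x(t)."""
            x, y, xs = 0, 0, []
            for _ in range(n_steps):
                if y == 0:
                    moves = [(1, 0), (-1, 0), (0, 1)]  # backbone hop or enter tooth
                elif y < tooth_len:
                    moves = [(0, -1), (0, 1)]          # up or down inside tooth
                else:
                    moves = [(0, -1)]                  # tooth tip: turn back
                dx, dy = rng.choice(moves)
                x, y = x + dx, y + dy
                xs.append(x)
            return xs

        # Mean-square displacement along the backbone, averaged over walkers.
        n_walkers, n_steps, tooth_len = 500, 2000, 20
        msd = [0.0] * n_steps
        for _ in range(n_walkers):
            xs = comb_walk(n_steps, tooth_len)
            for t, xt in enumerate(xs):
                msd[t] += xt * xt / n_walkers
        print("MSD at t = 500, 1000, 2000:", msd[499], msd[999], msd[-1])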

  13. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice. PMID:26962543

  14. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276

  15. A dental public health approach based on computational mathematics: Monte Carlo simulation of childhood dental decay.

    PubMed

    Tennant, Marc; Kruger, Estie

    2013-02-01

    This study developed a Monte Carlo simulation approach to examining the prevalence and incidence of dental decay using Australian children as a test environment. Monte Carlo simulation has been used for half a century in particle physics (and elsewhere); put simply, population-level outcome probabilities are seeded randomly to drive the production of individual-level data. A total of five runs of the simulation model for all 275,000 12-year-olds in Australia were completed based on 2005-2006 data. Measured on average decayed/missing/filled teeth (DMFT) and the DMFT of the highest 10% of the sample (SiC10), the runs did not differ from each other by more than 2%, and the outcome was within 5% of the reported sampled population data. The simulations rested on the population probabilities that are known to be strongly linked to dental decay, namely socio-economic status and Indigenous heritage. Testing the simulated population found that the DMFT of all cases with DMFT ≠ 0 was 2.3 (n = 128,609) and the DMFT for Indigenous cases only was 1.9 (n = 13,749). In the simulated population the SiC25 was 3.3 (n = 68,750). Monte Carlo simulations were created in particle physics as a computational mathematical approach to unknown individual-level effects by resting a simulation on known population-level probabilities. In this study a Monte Carlo simulation approach to childhood dental decay was built, tested and validated. © 2013 FDI World Dental Federation.
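
    The seeding idea (known population-level probabilities driving individual-level draws) can be sketched as follows; the strata, their weights, and the mean DMFT values are invented placeholders, not Australian data.

        import math
        import random

        # Illustrative population strata -- placeholder shares and mean DMFT.
        STRATA = {
            "low_ses_indigenous": (0.02, 2.4),
            "low_ses":            (0.28, 1.6),
            "mid_ses":            (0.50, 1.0),
            "high_ses":           (0.20, 0.6),
        }

        def sample_poisson(lam, rng):
            """Knuth's algorithm for a Poisson draw (fine for small lambda)."""
            limit, k, p = math.exp(-lam), 0, 1.0
            while True:
                p *= rng.random()
                if p <= limit:
                    return k
                k += 1

        def simulate_population(n, rng=random):
            names = list(STRATA)
            weights = [STRATA[s][0] for s in names]
            dmft = []
            for _ in range(n):
                stratum = rng.choices(names, weights=weights)[0]
                dmft.append(sample_poisson(STRATA[stratum][1], rng))
            return dmft

        dmft = simulate_population(27500)  # scaled-down cohort
        nonzero = [d for d in dmft if d > 0]
        print("mean DMFT:", round(sum(dmft) / len(dmft), 2),
              "| mean DMFT where DMFT > 0:",
              round(sum(nonzero) / len(nonzero), 2))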

  16. CloudMC: a cloud computing application for Monte Carlo simulation.

    PubMed

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application, developed in Windows Azure®, the Microsoft® cloud platform, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
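
    The Amdahl's-law behaviour reported here is easy to check numerically; the serial fractions tried below are illustrative guesses, not fitted values from the paper.

        def amdahl_speedup(n, parallel_fraction):
            """Ideal Amdahl's-law speedup on n instances."""
            p = parallel_fraction
            return 1.0 / ((1.0 - p) + p / n)

        # Shape of the reported result: 30 h of CPU on one instance finished
        # in 48.6 min on 64 instances, i.e. an observed speedup of about 37x.
        observed = 30 * 60 / 48.6
        print("observed speedup:", round(observed, 1))
        for p in (0.98, 0.985, 0.99):
            print(p, "->", round(amdahl_speedup(64, p), 1))
        # A parallel fraction near 0.99 reproduces the observed speedup,
        # consistent with a small non-parallelizable share of the runtime.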

  17. Data re-arranging techniques leading to proper variable selections in high energy physics

    NASA Astrophysics Data System (ADS)

    Kůs, Václav; Bouř, Petr

    2017-12-01

    We introduce a new data-based approach to homogeneity testing and variable selection carried out in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, mainly Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique, called 'data re-arranging', enables variable selection by means of classical statistical homogeneity tests such as the Kolmogorov-Smirnov, Anderson-Darling, or Pearson's chi-square divergence test. P-values of our variants of the homogeneity tests are investigated, and empirical verification on 46-dimensional high energy particle physics data sets is accomplished under a newly proposed (equiprobable) quantile binning. In particular, the procedure of homogeneity testing is applied to re-arranged Monte Carlo samples and real data sets measured at the Tevatron accelerator at Fermilab in the DØ experiment, originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for the Tevatron top-antitop quark data sets.

  18. Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program

    ERIC Educational Resources Information Center

    Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.

    2004-01-01

    The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na⁺, Cl⁻, and Ar on a personal computer to show that it is easily feasible to…

  19. Novel low-kVp beamlet system for choroidal melanoma

    PubMed Central

    Esquivel, Carlos; Fuller, Clifton D; Waggener, Robert G; Wong, Adrian; Meltz, Martin; Blough, Melissa; Eng, Tony Y; Thomas, Charles R

    2006-01-01

    Background Treatment of choroidal melanoma with radiation often involves placement of customized brachytherapy eye-plaques. However, the dosimetric properties inherent in source-based radiotherapy preclude facile dose optimization to critical ocular structures. Consequently, we have constructed a novel system for small-beam low-energy radiation delivery, the Beamlet Low-kVp X-ray, or "BLOKX", system. This technique relies on an isocentric rotational approach to deliver dose to target volumes within the eye while potentially sparing normal structures. Methods Monte Carlo N-Particle (MCNP) transport code version 5.0 was used to simulate photon interactions with normal and tumor tissues within modeled right-eye phantoms. Five modeled dome-shaped tumors with a diameter and apical height of 8 mm and 6 mm, respectively, were simulated iteratively at distinct positions with respect to the macula. A single fixed 9 × 9 mm² beamlet, and a comparison COMS protocol plaque containing eight I-125 seeds (apparent activity of 8 mCi) placed on the scleral surface of the eye adjacent to the tumor, were utilized to determine dosimetric parameters at the tumor and adjacent tissues. After MCNP simulation, the dose distributions at each of the 5 tumor positions were compared for the two modalities (BLOKX vs. eye-plaque). Results Tumor-base doses ranged from 87.1 to 102.8 Gy for the BLOKX procedure, and from 335.3 to 338.6 Gy for the eye-plaque procedure. A reduction in dose to the tumor base of at least 69% was noted when using the BLOKX. The BLOKX technique showed a significant reduction in dose to the macula, 89.8%, compared to the episcleral plaque. A minimum 71.0% decrease in dose to the optic nerve occurred when the BLOKX was used. Conclusion The BLOKX technique allows a more favorable dose distribution in comparison to standard COMS brachytherapy, as simulated using iterative Monte Carlo mathematical modeling. Future series to determine the clinical utility of such an approach are warranted. PMID:16965624

  1. Evaluation of the UF/NCI hybrid computational phantoms for use in organ dosimetry of pediatric patients undergoing fluoroscopically guided cardiac procedures

    NASA Astrophysics Data System (ADS)

    Marshall, Emily L.; Borrego, David; Tran, Trung; Fudge, James C.; Bolch, Wesley E.

    2018-03-01

    Epidemiologic data demonstrate that pediatric patients face a higher relative risk of radiation-induced cancers than their adult counterparts at equivalent exposures. Infants and children with congenital heart defects are a critical patient population exposed to ionizing radiation during life-saving procedures. These patients will likely undergo numerous procedures throughout their lifespan, each time increasing their cumulative radiation absorbed dose. As continued improvement in the long-term prognosis of congenital heart defect patients is achieved, a better understanding of organ radiation dose following treatment becomes increasingly vital. Dosimetry for these patients can be accomplished using Monte Carlo radiation transport simulations coupled with modern anatomical patient models. The aim of this study was to evaluate the performance of the University of Florida/National Cancer Institute (UF/NCI) pediatric hybrid computational phantom library for organ dose assessment of patients that have undergone fluoroscopically guided cardiac catheterizations. In this study, two types of simulations were modeled. A dose assessment was performed on 29 patient-specific voxel phantoms (taken as representing the patient's true anatomy), height/weight-matched hybrid library phantoms, and age-matched reference phantoms. Two exposure studies were conducted for each phantom type. First, a parametric study was constructed by the attending pediatric interventional cardiologist at the University of Florida to model the range of parameters seen clinically. Second, four clinical cardiac procedures were simulated based upon internal logfiles captured by a Toshiba Infinix-i Cardiac Bi-Plane fluoroscopic unit. Performance of the phantom library was quantified by computing both the percent difference in individual organ doses and the organ dose root mean square values for overall phantom assessment between the matched phantoms (UF/NCI library or reference) and the patient-specific phantoms. The UF/NCI hybrid phantoms performed with percent differences between 15% and 30% for the parametric set of irradiation events. For the internal logfile-reconstructed procedures, the UF/NCI hybrid phantoms performed with RMS organ dose differences between 7% and 29%. The percent improvement in organ dosimetry from using the hybrid library phantoms rather than the reference phantoms ranged from 6.6% to 93%. The use of a hybrid phantom library, Monte Carlo radiation transport methods, and clinical information on irradiation events provides a means for tracking organ dose in these radiosensitive patients undergoing fluoroscopically guided cardiac procedures. This work was supported by Advanced Laboratory for Radiation Dosimetry Studies, University of Florida, American Association of University Women, National Cancer Institute Grant 1F31 CA159464.

  2. T-Opt: A 3D Monte Carlo simulation for light delivery design in photodynamic therapy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Honda, Norihiro; Hazama, Hisanao; Awazu, Kunio

    2017-02-01

    The interstitial photodynamic therapy (iPDT) with 5-aminolevulinic acid (5-ALA) is a safe and feasible treatment modality for malignant glioblastoma. In order to cover the tumour volume, the exact positions of the light diffusers within the lesion must be decided precisely. The aim of this study is the development of an evaluation method for the treatment volume in iPDT with 5-ALA, based on 3D Monte Carlo simulation. Monte Carlo simulations of the fluence rate were performed using the optical properties of normal brain tissue and of brain tissue infiltrated by tumor cells. The 3D Monte Carlo simulation was used to calculate light transport and the placement of the light diffusers within the lesion. The fluence rate near a diffuser was maximal and decreased exponentially with distance. The simulation can also calculate the amount of singlet oxygen generated by PDT. In order to increase the accuracy of the simulation results, the simulation parameters include the quantum yield of singlet oxygen generation, the accumulated concentration of photosensitizer within the tissue, the fluence rate, and the molar extinction coefficient at the excitation wavelength. The simulation is useful for evaluating the treatment region in iPDT with 5-ALA.

  3. Monte Carlo Simulation of a Segmented Detector for Low-Energy Electron Antineutrinos

    NASA Astrophysics Data System (ADS)

    Qomi, H. Akhtari; Safari, M. J.; Davani, F. Abbasi

    2017-11-01

    Detection of low-energy electron antineutrinos is important for several purposes, such as ex-vessel reactor monitoring, neutrino oscillation studies, etc. Inverse beta decay (IBD) is the interaction responsible for the detection mechanism in (organic) plastic scintillation detectors. Here, a detailed study is presented of the radiation and optical transport simulation of a typical segmented antineutrino detector with the Monte Carlo method, using the MCNPX and FLUKA codes. This study shows different aspects of the detector, benefiting from the inherent capabilities of the Monte Carlo simulation codes.

  4. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment, based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  5. A Dasymetric-Based Monte Carlo Simulation Approach to the Probabilistic Analysis of Spatial Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Piburn, Jesse O; McManamay, Ryan A

    2017-01-01

    Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.

  6. Statistical differences between relative quantitative molecular fingerprints from microbial communities.

    PubMed

    Portillo, M C; Gonzalez, J M

    2008-08-01

    Molecular fingerprints of microbial communities are a common method for the analysis and comparison of environmental samples. The significance of differences between microbial community fingerprints was analyzed considering the presence of different phylotypes and their relative abundance. A method is proposed that simulates coverage of the analyzed communities as a function of sampling size and applies a Cramér-von Mises statistic; comparisons were performed by a Monte Carlo testing procedure. As an example, this procedure was used to compare several sediment samples from freshwater ponds using a relative quantitative PCR-DGGE profiling technique. The method was able to discriminate among different samples based on their molecular fingerprints, and confirmed the lack of differences between aliquots from a single sample.
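
    A minimal Monte Carlo testing sketch in that spirit follows: a Cramér-von Mises-type statistic over two relative-abundance profiles, with a p-value from random band-wise swaps. The profiles and the permutation scheme are illustrative placeholders, not the authors' exact procedure.

        import random

        def cvm_statistic(a, b):
            """Cramér-von Mises-type distance between two relative-abundance
            profiles over the same set of bands/phylotypes."""
            ca = cb = stat = 0.0
            for x, y in zip(a, b):
                ca += x
                cb += y
                stat += (ca - cb) ** 2
            return stat

        def mc_test(a, b, n_perm=9999, rng=random):
            """Monte Carlo p-value: swap band abundances between the two
            profiles at random and count statistics at least as extreme."""
            observed = cvm_statistic(a, b)
            hits = 0
            for _ in range(n_perm):
                pa, pb = [], []
                for x, y in zip(a, b):
                    if rng.random() < 0.5:
                        pa.append(x); pb.append(y)
                    else:
                        pa.append(y); pb.append(x)
                if cvm_statistic(pa, pb) >= observed:
                    hits += 1
            return (hits + 1) / (n_perm + 1)

        sample1 = [0.30, 0.25, 0.20, 0.15, 0.10, 0.00]
        sample2 = [0.05, 0.10, 0.20, 0.25, 0.25, 0.15]
        print("Monte Carlo p-value:", mc_test(sample1, sample2))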

  7. Coupling of Multiple Coulomb Scattering with Energy Loss and Straggling in HZETRN

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Wilson, John W.; Walker, Steven A.; Tweed, John

    2007-01-01

    The new version of the HZETRN deterministic transport code, based on Green's function methods and incorporating ground-based laboratory boundary conditions, has led to the development of analytical and numerical procedures for including the off-axis dispersion of primary ion beams due to small-angle multiple Coulomb scattering. In this paper we present the theoretical formulation and computational procedures for computing ion beam broadening, and a methodology towards achieving a self-consistent approach to coupling multiple scattering interactions with ionization energy loss and straggling. Our initial benchmark case is a 60 MeV proton beam on muscle tissue, for which we can compare various attributes of beam broadening with Monte Carlo simulations reported in the open literature.

  8. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. In parallel Monte Carlo, both domain-replicated and domain-decomposed simulations will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy, by rounding double precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time step.
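
    The core problem and the integer-tally remedy can be shown in a few lines; this is an illustrative toy, not the paper's code, which also covers extended-precision variants.

        import random

        rng = random.Random(1)
        values = [rng.uniform(1e-8, 1.0) for _ in range(100000)]

        # Double precision addition is not associative: order matters.
        forward = sum(values)
        backward = sum(reversed(values))
        print("forward - backward =", forward - backward)  # typically non-zero

        # Integer-tally approach: round each contribution to a fixed grid
        # first. All summation orders then give bit-identical results, at
        # the cost of the rounding error introduced by the grid resolution.
        SCALE = 2 ** 40
        tallies = [round(v * SCALE) for v in values]
        print(sum(tallies) == sum(reversed(tallies)))  # True: integer adds commute
        print("rounded sum:", sum(tallies) / SCALE)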

  9. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    NASA Astrophysics Data System (ADS)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently consumes the largest share of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large-scale Monte Carlo production in the ATLAS experiment for Run 2 and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To benefit optimally from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  10. Monte Carlo simulations of time-of-flight PET with double-ended readout: calibration, coincidence resolving times and statistical lower bounds

    NASA Astrophysics Data System (ADS)

    Derenzo, Stephen E.

    2017-05-01

    This paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Another Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm³ scintillators: Lu2SiO5:Ce,Ca (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. The examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values.
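
    A heavily simplified version of such a timing simulation is sketched below: each detector registers photoelectrons whose times are scintillation decay (exponential) plus Gaussian photodetector jitter, and the event time is taken from the k-th earliest photoelectron as a crude stand-in for fixed-level triggering. The depth-of-interaction physics, optical transit times, calibration corrections and maximum-likelihood estimator of the paper are all omitted, so this toy will not reproduce the quoted CRT values; the "LSO-like" numbers are taken loosely from the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

def trigger_times(n_events, n_pe, tau_ns, jitter_fwhm_ns, k=3):
    """Trigger time per event: k-th earliest photoelectron arrival."""
    sigma = jitter_fwhm_ns / 2.355                     # FWHM -> sigma
    t = rng.exponential(tau_ns, (n_events, n_pe))      # scintillation decay
    t += rng.normal(0.0, sigma, (n_events, n_pe))      # photodetector jitter
    return np.sort(t, axis=1)[:, k - 1]

n_events = 2000
# LSO-like: 4000 photoelectrons, ~40 ns decay, J = 0.2 ns FWHM jitter
t1 = trigger_times(n_events, 4000, 40.0, 0.2)
t2 = trigger_times(n_events, 4000, 40.0, 0.2)

crt_fwhm = 2.355 * (t1 - t2).std()   # Gaussian-equivalent FWHM of t1 - t2
print(f"CRT of this crude estimator: {crt_fwhm * 1000:.0f} ps FWHM")
```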

  11. Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT

    PubMed Central

    Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose: In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures: This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE for simulating projections of a hot sphere phantom. Results: We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1 : 2.73 : 3.54 : 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions: The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896

  12. Generating survival times to simulate Cox proportional hazards models with time-varying covariates.

    PubMed

    Austin, Peter C

    2012-12-20

    Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of these closed-form expressions by using Monte Carlo simulations to estimate the statistical power to detect, as statistically significant, the effect of different types of binary time-varying covariates; this is compared with the statistical power to detect a binary time-invariant covariate. Copyright © 2012 John Wiley & Sons, Ltd.
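
    For the first covariate type, the closed-form inversion is short enough to sketch. Assuming an exponential baseline hazard lambda0 and a covariate that switches from 0 to 1 at time t0, the cumulative hazard is piecewise linear and can be inverted at a unit-exponential draw; the parameter values below are illustrative, not from the paper.

```python
import numpy as np

# h(t) = lambda0 * exp(beta * x(t)),  x(t) = 1{t >= t0}  (e.g., transplant).
# Invert the cumulative hazard H at E ~ Exp(1):
#   H(t) = lambda0 * t                                  for t <  t0
#   H(t) = lambda0 * t0 + lambda0*exp(beta)*(t - t0)    for t >= t0
rng = np.random.default_rng(0)

def draw_survival_time(lambda0, beta, t0):
    e = rng.exponential(1.0)
    if e < lambda0 * t0:                 # event occurs before the switch
        return e / lambda0
    return t0 + (e - lambda0 * t0) / (lambda0 * np.exp(beta))

lambda0, beta = 0.1, np.log(2.0)         # hazard doubles after treatment
t0s = rng.uniform(1.0, 10.0, 10_000)     # subject-specific switch times
times = np.array([draw_survival_time(lambda0, beta, t0) for t0 in t0s])

print(f"mean survival time: {times.mean():.2f}")
print(f"events after the switch: {(times > t0s).mean():.1%}")
```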

  13. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  14. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM, implemented on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  15. A Bayesian nonparametric approach to dynamical noise reduction

    NASA Astrophysics Data System (ADS)

    Kaloudis, Konstantinos; Hatjispyros, Spyridon J.

    2018-06-01

    We propose a Bayesian nonparametric approach for the noise reduction of a given chaotic time series contaminated by dynamical noise, based on Markov Chain Monte Carlo methods. The underlying unknown noise process possibly exhibits heavy-tailed behavior. We introduce the Dynamic Noise Reduction Replicator model, with which we reconstruct the unknown dynamic equations and, in parallel, replicate the dynamics under dynamical perturbations of reduced noise level. The dynamic noise reduction procedure is demonstrated specifically in the case of polynomial maps. Simulations based on synthetic time series are presented.

  16. An Overview of Importance Splitting for Rare Event Simulation

    ERIC Educational Resources Information Center

    Morio, Jerome; Pastel, Rudy; Le Gland, Francois

    2010-01-01

    Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…
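
    A minimal importance-sampling example shows both the idea and the variance gain the abstract alludes to: for a Gaussian tail probability, a mean-shifted proposal places samples where the rare event lives and reweights them by the density ratio. The target probability and proposal shift here are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm

# Estimate p = P(Z > 4) for Z ~ N(0, 1).
rng = np.random.default_rng(3)
n = 100_000
p_true = norm.sf(4.0)                    # ~3.17e-5

z = rng.normal(size=n)                   # plain Monte Carlo: tail rarely hit
p_mc = (z > 4.0).mean()

x = rng.normal(4.0, 1.0, size=n)         # proposal centred on the rare event
w = np.exp(-4.0 * x + 8.0)               # weight = phi(x) / phi(x - 4)
p_is = np.mean((x > 4.0) * w)

print(f"true     : {p_true:.3e}")
print(f"plain MC : {p_mc:.3e}")
print(f"IS       : {p_is:.3e} +/- {np.std((x > 4.0) * w) / np.sqrt(n):.1e}")
```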

  17. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    PubMed

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speedup was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  18. A smart Monte Carlo procedure for production costing and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, C.; Stremel, J.

    1996-11-01

    Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party of the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost-plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
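
    The variance advantage of Latin Hypercube sampling over brute-force Monte Carlo, the middle rung of the three methods compared above, is easy to demonstrate on a toy outage-cost model; the sketch below uses scipy's qmc module and invented per-unit costs and outage rates, not the paper's production-costing model.

```python
import numpy as np
from scipy.stats import qmc

# Toy "production cost" driven by 5 independent unit-outage draws: a unit
# is on outage when its uniform draw falls below 10%, forcing replacement
# power at three times its cost (an invented rule for illustration).
unit_cost = np.array([10.0, 14.0, 7.0, 22.0, 18.0])

def cost(u):
    return (unit_cost * np.where(u < 0.10, 3.0, 1.0)).sum(axis=1)

rng = np.random.default_rng(11)
d, n, reps = 5, 64, 500
est_mc, est_lhs = [], []
for i in range(reps):
    est_mc.append(cost(rng.random((n, d))).mean())          # brute force
    sampler = qmc.LatinHypercube(d=d, seed=1000 + i)        # stratified
    est_lhs.append(cost(sampler.random(n)).mean())

print(f"brute-force MC estimator std : {np.std(est_mc):.4f}")
print(f"Latin Hypercube estimator std: {np.std(est_lhs):.4f}")
```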

  19. Correction of beam-beam effects in luminosity measurement in the forward region at CLIC

    NASA Astrophysics Data System (ADS)

    Lukić, S.; Božović-Jelisavčić, I.; Pandurović, M.; Smiljanić, I.

    2013-05-01

    Procedures for correcting the beam-beam effects in luminosity measurements at CLIC at 3 TeV center-of-mass energy are described and tested using Monte Carlo simulations. The angular counting loss due to the combined Beamstrahlung and initial-state radiation effects is corrected based on the reconstructed velocity of the collision frame of the Bhabha scattering. The distortion of the luminosity spectrum due to the initial-state radiation is corrected by deconvolution. Finally, the counting bias due to the finite calorimeter energy resolution is numerically corrected. To test the procedures, the BHLUMI Bhabha event generator and the Guinea-Pig beam-beam simulation were used to generate the outgoing momenta of Bhabha particles in the bunch collisions at CLIC. The systematic effects of the beam-beam interaction on the luminosity measurement are corrected with a precision of 1.4 per mille in the upper 5% of the energy, and 2.7 per mille in the range between 80 and 90% of the nominal center-of-mass energy.

  20. A modified Monte Carlo model for the ionospheric heating rates

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.

    1972-01-01

    A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances apart corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure, their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo-generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs, the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.

  1. The Multiple-Minima Problem in Protein Folding

    NASA Astrophysics Data System (ADS)

    Scheraga, Harold A.

    1991-10-01

    The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) the build-up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) the diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
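
    Procedure (c), Monte Carlo-plus-energy minimization, reduces to a simple loop on a toy energy surface: propose a random jump, relax to the nearest local minimum, and Metropolis-accept on the minimized energies. The one-dimensional function and parameters below are a cartoon chosen to have many local minima, not a protein force field.

```python
import numpy as np
from scipy.optimize import minimize

def energy(x):
    return 0.1 * x**2 + np.sin(3.0 * x)     # many local minima

rng = np.random.default_rng(5)
kT = 0.3
x = minimize(energy, 8.0).x[0]              # start in some local minimum
e = energy(x)
best_x, best_e = x, e

for _ in range(200):
    trial = x + rng.normal(0.0, 2.0)        # random "conformational" jump
    trial = minimize(energy, trial).x[0]    # local energy minimization
    e_trial = energy(trial)
    # Metropolis criterion applied to the minimized energies
    if e_trial <= e or rng.random() < np.exp(-(e_trial - e) / kT):
        x, e = trial, e_trial
        if e < best_e:
            best_x, best_e = x, e

print(f"lowest minimum found: x = {best_x:.3f}, E = {best_e:.3f}")
```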

  2. Latin hypercube sampling and geostatistical modeling of spatial uncertainty in a spatially explicit forest landscape model simulation

    Treesearch

    Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu

    2005-01-01

    Geostatistical stochastic simulation is commonly combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is often infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...

  3. Monte Carlo estimation of radiation dose in organs of female and male adult phantoms due to FDG-F18 absorbed in the lungs

    NASA Astrophysics Data System (ADS)

    Belinato, Walmir; Santos, William S.; Silva, Rogério M. V.; Souza, Divanizia N.

    2014-03-01

    Dose conversion factors (S values) for the radiopharmaceutical fluorodeoxyglucose (18F-FDG) absorbed in the lungs during a positron emission tomography (PET) procedure were calculated using the Monte Carlo method (MCNPX version 2.7.0). For the dose conversion factors of interest, a uniform absorption of the radiopharmaceutical by the lung of a healthy adult human was assumed. The 18F emission spectrum was introduced in the input data file for the simulation. The simulation was performed in two adult phantoms, one of each sex, based on polygon mesh surfaces called FASH and MASH with anatomy and posture according to ICRP 89. The S values for the 22 internal organs/tissues, chosen from ICRP No. 110, for the FASH and MASH phantoms were compared with the results obtained from the MIRD-V phantoms called ADAM and EVA used by the Committee on Medical Internal Radiation Dose (MIRD). We observed variations of more than 100% in S values due to structural anatomical differences in the internal organs of the MASH and FASH phantoms compared to the mathematical phantoms.

  4. Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo

    DOE PAGES

    White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; ...

    2015-07-07

    Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet as efficient as is needed for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudhyadhom, A; McGuinness, C; Descovich, M

    Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match with beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom versus a Ray-Tracing calculation on a single beam collimator-by-collimator calculation. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with film insert. Separately, plans were delivered in an in-house created lung phantom with a PinPoint chamber insert within a lung simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small size collimators (10, 12.5, and 15 mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2×2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom show high correspondence of over 95% gamma at the 2%/2mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5 mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10 mm collimator resulted in a dose difference of ~8% for both Monte-Carlo and Ray-Tracing, indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10 mm fields and smaller.

  6. Solar proton exposure of an ICRU sphere within a complex structure Part I: Combinatorial geometry.

    PubMed

    Wilson, John W; Slaba, Tony C; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A

    2016-06-01

    The 3DHZETRN code, with improved neutron and light ion (Z≤2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency. Published by Elsevier Ltd.

  7. Microcanonical-ensemble computer simulation of the high-temperature expansion coefficients of the Helmholtz free energy of a square-well fluid

    NASA Astrophysics Data System (ADS)

    Sastre, Francisco; Moreno-Hilario, Elizabeth; Sotelo-Serna, Maria Guadalupe; Gil-Villegas, Alejandro

    2018-02-01

    The microcanonical-ensemble computer simulation method (MCE) is used to evaluate the perturbation terms Ai of the Helmholtz free energy of a square-well (SW) fluid. The MCE method offers a very efficient and accurate procedure for the determination of perturbation terms of discrete-potential systems such as the SW fluid and surpasses the standard NVT canonical ensemble Monte Carlo method, allowing the calculation of the first six expansion terms. Results are presented for the case of a SW potential with attractive ranges 1.1 ≤ λ ≤ 1.8. Using a semi-empirical representation of the MCE values for Ai, we also discuss the accuracy in the determination of the phase diagram of this system.

  8. Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method

    NASA Technical Reports Server (NTRS)

    Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.

    1990-01-01

    A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.

  9. Confidence intervals for the first crossing point of two hazard functions.

    PubMed

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, there have been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event, a nonparametric procedure using the kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.

  10. Mean glandular dose to patients from stereotactic breast biopsy procedures.

    PubMed

    Paixão, Lucas; Chevalier, Margarita; Hurtado-Romero, Antonio E; Garayoa, Julia

    2018-06-07

    The aim of this work is to study the radiation doses delivered to a group of patients that underwent a stereotactic breast biopsy (SBB) procedure. Mean glandular doses (MGD) were estimated from the air kerma measured at the breast entrance surface, multiplied by specific conversion coefficients (DgN) estimated using Monte Carlo simulations. DgN values were calculated for the 0° and ±15° projections used in SBB and for the particular beam quality. Data on 61 patients were collected, showing that a typical SBB procedure is composed of 10 images. MGD was on average (4 ± 2) mGy, with (0.38 ± 0.06) mGy per image. The use of specific conversion coefficients instead of typical DgN values for mammography/tomosynthesis yields MGD values for SBB that are on average around 65% lower. © 2018 Institute of Physics and Engineering in Medicine.

  11. Testing equality and interval estimation in binary responses when high dose cannot be used first under a three-period crossover design.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2015-01-01

    When comparing two doses of a new drug with a placebo, we may consider using a crossover design subject to the condition that the high dose cannot be administered before the low dose. Under a random-effects logistic regression model, we focus our attention on dichotomous responses when the high dose cannot be used first under a three-period crossover trial. We derive asymptotic test procedures for testing equality between treatments. We further derive interval estimators to assess the magnitude of the relative treatment effects. We employ Monte Carlo simulation to evaluate the performance of these test procedures and interval estimators in a variety of situations. We use data taken as part of a trial comparing two different doses of an analgesic with a placebo for the relief of primary dysmenorrhea to illustrate the use of the proposed test procedures and estimators.

  12. An Extension of the Chi-Square Procedure for Non-NORMAL Statistics, with Application to Solar Neutrino Data

    NASA Astrophysics Data System (ADS)

    Sturrock, P. A.

    2008-01-01

    Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable is consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
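
    The essence of such an extension is to replace the analytic chi-square law with a null distribution built by simulation from the actual measurement pdf. The sketch below illustrates that idea for a deliberately skewed (log-normal) error model; it is a schematic of the approach, not Sturrock's exact procedure, and the "observed" dataset is itself simulated.

```python
import numpy as np
from scipy.stats import chi2 as chi2_dist

rng = np.random.default_rng(2)
n_meas, true_flux, s = 12, 1.0, 0.3

def draw_dataset():
    # constant-flux hypothesis with skewed multiplicative (log-normal) errors
    return true_flux * rng.lognormal(0.0, s, n_meas)

sigma = true_flux * np.sqrt(np.exp(s**2) - 1.0) * np.exp(s**2 / 2)  # error std

def chi2_stat(x):
    return np.sum(((x - x.mean()) / sigma) ** 2)

x_obs = draw_dataset()                     # stand-in for the real data
c_obs = chi2_stat(x_obs)

# Monte Carlo null distribution of the statistic under the constant model
null = np.array([chi2_stat(draw_dataset()) for _ in range(20_000)])
p_mc = (null >= c_obs).mean()
p_normal_theory = chi2_dist.sf(c_obs, n_meas - 1)   # would assume normal pdf

print(f"chi2 = {c_obs:.2f}, MC p = {p_mc:.3f}, "
      f"normal-theory p = {p_normal_theory:.3f}")
```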

  13. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir.

    PubMed

    Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-02-01

    The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. In this study, the superposition method is used for obtaining the dose distribution in the phantom without using the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult female patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and the sources. The results of this study showed no significant differences between the results of the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose of each source obtained by the MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy.

  14. Monte Carlo simulation of biomolecular systems with BIOMCSIM

    NASA Astrophysics Data System (ADS)

    Kamberaj, H.; Helms, V.

    2001-12-01

    A new Monte Carlo simulation program, BIOMCSIM, is presented that has been developed in particular to simulate the behaviour of biomolecular systems, leading to insights and understanding of their functions. The computational complexity in Monte Carlo simulations of high density systems, with large molecules like proteins immersed in a solvent medium, or when simulating the dynamics of water molecules in a protein cavity, is enormous. The program presented in this paper seeks to provide these desirable features, putting special emphasis on simulations in grand canonical ensembles. It uses different biasing techniques to increase the convergence of simulations, and periodic load balancing in its parallel version, to maximally utilize the available computer power. In periodic systems, the long-ranged electrostatic interactions can be treated by Ewald summation. The program is modularly organized, and implemented using an ANSI C dialect, so as to enhance its modifiability. Its performance is demonstrated in benchmark applications for the proteins BPTI and Cytochrome c Oxidase.
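
    The grand canonical moves at the heart of such a program follow the textbook insertion/deletion acceptance rules (with z = exp(beta*mu)/Lambda^3 the activity): insert with probability min[1, zV/(N+1) * exp(-beta dU)] and delete with min[1, N/(zV) * exp(-beta dU)]. The sketch below runs these moves for an ideal gas, where dU = 0 and the particle number should be Poisson with mean zV, which makes the bookkeeping easy to verify; BIOMCSIM's biasing, Ewald sums and parallelism sit on top of this basic scheme.

```python
import numpy as np

rng = np.random.default_rng(9)
zV = 50.0          # activity times volume: the target mean particle number
N = 0
samples = []

for step in range(200_000):
    if rng.random() < 0.5:                  # attempt an insertion
        if rng.random() < zV / (N + 1):     # ideal gas: dU = 0
            N += 1
    elif N > 0:                             # attempt a deletion
        if rng.random() < N / zV:
            N -= 1
    if step >= 20_000:                      # discard equilibration
        samples.append(N)

# For an ideal gas, N ~ Poisson(zV): mean and variance should both be ~zV.
print(f"<N> = {np.mean(samples):.2f}, var(N) = {np.var(samples):.2f} "
      f"(both expected ~{zV})")
```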

  15. Self-Learning Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large systems close to a phase transition or with strong frustrations, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.
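
    The two-step structure of SLMC can be sketched on a one-dimensional Ising chain: fit a cheap effective model to energies collected in a trial run, generate proposals by evolving the effective model, then correct with the exact Metropolis ratio so the sampling stays unbiased. The toy below, with a next-nearest-neighbour "true" model and a learned nearest-neighbour effective model, is a simplified illustration of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
L, beta, J1, J2 = 64, 0.5, 1.0, 0.3          # chain length, 1/T, couplings

def e_true(s):
    return -J1 * (s * np.roll(s, 1)).sum() - J2 * (s * np.roll(s, 2)).sum()

def corr_nn(s):                               # nearest-neighbour correlator
    return (s * np.roll(s, 1)).sum()

def sweep(s, J_nn, J_nnn):
    """One local-update Metropolis sweep using local energy differences."""
    for i in rng.integers(0, L, L):
        dE = 2.0 * s[i] * (J_nn * (s[i - 1] + s[(i + 1) % L])
                           + J_nnn * (s[i - 2] + s[(i + 2) % L]))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] *= -1

# Step 1: trial simulation with the true model -> training data -> fit
s = rng.choice([-1, 1], L)
feats, energies = [], []
for _ in range(2000):
    sweep(s, J1, J2)
    feats.append(corr_nn(s)); energies.append(e_true(s))
J_eff = -np.polyfit(feats, energies, 1)[0]    # learned effective coupling

# Step 2: propose blocks of cheap effective-model sweeps, correct exactly:
# accept with min(1, exp(-beta*dE_true) / exp(-beta*dE_eff))
accepted = 0
for _ in range(1000):
    s_new = s.copy()
    for _ in range(5):
        sweep(s_new, J_eff, 0.0)              # proposal chain on E_eff
    dE_t = e_true(s_new) - e_true(s)
    dE_e = -J_eff * (corr_nn(s_new) - corr_nn(s))
    if dE_t - dE_e <= 0 or rng.random() < np.exp(-beta * (dE_t - dE_e)):
        s = s_new; accepted += 1

print(f"J_eff = {J_eff:.3f}, block-update acceptance = {accepted / 1000:.2f}")
```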

  16. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.

  17. Monte Carlo Particle Lists: MCPL

    NASA Astrophysics Data System (ADS)

    Kittelmann, T.; Klinkby, E.; Knudsen, E. B.; Willendrup, P.; Cai, X. X.; Kanaki, K.

    2017-09-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.

  18. Baseball Monte Carlo Style.

    ERIC Educational Resources Information Center

    Houser, Larry L.

    1981-01-01

    Monte Carlo methods are used to simulate activities in baseball such as a team's "hot streak" and a hitter's "batting slump." Student participation in such simulations is viewed as a useful method of giving pupils a better understanding of the probability concepts involved. (MP)

  19. Molecular Simulations of The Formation of Gold-Molecule-Gold Junctions

    NASA Astrophysics Data System (ADS)

    Wang, Huachuan

    2013-03-01

    We perform classical molecular simulations combining grand canonical Monte Carlo (GCMC) sampling with molecular dynamics (MD) simulation to explore dynamic gold nanojunctions in an alkenedithiol (ADT) solvent. With the aid of a simple driving-spring model, which can reasonably represent the long-range elasticity of the gold electrode, the spring forces are obtained during the dynamic stretching procedure. A specific multi-time-scale double reversible reference system propagator (double-RESPA) algorithm has been designed for the metal-organic complex in the MD simulations to identify the detailed metal-molecule bonding geometry at the metal-molecule-metal interface. We investigate the variations of bonding sites of ADT molecules on gold nanojunctions at the Au (111) surface at a constant chemical potential. Simulation results show that an Au-ADT-Au interface is formed on the Au nanojunction, and that bond breaking occurs at the 1-1 bond of the monatomic chain at the cross-section, instead of at the Au-S bond. The breaking force is around 1.5 nN. These results are consistent with the experimental measurements.

  20. Vectorization of a particle code used in the simulation of rarefied hypersonic flow

    NASA Technical Reports Server (NTRS)

    Baganoff, D.

    1990-01-01

    A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of the vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized for the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10⁷ particles and 4×10⁵ mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.

  1. Instrumental resolution of the chopper spectrometer 4SEASONS evaluated by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Kajimoto, Ryoichi; Sato, Kentaro; Inamura, Yasuhiro; Fujita, Masaki

    2018-05-01

    We performed simulations of the resolution function of the 4SEASONS spectrometer at J-PARC by using the Monte Carlo simulation package McStas. The simulations showed reasonably good agreement with analytical calculations of energy and momentum resolutions by using a simplified description. We implemented new functionalities in Utsusemi, the standard data analysis tool used in 4SEASONS, to enable visualization of the simulated resolution function and predict its shape for specific experimental configurations.

  2. OBJECT KINETIC MONTE CARLO SIMULATIONS OF MICROSTRUCTURE EVOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.

    2013-09-30

    The objective is to report the development of the flexible object kinetic Monte Carlo (OKMC) simulation code KSOME (kinetic simulation of microstructure evolution), which can be used to simulate the microstructure evolution of complex systems under irradiation. In this report we briefly describe the capabilities of KSOME and present preliminary results for short-term annealing of single cascades in tungsten at various primary knock-on atom (PKA) energies and temperatures.

  3. High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media: 2. Transport results

    USGS Publications Warehouse

    Naff, R.L.; Haley, D.F.; Sudicky, E.A.

    1998-01-01

    In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported as well.
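
    The moment-based estimate at the core of the paper can be illustrated on a synthetic cloud. Below, particles follow simple one-dimensional advection-dispersion, for which the longitudinal macrodispersivity A_L = (1/(2v)) d(sigma_x^2)/dt should recover D/v exactly; the real study extracts the same quantities from transport through heterogeneous conductivity fields, where the time behaviour is far less trivial.

```python
import numpy as np

rng = np.random.default_rng(8)
n, dt, nsteps = 50_000, 0.1, 200
v, D = 1.0, 0.05              # mean velocity, dispersion coefficient

x = np.zeros(n)
t, m1, m2 = [], [], []
for k in range(1, nsteps + 1):
    x += v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), n)
    t.append(k * dt)
    m1.append(x.mean())       # first spatial moment (centroid)
    m2.append(x.var())        # second central moment (sigma_x^2)

v_est = np.polyfit(t, m1, 1)[0]        # centroid velocity
dvar_dt = np.polyfit(t, m2, 1)[0]      # growth rate of sigma_x^2
print(f"A_L estimate: {dvar_dt / (2.0 * v_est):.4f}  (true D/v = {D / v:.4f})")
```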

  4. Inferential Precision in Single-Case Time-Series Data Streams: How Well Does the EM Procedure Perform When Missing Observations Occur in Autocorrelated Data?

    PubMed Central

    Smith, Justin D.; Borckardt, Jeffrey J.; Nash, Michael R.

    2013-01-01

    The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations with such autocorrelated data streams. Mainly, to what extent do missing observations compromise inference when observations are not independent? Do the available missing data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the Expectation-Maximization algorithm, more commonly known as the EM procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed. PMID:22697454

  5. Avoiding overstating the strength of forensic evidence: Shrunk likelihood ratios/Bayes factors.

    PubMed

    Morrison, Geoffrey Stewart; Poh, Norman

    2018-05-01

    When strength of forensic evidence is quantified using sample data and statistical models, a concern may be raised as to whether the output of a model overestimates the strength of evidence. This is particularly the case when the amount of sample data is small, and hence sampling variability is high. This concern is related to concern about precision. This paper describes, explores, and tests three procedures which shrink the value of the likelihood ratio or Bayes factor toward the neutral value of one. The procedures are: (1) a Bayesian procedure with uninformative priors, (2) use of empirical lower and upper bounds (ELUB), and (3) a novel form of regularized logistic regression. As a benchmark, they are compared with linear discriminant analysis, and in some instances with non-regularized logistic regression. The behaviours of the procedures are explored using Monte Carlo simulated data, and tested on real data from comparisons of voice recordings, face images, and glass fragments. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.

    PubMed

    Yuan, J; Moses, G A; McKenty, P W

    2005-10-01

    A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight line approximation is used to follow the propagation of "Monte Carlo particles", which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show the Monte Carlo transport method predicts earlier ignition than the diffusion method, and generates a higher hot spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
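
    The straight-line tracking and continuous-slowing-down ingredients can be sketched on a one-dimensional slab mesh. The stopping power below is a toy S(E) ~ k/E that rises as the particle slows (producing a Bragg-peak-like deposition profile); real alpha stopping powers, the cylindrical Lagrangian mesh, mesh motion and rezoning are all beyond this sketch.

```python
import numpy as np

E0 = 3.5              # alpha birth energy [MeV] (DT fusion)
k = 4.0               # toy stopping-power constant: S(E) = k / E [MeV^2/cm]
n_cells, dx = 100, 0.02
edep = np.zeros(n_cells)

rng = np.random.default_rng(12)
for _ in range(10_000):
    E = E0
    mu = rng.random()                   # direction cosine, isotropic into +x
    cell = 0
    while E > 0.0 and cell < n_cells:
        path = dx / max(mu, 1e-3)       # path length across this slab cell
        dE = min(E, (k / E) * path)     # continuous slowing down along path
        edep[cell] += dE
        E -= dE
        cell += 1

peak = int(edep.argmax())
print(f"Bragg-peak cell: {peak} (depth {peak * dx:.2f} cm)")
```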

  7. Numerical and experimental analyses of the radiant heat flux produced by quartz heating systems

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Ash, Robert L.

    1994-01-01

    A method is developed for predicting the radiant heat flux distribution produced by tungsten filament, tubular fused-quartz envelope heating systems with reflectors. The method is an application of Monte Carlo simulation, which takes the form of a random walk or ray tracing scheme. The method is applied to four systems of increasing complexity, including a single lamp without a reflector, a single lamp with a flat reflector, a single lamp with a parabolic reflector, and up to six lamps in a six-lamp contoured-reflector heating unit. The application of the Monte Carlo method to the simulation of the thermal radiation generated by these systems is discussed. The procedures for numerical implementation are also presented. Experiments were conducted to study these quartz heating systems and to acquire measurements of the corresponding empirical heat flux distributions for correlation with analysis. The experiments were conducted such that several complicating factors could be isolated and studied sequentially. Comparisons of the experimental results with analysis are presented and discussed. Good agreement between the experimental and simulated results was obtained in all cases. This study shows that this method can be used to analyze very complicated quartz heating systems and can account for factors such as spectral properties, specular reflection from curved surfaces, source enhancement due to reflectors and/or adjacent sources, and interaction with a participating medium in a straightforward manner.
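
    The elementary ray-tracing step of such a scheme is easy to verify in isolation against an analytic radiation view factor: a differential diffuse emitter facing a parallel coaxial disk of radius R at distance H sees F = R^2/(R^2 + H^2). The sketch below samples cosine-weighted emission directions and counts disk hits; the lamp systems of the paper add spectra, curved specular reflectors and participating media on top of this kernel.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
R, H = 1.0, 2.0

# Cosine-weighted hemisphere sampling: with u uniform, sin^2(theta) = u.
u = rng.random(n)
sin_t = np.sqrt(u)
cos_t = np.sqrt(1.0 - u)

# Intersect each ray with the plane z = H; by symmetry the azimuth angle
# is irrelevant for a coaxial disk, so only the radial distance matters.
r_hit = H * sin_t / cos_t
f_mc = (r_hit <= R).mean()

f_exact = R**2 / (R**2 + H**2)
print(f"Monte Carlo F = {f_mc:.4f}, analytic F = {f_exact:.4f}")
```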

  8. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.

  9. Using beta coefficients to impute missing correlations in meta-analysis research: Reasons for caution.

    PubMed

    Roth, Philip L; Le, Huy; Oh, In-Sue; Van Iddekinge, Chad H; Bobko, Philip

    2018-06-01

    Meta-analysis has become a well-accepted method for synthesizing empirical research about a given phenomenon. Many meta-analyses focus on synthesizing correlations across primary studies, but some primary studies do not report correlations. Peterson and Brown (2005) suggested that researchers could use standardized regression weights (i.e., beta coefficients) to impute missing correlations. Indeed, their beta estimation procedures (BEPs) have been used in meta-analyses in a wide variety of fields. In this study, the authors evaluated the accuracy of BEPs in meta-analysis. We first examined how use of BEPs might affect results from a published meta-analysis. We then developed a series of Monte Carlo simulations that systematically compared the use of existing correlations (that were not missing) to data sets that incorporated BEPs (that impute missing correlations from corresponding beta coefficients). These simulations estimated ρ̄ (mean population correlation) and SDρ (true standard deviation) across a variety of meta-analytic conditions. Results from both the existing meta-analysis and the Monte Carlo simulations revealed that BEPs were associated with potentially large biases when estimating ρ̄ and even larger biases when estimating SDρ. Using only existing correlations often substantially outperformed use of BEPs and virtually never performed worse than BEPs. Overall, the authors urge a return to the standard practice of using only existing correlations in meta-analysis. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
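
    The mechanism behind the bias is that a standardized beta is a partial effect, which only equals the zero-order correlation when the other predictors are uncorrelated with the focal one. A small simulation in the spirit of the paper (with invented population values) makes the discrepancy visible:

```python
import numpy as np

rng = np.random.default_rng(10)
rho_x = 0.5                  # correlation between predictors x1 and x2
b1, b2 = 0.4, 0.3            # true standardized effects on y
n_studies, n_obs = 2000, 200

r_vals, beta_vals = [], []
for _ in range(n_studies):
    x = rng.multivariate_normal([0, 0], [[1, rho_x], [rho_x, 1]], n_obs)
    y = b1 * x[:, 0] + b2 * x[:, 1] + rng.normal(0, 1, n_obs)
    x = (x - x.mean(0)) / x.std(0)                 # standardize
    y = (y - y.mean()) / y.std()
    r_vals.append(np.corrcoef(x[:, 0], y)[0, 1])   # zero-order correlation
    beta_vals.append(np.linalg.lstsq(x, y, rcond=None)[0][0])  # std. beta

# beta systematically underestimates r here because x2 "absorbs" part of
# the shared variance; imputing r by beta would bias a meta-analysis.
print(f"mean r    = {np.mean(r_vals):.3f}")
print(f"mean beta = {np.mean(beta_vals):.3f}")
```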

  10. SU-E-T-507: Internal Dosimetry in Nuclear Medicine Using GATE and XCAT Phantom: A Simulation Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fallahpoor, M; Abbasi, M; Sen, A

    Purpose: Monte Carlo simulations are routinely used for internal dosimetry studies. These studies are conducted with humanoid phantoms such as the XCAT phantom. In this abstract we present the absorbed doses for various pairs of source and target organs using three common radiotracers in nuclear medicine. Methods: The GATE software package is used for the Monte Carlo simulations. A typical female XCAT phantom is used as the input. Three radiotracers, 153Sm, 131I and 99mTc, are studied. The Specific Absorbed Fraction (SAF) for gamma rays (99mTc, 153Sm and 131I) and Specific Fraction (SF) for beta particles (153Sm and 131I) are calculated for all 100 pairs of source and target organs including brain, liver, lung, pancreas, kidney, adrenal, spleen, rib bone, bladder and ovaries. Results: The source organs themselves gain the highest absorbed dose as compared to other organs. The dose is found to be inversely proportional to distance from the source organ. In the SAF results for 153Sm, when the source organ is the lung, the rib bone gains 0.0730 kg⁻¹, which is more than the lung itself. Conclusion: The absorbed dose for various organs was studied in terms of SAF and SF. Such studies hold importance for future therapeutic procedures and optimization of the administered radiotracer.

  11. A Monte Carlo method using octree structure in photon and electron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogawa, K.; Maeda, S.

    Most of the early Monte Carlo calculations in medical physics were used to calculate absorbed dose distributions, and detector responses and efficiencies. Recently, data acquisition in Single Photon Emission CT (SPECT) has been simulated by a Monte Carlo method to evaluate scatter photons generated in a human body and a collimator. Monte Carlo simulations in SPECT data acquisition are generally based on the transport of photons only, because the photons being simulated are low energy, and therefore the bremsstrahlung production by the generated electrons is negligible. Since the transport calculation of photons without electrons is much simpler than that with electrons, it is possible to accomplish high-speed simulation in a simple object with one medium. Here, object description is important in performing the photon and/or electron transport using a Monte Carlo method efficiently. The authors propose a new description method using an octree representation of an object. Thus, even if the boundaries of each medium are represented accurately, high-speed calculation of photon transport can be accomplished because the number of voxels is much smaller than in the voxel-based approach, which represents an object as a union of voxels of the same size. This Monte Carlo code using the octree representation of an object first establishes the simulation geometry by reading an octree string, which is produced by forming an octree structure from a set of serial sections for the object before the simulation; then it transports photons in the geometry. Using the code, if the user just prepares a set of serial sections for the object in which he or she wants to simulate photon trajectories, the simulation can be performed automatically using the suboptimal geometry simplified by the octree representation, without forming the optimal geometry by hand.
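
    A minimal version of the octree idea is sketched below: uniform regions collapse into single leaves, so a "which medium is here?" query touches far fewer nodes than a raw voxel grid. The class name and the construction from a dense voxel array are hypothetical stand-ins; the paper builds its tree from serial object sections.

```python
import numpy as np

class OctreeNode:
    """Octree over a cubic voxel grid of medium indices (side = power of 2)."""

    def __init__(self, grid, origin, size):
        self.origin, self.size = np.asarray(origin), size
        media = np.unique(grid)
        if len(media) == 1 or size == 1:     # uniform region -> one leaf
            self.medium, self.children = int(media[0]), None
        else:                                # split into 8 octants
            self.medium, self.children, h = None, [], size // 2
            for code in range(8):
                off = np.array([(code >> a) & 1 for a in range(3)]) * h
                sub = grid[off[0]:off[0] + h,
                           off[1]:off[1] + h,
                           off[2]:off[2] + h]
                self.children.append(OctreeNode(sub, self.origin + off, h))

    def lookup(self, p):
        """Medium index at integer voxel coordinate p (descends the tree)."""
        if self.children is None:
            return self.medium
        rel, h = np.asarray(p) - self.origin, self.size // 2
        code = sum(int(rel[a] >= h) << a for a in range(3))
        return self.children[code].lookup(p)

# 16^3 phantom: a sphere of medium 1 embedded in medium 0
n = 16
i, j, k = np.mgrid[:n, :n, :n]
grid = ((i - 8)**2 + (j - 8)**2 + (k - 8)**2 < 25).astype(int)

tree = OctreeNode(grid, (0, 0, 0), n)
for p in [(0, 0, 0), (8, 8, 8), (15, 3, 9)]:
    assert tree.lookup(p) == grid[p]
```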

  12. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond

    2016-04-15

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10⁷ primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used for in vivo range verification.

  13. SU-E-T-222: Computational Optimization of Monte Carlo Simulation On 4D Treatment Planning Using the Cloud Computing Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, J

    Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used for dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created with the DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of the plan's computing time on the number of compute nodes was optimized while varying the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: It was found that the dependence of computing time on the number of compute nodes was affected by the diminishing returns of adding nodes to the Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and of the dose reconstruction time on this dependence were not significant once more than 15 compute nodes were used in the Monte Carlo simulations. Conclusion: The issue of long computing time in a 4D treatment plan, which requires Monte Carlo dose calculations in all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the number of compute nodes selected for simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant in that range.
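
    The diminishing return noted above is the familiar Amdahl behaviour: once the serial per-plan work (image deformation, dose reconstruction) dominates, extra nodes stop paying off. A toy calculation with made-up timings (not the paper's measured data) illustrates why an optimum in the 5-15 node range appears:

        def plan_time(n_nodes, serial_s=600.0, parallel_s=9000.0):
            """Total 4D plan time when only the MC dose calculation parallelizes."""
            return serial_s + parallel_s / n_nodes

        for n in (1, 5, 10, 15, 20, 40):
            print(f"{n:3d} nodes -> {plan_time(n) / 60:6.1f} min")
        # Past roughly 10-15 nodes the serial part dominates and the speedup
        # flattens, matching the 5-15 node optimum reported above.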

  14. SU-F-T-149: Development of the Monte Carlo Simulation Platform Using Geant4 for Designing Heavy Ion Therapy Beam Nozzle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Jae-ik; Yoo, SeungHoon; Cho, Sungho

    Purpose: A significant issue in particle therapy with protons and carbon ions is accurate dose delivery from the beam line to the patient. In designing the complex delivery system, Monte Carlo simulation can be used to model the various physical interactions in scatterers and filters. In this report, we present the development of a Monte Carlo simulation platform, based on Geant4, to help design a prototype particle therapy nozzle. We also show the prototype design of the particle therapy beam nozzle for the Korea Heavy Ion Medical Accelerator (KHIMA) project at the Korea Institute of Radiological and Medical Sciences (KIRAMS) in the Republic of Korea. Methods: We developed a simulation platform for a particle therapy beam nozzle using Geant4. On this platform, a simple prototype nozzle for a carbon scanning system was designed. For comparison with theoretical beam optics, the lateral beam profile at the isocenter was compared with the Monte Carlo simulation result. From this analysis, we can predict the beam spot properties of the KHIMA system and implement spot size optimization for our spot scanning system. Results: To characterize the scanning system, various combinations of the accelerator spot size with the ridge filter and beam monitor were tested as a simple design for the KHIMA dose delivery system. Conclusion: In this report, we presented part of the simulation platform and the characteristics study. This study is ongoing, with the aim of developing a simulation platform that includes the beam nozzle and a dose verification tool with the treatment planning system; results will be presented as soon as they become available.

  15. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  16. MO-FG-BRA-01: 4D Monte Carlo Simulations for Verification of Dose Delivered to a Moving Anatomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gholampourkashi, S; Cygler, J E.; The Ottawa Hospital Cancer Centre, Ottawa, ON

    Purpose: To validate 4D Monte Carlo (MC) simulations of dose delivery by an Elekta Agility linear accelerator to a moving phantom. Methods: Monte Carlo simulations were performed using the 4DdefDOSXYZnrc/EGSnrc user code, which samples a new geometry for each incident particle and calculates the dose in a continuously moving anatomy. A Quasar respiratory motion phantom with a lung insert containing a 3 cm diameter tumor was used for dose measurements on an Elekta Agility linac with the phantom in stationary and moving states. Dose to the center of the tumor was measured using calibrated EBT3 film and the RADPOS 4D dosimetry system. A VMAT plan covering the tumor was created on the static CT scan of the phantom using Monaco V.5.10.02. A validated BEAMnrc model of our Elekta Agility linac was used for Monte Carlo simulations on stationary and moving anatomies. To compare the planned and delivered doses, linac log files recorded during measurements were used for the simulations. For 4D simulations, deformation vectors that modeled the rigid translation of the lung insert were generated as input to the 4DdefDOSXYZnrc code, along with the phantom motion trace recorded with RADPOS during the measurements. Results: Monte Carlo simulations and film measurements were found to agree within 2 mm/2% for 97.7% of points in the film in the static phantom and 95.5% in the moving phantom. Dose values based on film and RADPOS measurements are within 2% of each other and within 2σ of experimental uncertainties with respect to simulations. Conclusion: Our 4D Monte Carlo simulation using the defDOSXYZnrc code accurately calculates dose delivered to a moving anatomy. Future work will focus on further investigation of VMAT delivery on a moving phantom to improve the agreement between simulation and measurements, as well as on establishing the accuracy of our method in a deforming anatomy. This work was supported by the Ontario Consortium of Adaptive Interventions in Radiation Oncology (OCAIRO), funded by the Ontario Research Fund Research Excellence program.
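
    For a rigidly translating insert, the per-phase deformation vectors reduce to a constant field per phase. A minimal sketch of that construction (hypothetical helper names, with a synthetic sine trace standing in for the recorded RADPOS motion):

        import numpy as np

        def rigid_deformation_field(shape, displacement_vox):
            """One deformation vector per voxel; rigid motion means they are
            all equal to the phase's displacement (dx, dy, dz) in voxels."""
            field = np.empty(shape + (3,))
            field[...] = displacement_vox
            return field

        voxel_cm = 0.25
        trace_cm = 1.5 * np.sin(np.linspace(0, 2 * np.pi, 10,
                                            endpoint=False))   # 10 phases
        fields = [rigid_deformation_field((64, 64, 64),
                                          (0.0, 0.0, z / voxel_cm))
                  for z in trace_cm]   # superior-inferior translation only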

  17. Effect of the multiple scattering of electrons in Monte Carlo simulation of LINACS.

    PubMed

    Vilches, Manuel; García-Pareja, Salvador; Guerrero, Rafael; Anguiano, Marta; Lallena, Antonio M

    2008-01-01

    Results obtained from Monte Carlo simulations of the transport of electrons in thin slabs of dense material media and air slabs with different widths are analyzed. Various general purpose Monte Carlo codes have been used: PENELOPE, GEANT3, GEANT4, EGSNRC, MCNPX. Non-negligible differences between the angular and radial distributions after the slabs have been found. The effects of these differences on the depth doses measured in water are also discussed.

  18. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, Andreu; Badano, Aldo

    Purpose: Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speedup factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
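
    The speedup comes from mapping one photon to one GPU thread. The NumPy toy below (an illustration, not the authors' CUDA code, with placeholder interaction coefficients) shows the same "whole batch advances in lockstep" structure in vectorized form:

        import numpy as np
        rng = np.random.default_rng(1)

        n = 100_000                                   # one "thread" per photon
        pos = np.zeros((n, 3))
        dirn = np.tile([0.0, 0.0, 1.0], (n, 1))
        alive = np.ones(n, dtype=bool)
        mu_t, p_absorb = 0.2, 0.3                     # placeholder 1/cm, probability

        for _ in range(30):                           # all photons step together
            s = rng.exponential(1.0 / mu_t, n)        # batched free-flight distances
            pos[alive] += dirn[alive] * s[alive, None]
            alive &= rng.random(n) >= p_absorb        # survivors scatter on
            phi = rng.uniform(0.0, 2.0 * np.pi, n)    # isotropic re-scatter
            cost = rng.uniform(-1.0, 1.0, n)
            sint = np.sqrt(1.0 - cost ** 2)
            dirn[alive] = np.c_[sint * np.cos(phi),
                                sint * np.sin(phi), cost][alive]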

  19. Estimating radiation dose to organs of patients undergoing conventional and novel multidetector CT exams using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Angel, Erin

    Advances in Computed Tomography (CT) technology have led to an increase in the modality's diagnostic capabilities and therefore its utilization, which has in turn led to an increase in radiation exposure to the patient population. As a result, CT imaging currently constitutes approximately half of the collective exposure to ionizing radiation from medical procedures. In order to understand the radiation risk, it is necessary to estimate the radiation doses absorbed by patients undergoing CT imaging. The most widely accepted risk models are based on radiosensitive organ dose as opposed to whole body dose. In this research, radiosensitive organ dose was estimated using Monte Carlo based simulations incorporating detailed multidetector CT (MDCT) scanner models, specific scan protocols, and patient models based on accurate patient anatomy and representing a range of patient sizes. Organ doses were estimated for clinical MDCT exam protocols that pose a specific concern for radiosensitive organs or regions. These estimates include fetal dose for pregnant patients undergoing abdomen/pelvis CT exams or undergoing exams to diagnose pulmonary embolism and venous thromboembolism. Breast and lung doses were estimated for patients undergoing coronary CTA imaging, conventional fixed tube current chest CT, and conventional tube current modulated (TCM) chest CT exams. The correlation of organ dose with patient size was quantified for pregnant patients undergoing abdomen/pelvis exams and for all breast and lung dose estimates presented. Novel dose reduction techniques were developed that incorporate organ location and are specifically designed to reduce dose to radiosensitive organs during CT acquisition. A generalizable model was created for simulating conventional and novel attenuation-based TCM algorithms, which can be used in simulations estimating organ dose for any patient model. The generalizable model is a significant contribution of this work, as it lays the foundation for the future of simulating TCM using Monte Carlo methods. As a result of this research, organ dose can be estimated for individual patients undergoing specific conventional MDCT exams. This research also brings understanding to conventional and novel dose reduction techniques in CT and their effect on organ dose.

  20. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Ellis; Derek Gaston; Benoit Forget

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
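
    A Functional Expansion Tally scores the coefficients of an orthogonal-polynomial expansion rather than a fixed spatial histogram, so the tallied shape can later be evaluated on any downstream mesh. A minimal one-dimensional Legendre sketch of the idea (illustrative, not OpenMC's implementation):

        import numpy as np
        from numpy.polynomial import legendre

        ORDER = 4
        coeffs = np.zeros(ORDER + 1)
        total_weight = 0.0

        def score_event(z, weight):
            """Accumulate (2n+1)/2 * w * P_n(z) for a collision at z in [-1, 1]."""
            global total_weight
            total_weight += weight
            for n in range(ORDER + 1):
                basis = np.zeros(n + 1)
                basis[n] = 1.0                         # select P_n
                coeffs[n] += weight * (2 * n + 1) / 2 * legendre.legval(z, basis)

        def power_shape(z):
            """Evaluate the reconstructed shape at any point(s) z in [-1, 1]."""
            return legendre.legval(z, coeffs / max(total_weight, 1e-300))

        for z in np.random.default_rng(3).uniform(-1, 1, 1000):  # fake collisions
            score_event(z, weight=1.0)

    Because only ORDER + 1 numbers are transferred, the receiving code (here, a finite element solver) can evaluate power_shape at its own quadrature points instead of interpolating a fixed grid.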

  1. Multi-Conformation Monte Carlo: A Method for Introducing Flexibility in Efficient Simulations of Many-Protein Systems.

    PubMed

    Prytkova, Vera; Heyden, Matthias; Khago, Domarin; Freites, J Alfredo; Butts, Carter T; Martin, Rachel W; Tobias, Douglas J

    2016-08-25

    We present a novel multi-conformation Monte Carlo simulation method that enables the modeling of protein-protein interactions and aggregation in crowded protein solutions. This approach is relevant to a molecular-scale description of realistic biological environments, including the cytoplasm and the extracellular matrix, which are characterized by high concentrations of biomolecular solutes (e.g., 300-400 mg/mL for proteins and nucleic acids in the cytoplasm of Escherichia coli). Simulation of such environments necessitates the inclusion of a large number of protein molecules. Therefore, computationally inexpensive methods, such as rigid-body Brownian dynamics (BD) or Monte Carlo simulations, can be particularly useful. However, as we demonstrate herein, the rigid-body representation typically employed in simulations of many-protein systems gives rise to certain artifacts in protein-protein interactions. Our approach allows us to incorporate molecular flexibility in Monte Carlo simulations at low computational cost, thereby eliminating ambiguities arising from structure selection in rigid-body simulations. We benchmark and validate the methodology using simulations of hen egg white lysozyme in solution, a well-studied system for which extensive experimental data, including osmotic second virial coefficients, small-angle scattering structure factors, and multiple structures determined by X-ray and neutron crystallography and solution NMR, as well as rigid-body BD simulation results, are available for comparison.
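
    A sketch of how such a move can be organized (a simplified Metropolis scheme under my own assumptions, not necessarily the authors' exact acceptance rule): each molecule carries a conformer label, and a trial move re-draws it from a precomputed conformer library:

        import math, random

        def try_conformer_swap(mol, library, energy_fn, beta):
            """Metropolis move: propose replacing mol's conformer with one drawn
            from a fixed library; accept with probability min(1, exp(-beta*dE))."""
            old_conf = mol["conf"]
            old_energy = energy_fn(mol)
            mol["conf"] = random.choice(library)       # trial conformer
            d_energy = energy_fn(mol) - old_energy
            if d_energy > 0.0 and random.random() >= math.exp(-beta * d_energy):
                mol["conf"] = old_conf                 # reject: restore old one

        # usage sketch: mol = {"conf": 0, "pos": ..., "orient": ...};
        # interleave these swaps with rigid-body translation/rotation moves.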

  2. Comparison of effects of copropagated and precomputed atmosphere profiles on Monte Carlo trajectory simulation

    NASA Technical Reports Server (NTRS)

    Queen, Eric M.; Omara, Thomas M.

    1990-01-01

    A realization of a stochastic atmosphere model for use in simulations is presented. The model provides pressure, density, temperature, and wind velocity as functions of latitude, longitude, and altitude, and is implemented in a three-degree-of-freedom simulation package. This implementation is used in the Monte Carlo simulation of an aeroassisted orbital transfer maneuver, and the results are compared to those of a more traditional approach.

  3. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over the past two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach for porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing the computational time of the MC simulation and obtaining a simulation speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.

  4. Inelastic neutron scattering of large molecular systems: The case of the original benzylic amide [2]catenane

    NASA Astrophysics Data System (ADS)

    Caciuffo, Roberto; Esposti, Alessandra Degli; Deleuze, Michael S.; Leigh, David A.; Murphy, Aden; Paci, Barbara; Parker, Stewart F.; Zerbetto, Francesco

    1998-12-01

    The inelastic neutron scattering (INS) spectrum of the original benzylic amide [2]catenane is recorded and simulated by a semiempirical quantum chemical procedure coupled with the most comprehensive approach available to date, the CLIMAX program. The successful simulation of the spectrum indicates that the modified neglect of differential overlap (MNDO) model can reproduce the intramolecular vibrations of a molecular system as large as a catenane (136 atoms). Because of the computational costs involved and some numerical instabilities, a less expensive approach is attempted which involves the molecular mechanics-based calculation of the INS response in terms of the most basic formulation for the scattering activity. The encouraging results obtained validate the less computationally intensive procedure and allow its extension to the calculation of the INS spectrum for a second, theoretical, co-conformer, which, although structurally and energetically reasonable, is not, in fact, found in the solid state. The second structure was produced by a Monte Carlo simulated annealing method run in the conformational space (a procedure that would have been prohibitively expensive at the semiempirical level) and is characterized by a higher degree of intramolecular hydrogen bonding than the x-ray structure. The two alternative structures yield different simulated spectra, only one of which, the authentic one, is compatible with the experimental data. Comparison of the two simulated and experimental spectra affords the identification of an inelastic neutron scattering spectral signature of the correct hydrogen bonding motif in the region slightly above 700 cm-1. The study illustrates that combinations of simulated INS data and experimental results can be successfully used to discriminate between different proposed structures or possible hydrogen bonding motifs in large functional molecular systems.

  5. ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations

    NASA Astrophysics Data System (ADS)

    Freitag, Marc Dewi

    2013-02-01

    ME(SSY)**2 stands for "Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems." This code simulates the long term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (2-body relaxation, stellar mass spectrum, collisions, tidal disruption, ...). It is basically a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring most important physical processes, allows million-particle simulations spanning a Hubble time in a few CPU days on standard personal computers, and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires the use of routines from the "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).

  6. Data decomposition of Monte Carlo particle transport simulations via tally servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain, while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
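
    The essential idea is ownership routing: a scored event is sent to the server that owns that slice of the tally, so no single node ever holds the full array. A toy sketch of the routing (plain Python standing in for the MPI messaging; not OpenMC's actual code):

        def server_for(tally_index, n_servers):
            """Static ownership: tally bin -> server rank (modulo routing)."""
            return tally_index % n_servers

        class TallyServer:
            def __init__(self):
                self.partial = {}                      # this server's tally slice

            def receive(self, tally_index, score):
                self.partial[tally_index] = (
                    self.partial.get(tally_index, 0.0) + score)

        servers = [TallyServer() for _ in range(4)]
        scored_events = [(7, 0.3), (11, 1.2), (7, 0.4)]  # (bin, score) from trackers
        for tally_index, score in scored_events:
            servers[server_for(tally_index, len(servers))].receive(tally_index,
                                                                   score)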

  7. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, a field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four of the RNGs used in previous FPGA-based MC studies, along with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
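
    For reference, the recurrence behind an additive lagged Fibonacci generator is x_n = (x_{n-j} + x_{n-k}) mod 2^m. The sketch below is a plain-Python illustration of that recurrence with example lags (5, 17); a parallel FPGA deployment would run many such generators on carefully decorrelated seed blocks:

        class ALFG:
            def __init__(self, seed_state, j=5, k=17, m=32):
                assert len(seed_state) == k and j < k
                self.state = list(seed_state)   # circular buffer of last k outputs
                self.j, self.k, self.mask = j, k, (1 << m) - 1
                self.i = 0                      # count of numbers generated

            def next(self):
                s, k, j = self.state, self.k, self.j
                x = (s[(self.i - j) % k] + s[(self.i - k) % k]) & self.mask
                s[self.i % k] = x               # overwrite the oldest entry
                self.i += 1
                return x

        gen = ALFG(seed_state=range(1, 18))     # any nonzero seed block works
        print([gen.next() for _ in range(5)])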

  8. Split Orthogonal Group: A Guiding Principle for Sign-Problem-Free Fermionic Simulations

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Ye-Hua; Iazzi, Mauro; Troyer, Matthias; Harcos, Gergely

    2015-12-01

    We present a guiding principle for designing fermionic Hamiltonians and quantum Monte Carlo (QMC) methods that are free from the infamous sign problem by exploiting the Lie groups and Lie algebras that appear naturally in the Monte Carlo weight of fermionic QMC simulations. Specifically, rigorous mathematical constraints on the determinants involving matrices that lie in the split orthogonal group provide a guideline for sign-free simulations of fermionic models on bipartite lattices. This guiding principle not only unifies the recent solutions of the sign problem based on the continuous-time quantum Monte Carlo methods and the Majorana representation, but also suggests new efficient algorithms to simulate physical systems that were previously prohibitive because of the sign problem.

  9. Study of photo-oxidative reactivity of sunscreening agents based on photo-oxidation of uric acid by kinetic Monte Carlo simulation.

    PubMed

    Moradmand Jalali, Hamed; Bashiri, Hadis; Rasa, Hossein

    2015-05-01

    In the present study, the mechanism of free radical production by light-reflective agents in sunscreens (TiO2, ZnO, and ZrO2) was obtained by applying kinetic Monte Carlo simulation. The rate constants for each step of the suggested mechanism were obtained by simulation. The effect of the initial concentrations of mineral oxides and uric acid on the rate of uric acid photo-oxidation under irradiation of some sun care agents was studied. The kinetic Monte Carlo simulation results agree qualitatively with the existing experimental data for the production of free radicals by sun care agents. Copyright © 2015 Elsevier B.V. All rights reserved.
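
    Kinetic Monte Carlo of a reaction mechanism is typically a Gillespie-type stochastic simulation. The sketch below shows the generic machinery with placeholder species and rate constants (not the paper's fitted mechanism):

        import math, random

        def kmc(counts, reactions, t_end):
            """counts: dict species->int; reactions: (rate, reactants, products)."""
            t = 0.0
            while t < t_end:
                props = [r * math.prod(counts[s] for s in re)
                         for r, re, _ in reactions]
                total = sum(props)
                if total == 0:
                    break
                t += -math.log(random.random()) / total   # exponential waiting time
                pick, u = 0, random.random() * total      # pick reaction by weight
                while u > props[pick]:
                    u -= props[pick]
                    pick += 1
                _, re, pr = reactions[pick]
                for s in re: counts[s] -= 1
                for s in pr: counts[s] += 1
            return counts

        # e.g. photo-excitation TiO2 -> TiO2*, oxidation TiO2* + UA -> TiO2 + UAox
        state = kmc({"TiO2": 200, "TiO2*": 0, "UA": 500, "UAox": 0},
                    [(0.05, ["TiO2"], ["TiO2*"]),
                     (0.001, ["TiO2*", "UA"], ["TiO2", "UAox"])], t_end=50.0)

    Fitting then amounts to adjusting the rate constants until simulated concentration curves match the measured photo-oxidation data.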

  10. Planck 2015 results: VI. LFI mapmaking

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Ashdown, M.; ...

    2016-09-20

    This article describes the mapmaking procedure applied to Planck Low Frequency Instrument (LFI) data. The mapmaking step takes as input the calibrated timelines and pointing information. The main products are sky maps of the I, Q, and U Stokes components. For the first time, we present polarization maps at LFI frequencies. The mapmaking algorithm is based on a destriping technique, which is enhanced with a noise prior. The Galactic region is masked to reduce errors arising from bandpass mismatch and high signal gradients. We apply horn-uniform radiometer weights to reduce the effects of beam-shape mismatch. The algorithm is the same as that used for the 2013 release, apart from small changes in parameter settings. We validate the procedure through simulations. Special emphasis is put on the control of systematics, which is particularly important for accurate polarization analysis. We also produce low-resolution versions of the maps and corresponding noise covariance matrices. These serve as input in later analysis steps and parameter estimation. The noise covariance matrices are validated through noise Monte Carlo simulations. The residual noise in the map products is characterized through analysis of half-ring maps, noise covariance matrices, and simulations.

  11. Simulation Procedure for Lifelong Flight Behavior of Electrons Consistent in Three Domains of Time t, Position x, and Energy ɛ

    NASA Astrophysics Data System (ADS)

    Ikuta, Nobuaki; Takeda, Akihide

    2017-12-01

    Research on the flight behavior of electrons and ions in a gas under an electric field has recently moved toward clarifying the mechanism of the spatiotemporal development of a swarm, but the unknown state function f(r,c,t) of the Boltzmann equation has not been obtained in explicit form. However, a few papers on the spatiotemporal development of an electron swarm using Monte Carlo simulation have been published. Meanwhile, a new simulation procedure for obtaining the lifelong state function FfT(t,x,ɛ) and local transport quantities J(t,x,ɛ) of electrons in the three domains of time t, one-dimensional position x, and energy ɛ under arbitrary initial and boundary conditions has been developed by extending the previously reported flight-time-integral (FTI) methods, and is named the 3D-FTI method. A preliminary calculation has shown that this method can provide the flight behavior of individual electrons in a swarm and the local transport quantities, consistent across the three domains, with reasonable accuracy and career dependences.

  12. Deterministic theory of Monte Carlo variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueki, T.; Larsen, E.W.

    1996-12-31

    The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
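
    For readers unfamiliar with the exponential transform: flight distances are drawn from a stretched density and the particle's weight absorbs the ratio of true to biased densities, which is exactly the weight variable w that the modified transport problem above tracks. A minimal sketch:

        import math, random

        def biased_flight(sigma_t, p):
            """Sample a distance from (sigma_t - p) * exp(-(sigma_t - p) * x)
            and return (distance, weight multiplier). 0 <= p < sigma_t stretches
            paths in the biased direction; p = 0 recovers analog sampling."""
            sig_b = sigma_t - p
            x = -math.log(random.random()) / sig_b
            w = (sigma_t / sig_b) * math.exp(-p * x)   # true pdf / biased pdf
            return x, w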

  13. Recommender engine for continuous-time quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  14. Efficient Monte Carlo Methods for Biomolecular Simulations.

    NASA Astrophysics Data System (ADS)

    Bouzida, Djamal

    A new approach to efficient Monte Carlo simulations of biological molecules is presented. By relaxing the usual restriction to Markov processes, we are able to optimize performance while dealing directly with the inhomogeneity and anisotropy inherent in these systems. The advantage of this approach is that we can introduce a wide variety of Monte Carlo moves to deal with complicated motions of the molecule, while maintaining full optimization at every step. This enables the use of a variety of collective rotational moves that relax long-wavelength modes. We were able to show by explicit simulations that the resulting algorithms substantially increase the speed of the simulation while reproducing the correct equilibrium behavior. This approach is particularly intended for simulations of macromolecules, although we expect it to be useful in other situations. The dynamic optimization of the new Monte Carlo methods makes them very suitable for simulated annealing experiments on all systems whose state space is continuous in general, and to the protein folding problem in particular. We introduce an efficient annealing schedule using preferential bias moves. Our simulated annealing experiments yield structures whose free energies were lower than the equilibrated X-ray structure, which leads us to believe that the empirical energy function used does not fully represent the interatomic interactions. Furthermore, we believe that the largest discrepancies involve the solvent effects in particular.

  15. Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.

    2007-03-01

    In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.

  16. RECORDS: improved Reporting of montE CarlO RaDiation transport Studies: Report of the AAPM Research Committee Task Group 268.

    PubMed

    Sechopoulos, Ioannis; Rogers, D W O; Bazalova-Carter, Magdalena; Bolch, Wesley E; Heath, Emily C; McNitt-Gray, Michael F; Sempau, Josep; Williamson, Jeffrey F

    2018-01-01

    Studies involving Monte Carlo simulations are common in both diagnostic and therapy medical physics research, as well as other fields of basic and applied science. As with all experimental studies, the conditions and parameters used for Monte Carlo simulations impact their scope, validity, limitations, and generalizability. Unfortunately, many published peer-reviewed articles involving Monte Carlo simulations do not provide the level of detail needed for the reader to be able to properly assess the quality of the simulations. The American Association of Physicists in Medicine Task Group #268 developed guidelines to improve reporting of Monte Carlo studies in medical physics research. By following these guidelines, manuscripts submitted for peer-review will include a level of relevant detail that will increase the transparency, the ability to reproduce results, and the overall scientific value of these studies. The guidelines include a checklist of the items that should be included in the Methods, Results, and Discussion sections of manuscripts submitted for peer-review. These guidelines do not attempt to replace the journal reviewer, but rather to be a tool during the writing and review process. Given the varied nature of Monte Carlo studies, it is up to the authors and the reviewers to use this checklist appropriately, being conscious of how the different items apply to each particular scenario. It is envisioned that this list will be useful both for authors and for reviewers, to help ensure the adequate description of Monte Carlo studies in the medical physics literature. © 2017 American Association of Physicists in Medicine.

  17. Neoclassical toroidal viscosity calculations in tokamaks using a δf Monte Carlo simulation and their verifications.

    PubMed

    Satake, S; Park, J-K; Sugama, H; Kanno, R

    2011-07-29

    Neoclassical toroidal viscosities (NTVs) in tokamaks are investigated using a δf Monte Carlo simulation, and are successfully verified with a combined analytic theory over a wide range of collisionality. A Monte Carlo simulation has been required in the study of NTV since the complexities in guiding-center orbits of particles and their collisions cannot be fully investigated by any means of analytic theories alone. Results yielded the details of the complex NTV dependency on particle precessions and collisions, which were predicted roughly in a combined analytic theory. Both numerical and analytic methods can be utilized and extended based on these successful verifications.

  18. Improved radial dose function estimation using current version MCNP Monte-Carlo simulation: Model 6711 and ISC3500 125I brachytherapy sources.

    PubMed

    Duggan, Dennis M

    2004-12-01

    Improved cross-sections in a new version of the Monte-Carlo N-particle (MCNP) code may eliminate discrepancies between radial dose functions (as defined by American Association of Physicists in Medicine Task Group 43) derived from Monte-Carlo simulations of low-energy photon-emitting brachytherapy sources and those from measurements on the same sources with thermoluminescent dosimeters. This is demonstrated for two 125I brachytherapy seed models, the Implant Sciences Model ISC3500 (I-Plant) and the Amersham Health Model 6711, by simulating their radial dose functions with two versions of MCNP, 4c2 and 5.

  19. Testing homogeneity of proportion ratios for stratified correlated bilateral data in two-arm randomized clinical trials.

    PubMed

    Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai

    2014-11-10

    Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on the score statistic generally performs well and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least squares estimate and on the logarithmic transformation with the Mantel-Haenszel estimate are recommended, as they do not involve computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.
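
    Such an empirical-size study has a simple generic shape: simulate many datasets under the null, run the test on each, and check that the rejection rate sits near the nominal level. A toy version with a bootstrap two-sample proportion test (a deliberately simplified statistic, not one of the paper's five):

        import numpy as np
        rng = np.random.default_rng(2)

        def reject(sample_a, sample_b, alpha=0.05, n_boot=200):
            """Bootstrap two-sample test for equal proportions (toy example)."""
            obs = sample_a.mean() - sample_b.mean()
            pooled = np.concatenate([sample_a, sample_b])
            diffs = np.empty(n_boot)
            for b in range(n_boot):              # resample under the pooled null
                ra = rng.choice(pooled, sample_a.size)
                rb = rng.choice(pooled, sample_b.size)
                diffs[b] = ra.mean() - rb.mean()
            p = (np.abs(diffs) >= abs(obs)).mean()
            return p < alpha

        hits = sum(reject(rng.binomial(1, 0.3, 50).astype(float),
                          rng.binomial(1, 0.3, 60).astype(float))
                   for _ in range(500))
        print("empirical size:", hits / 500)     # should sit near nominal 0.05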

  20. War-gaming application for future space systems acquisition: MATLAB implementation of war-gaming acquisition models and simulation results

    NASA Astrophysics Data System (ADS)

    Vienhage, Paul; Barcomb, Heather; Marshall, Karel; Black, William A.; Coons, Amanda; Tran, Hien T.; Nguyen, Tien M.; Guillen, Andy T.; Yoh, James; Kizer, Justin; Rogers, Blake A.

    2017-05-01

    The paper describes the MATLAB (MathWorks) programs that were developed during the REU workshop1 to implement The Aerospace Corporation's Unified Game-based Acquisition Framework and Advanced Game-based Mathematical Framework (UGAF-AGMF) and its associated War-Gaming Engine (WGE) models. Each game can be played from the perspective of the Department of Defense Acquisition Authority (DAA) or of an individual contractor (KTR). The programs also implement Aerospace's optimum "Program and Technical Baseline (PTB) and associated acquisition" strategy, which combines low Total Ownership Cost (TOC) with innovative designs while still meeting warfighter needs. The paper also describes the Bayesian Acquisition War-Gaming approach using Monte Carlo simulations, a numerical analysis technique that accounts for uncertainty in decision making, which simulates the PTB development and acquisition processes, and details the procedure of the implementation and the interactions between the games.

  1. Simulation of Satellite, Airborne and Terrestrial LiDAR with DART (I):Waveform Simulation with Quasi-Monte Carlo Ray Tracing

    NASA Technical Reports Server (NTRS)

    Gastellu-Etchegorry, Jean-Philippe; Yin, Tiangang; Lauret, Nicolas; Grau, Eloi; Rubio, Jeremy; Cook, Bruce D.; Morton, Douglas C.; Sun, Guoqing

    2016-01-01

    Light Detection And Ranging (LiDAR) provides unique data on the 3-D structure of atmosphere constituents and the Earth's surface. Simulating LiDAR returns for different laser technologies and Earth scenes is fundamental for evaluating and interpreting signal and noise in LiDAR data. Different types of models are capable of simulating LiDAR waveforms of Earth surfaces. Semi-empirical and geometric models can be imprecise because they rely on simplified simulations of Earth surfaces and light interaction mechanisms. On the other hand, Monte Carlo ray tracing (MCRT) models are potentially accurate but require long computational time. Here, we present a new LiDAR waveform simulation tool that is based on the introduction of a quasi-Monte Carlo ray tracing approach in the Discrete Anisotropic Radiative Transfer (DART) model. Two new approaches, the so-called "box method" and "Ray Carlo method", are implemented to provide robust and accurate simulations of LiDAR waveforms for any landscape, atmosphere and LiDAR sensor configuration (view direction, footprint size, pulse characteristics, etc.). The box method accelerates the selection of the scattering direction of a photon in the presence of scatterers with non-invertible phase function. The Ray Carlo method brings traditional ray-tracking into MCRT simulation, which makes computational time independent of LiDAR field of view (FOV) and reception solid angle. Both methods are fast enough for simulating multi-pulse acquisition. Sensitivity studies with various landscapes and atmosphere constituents are presented, and the simulated LiDAR signals compare favorably with their associated reflectance images and Laser Vegetation Imaging Sensor (LVIS) waveforms. The LiDAR module is fully integrated into DART, enabling more detailed simulations of LiDAR sensitivity to specific scene elements (e.g., atmospheric aerosols, leaf area, branches, or topography) and sensor configuration for airborne or satellite LiDAR sensors.
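
    The "box method" mentioned above addresses phase functions whose cumulative distribution cannot be inverted in closed form; rejection sampling under a bounding value is the standard construction. A minimal sketch of that principle (not DART's actual implementation):

        import math, random

        def sample_direction(phase, p_max):
            """phase(cos_theta) is an un-normalized phase function <= p_max."""
            while True:
                mu = random.uniform(-1.0, 1.0)            # candidate cos(theta)
                if random.random() * p_max <= phase(mu):  # accept under the curve
                    return mu, random.uniform(0.0, 2.0 * math.pi)  # (cos_t, phi)

        # e.g. a Henyey-Greenstein-like lobe with asymmetry g:
        g = 0.7
        hg = lambda mu: (1 - g * g) / (1 + g * g - 2 * g * mu) ** 1.5
        mu, phi = sample_direction(hg, p_max=hg(1.0))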

  2. An Energy-Dispersive X-Ray Fluorescence Spectrometry and Monte Carlo simulation study of Iron-Age Nuragic small bronzes ("Navicelle") from Sardinia, Italy

    NASA Astrophysics Data System (ADS)

    Schiavon, Nick; de Palmas, Anna; Bulla, Claudio; Piga, Giampaolo; Brunetti, Antonio

    2016-09-01

    A spectrometric protocol combining Energy-Dispersive X-Ray Fluorescence Spectrometry with Monte Carlo simulations of experimental spectra using the XRMC code package has been applied for the first time to characterize the elemental composition of a series of famous Iron Age small-scale archaeological bronze replicas of ships (known as the "Navicelle") from the Nuragic civilization in Sardinia, Italy. The proposed protocol is a useful, nondestructive, and fast analytical tool for Cultural Heritage samples. In the Monte Carlo simulations, each sample was modeled as a multilayered object composed of two or three layers depending on the sample: when all are present, the three layers are the original bronze substrate, the surface corrosion patina, and an outermost protective layer (Paraloid) applied during past restorations. The Monte Carlo simulations were able to account for the presence of the patina/corrosion layer as well as the presence of the Paraloid protective layer. They also accounted for the roughness effect commonly found at the surface of corroded metal archaeological artifacts. In this respect, the Monte Carlo simulation approach adopted here was, to the best of our knowledge, unique, and enabled determination of the bronze alloy composition together with the thickness of the surface layers without the need to first remove the surface patinas, a process potentially threatening the preservation of precious archaeological/artistic artifacts for future generations.

  3. A Monte Carlo simulation study of associated liquid crystals

    NASA Astrophysics Data System (ADS)

    Berardi, R.; Fehervari, M.; Zannoni, C.

    We have performed a Monte Carlo simulation study of a system of ellipsoidal particles with donor-acceptor sites modelling complementary hydrogen-bonding groups in real molecules. We have considered elongated Gay-Berne particles with terminal interaction sites allowing particles to associate and form dimers. The changes in the phase transitions and in the molecular organization and the interplay between orientational ordering and dimer formation are discussed. Particle flip and dimer moves have been used to increase the convergence rate of the Monte Carlo (MC) Markov chain.

  4. Raman Monte Carlo simulation for light propagation for tissue with embedded objects

    NASA Astrophysics Data System (ADS)

    Periyasamy, Vijitha; Jaafar, Humaira Bte; Pramanik, Manojit

    2018-02-01

    Monte Carlo (MC) simulation is one of the most prominent simulation techniques and is rapidly becoming the model of choice for studying light-tissue interaction. Monte Carlo simulation for light transport in multi-layered tissue (MCML) is adapted and modelled with different geometries by integrating embedded objects of various shapes (i.e., sphere, cylinder, cuboid, and ellipsoid) into the multi-layered structure. These geometries are useful in providing realistic tissue structures, such as models of lymph nodes, tumors, blood vessels, the head, and other simulation media. MC simulations were performed on these various geometric media. The simulation of MCML with embedded objects (MCML-EO) was extended to propagate photons in the defined medium with Raman scattering, and the location of Raman photon generation is recorded. Simulations were run on a modelled breast tissue with a tumor (spherical and ellipsoidal) and blood vessels (cylindrical). Results are presented as both A-line and B-line scans for the embedded objects, to determine the spatial locations where Raman photons were generated. Studies were done for different Raman probabilities.
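
    The geometric core of such an extension is a point-in-object test layered over the usual slab lookup: a photon's optical properties come from the embedded object whenever its position falls inside it. A sketch with a spherical inclusion (illustrative values, not the paper's parameters):

        import numpy as np

        SPHERE = {"center": np.array([0.0, 0.0, 0.5]), "radius": 0.2,   # cm
                  "props": {"mu_a": 0.5, "mu_s": 120.0, "g": 0.9}}
        LAYERS = [  # (z_top, z_bottom, properties) of the slab stack, in cm
            (0.0, 0.3, {"mu_a": 0.1, "mu_s": 90.0, "g": 0.85}),
            (0.3, 1.0, {"mu_a": 0.2, "mu_s": 70.0, "g": 0.80}),
        ]

        def properties_at(pos):
            if np.linalg.norm(pos - SPHERE["center"]) <= SPHERE["radius"]:
                return SPHERE["props"]                  # inside the tumor model
            for z_top, z_bot, props in LAYERS:
                if z_top <= pos[2] < z_bot:
                    return props                        # host layer
            return None                                 # photon left the tissue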

  5. Calibration of a portable HPGe detector using MCNP code for the determination of 137Cs in soils.

    PubMed

    Gutiérrez-Villanueva, J L; Martín-Martín, A; Peña, V; Iniguez, M P; de Celis, B; de la Fuente, R

    2008-10-01

    In situ gamma spectrometry provides a fast method to determine (137)Cs inventories in soils. To improve the accuracy of the estimates, one can use not only the information on the photopeak count rates but also on the peak to forward-scatter ratios. Before applying this procedure to field measurements, a calibration including several experimental simulations must be carried out in the laboratory. In this paper it is shown that Monte Carlo methods are a valuable tool to minimize the number of experimental measurements needed for the calibration.

  6. Robust neural network with applications to credit portfolio data analysis.

    PubMed

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2010-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (the robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A Majorization-Minimization (MM) algorithm was developed for the optimization. A Monte Carlo simulation study is conducted to assess the performance of the RNN. Comparison with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrates the advantage of the newly proposed procedure.
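
    Quantile regression replaces squared error with the check ("pinball") loss; training one network per quantile (e.g., tau = 0.1 and 0.9) yields the prediction bands mentioned above. A minimal sketch of the loss (illustrative code, not the authors'):

        import numpy as np

        def pinball_loss(y_true, y_pred, tau=0.5):
            """Check loss: asymmetric absolute error targeting quantile tau."""
            r = y_true - y_pred
            return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

        # Fits at tau = 0.1 and tau = 0.9 bracket the data, giving the kind of
        # prediction band used in the credit-portfolio application above.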

  7. Applying Monte-Carlo simulations to optimize an inelastic neutron scattering system for soil carbon analysis

    USDA-ARS?s Scientific Manuscript database

    Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and the acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...

  8. Play It Again: Teaching Statistics with Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Sigal, Matthew J.; Chalmers, R. Philip

    2016-01-01

    Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…

  9. Monte Carlo simulation of MOSFET dosimeter for electron backscatter using the GEANT4 code.

    PubMed

    Chow, James C L; Leung, Michael K K

    2008-06-01

    The aim of this study is to investigate the influence of the body of the metal-oxide-semiconductor field effect transistor (MOSFET) dosimeter in measuring the electron backscatter from lead. The electron backscatter factor (EBF), defined as the ratio of the dose at the tissue-lead interface to the dose at the same point without the presence of backscatter, was calculated by Monte Carlo simulation using the GEANT4 code. Electron beams with energies of 4, 6, 9, and 12 MeV were used in the simulation. It was found that in the presence of the MOSFET body, the EBFs were underestimated by about 2%-0.9% for electron beam energies of 4-12 MeV, respectively. The decrease of this underestimation with increasing electron energy can be explained by the fact that the small MOSFET dosimeter, made mainly of epoxy and silicon, attenuated not only the electron fluence of the beam from upstream, but also the electron backscatter generated by the lead underneath the dosimeter. However, this variation of the EBF underestimation is of the same order as the statistical uncertainties of the Monte Carlo simulations, which ranged from 1.3% to 0.8% for electron energies of 4-12 MeV, due to the small dosimetric volume. Such a small EBF deviation is therefore insignificant when the uncertainty of the Monte Carlo simulation is taken into account. Corresponding measurements were carried out, and deviations from the Monte Carlo results were within +/- 2%. Spectra of the energy deposited by the backscattered electrons in the dosimetric volumes, with and without the lead and MOSFET, were determined by Monte Carlo simulations. It was found that in both cases, whether the MOSFET body was present or absent in the simulation, the deviations of the electron energy spectra with and without the lead decrease with increasing electron beam energy. Moreover, the softer spectrum of the backscattered electrons when lead is present can result in a reduced MOSFET response due to stronger recombination in the SiO2 gate. It is concluded that the MOSFET dosimeter performs well for measuring electron backscatter from lead with electron beams. The uncertainty of the EBF, determined by comparing the results of Monte Carlo simulations and measurements, is well within the accuracy of the MOSFET dosimeter (< +/- 4.2%) specified by the manufacturer.

  10. Off-diagonal expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  11. Off-diagonal expansion quantum Monte Carlo.

    PubMed

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  12. Self-optimized construction of transition rate matrices from accelerated atomistic simulations with Bayesian uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Swinburne, Thomas D.; Perez, Danny

    2018-05-01

    A massively parallel method to build large transition rate matrices from temperature-accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher-scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.

  13. Simulation of dense amorphous polymers by generating representative atomistic models

    NASA Astrophysics Data System (ADS)

    Curcó, David; Alemán, Carlos

    2003-08-01

    A method for generating atomistic models of dense amorphous polymers is presented. The generated models can be used as starting structures for Monte Carlo and molecular dynamics simulations, but are also suitable for the direct evaluation of physical properties. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, an iterative algorithm is applied to relax the nonbonding interactions. In order to check the performance of the method, we examined structure-dependent properties for three polymeric systems: polyethylene (ρ=0.85 g/cm3), poly(L,D-lactic) acid (ρ=1.25 g/cm3), and polyglycolic acid (ρ=1.50 g/cm3). The method successfully generated representative packings for such dense systems using minimal computational resources.

  14. Simulation of noisy dynamical system by Deep Learning

    NASA Astrophysics Data System (ADS)

    Yeo, Kyongmin

    2017-11-01

    Deep learning has attracted huge attention due to its powerful representation capability. However, most studies of deep learning have focused on visual analytics or language modeling, and the capability of deep learning in modeling dynamical systems is not well understood. In this study, we use a recurrent neural network to model noisy nonlinear dynamical systems. In particular, we use a long short-term memory (LSTM) network, which constructs an internal nonlinear dynamical system. We propose a cross-entropy loss with spatial ridge regularization to learn a non-stationary conditional probability distribution from a noisy nonlinear dynamical system. A Monte Carlo procedure to perform time-marching simulations using the LSTM is presented. The behavior of the LSTM is studied using the noisy, forced Van der Pol oscillator and the Ikeda equation.

  15. A Simplified Model of Local Structure in Aqueous Proline Amino Acid Revealed by First-Principles Molecular Dynamics Simulations

    PubMed Central

    Troitzsch, Raphael Z.; Tulip, Paul R.; Crain, Jason; Martyna, Glenn J.

    2008-01-01

    Aqueous proline solutions are deceptively simple as they can take on complex roles such as protein chaperones, cryoprotectants, and hydrotropic agents in biological processes. Here, a molecular level picture of proline/water mixtures is developed. Car-Parrinello ab initio molecular dynamics (CPAIMD) simulations of aqueous proline amino acid at the B-LYP level of theory, performed using IBM's Blue Gene/L supercomputer and massively parallel software, reveal hydrogen-bonding propensities that are at odds with the predictions of the CHARMM22 empirical force field but are in better agreement with results of recent neutron diffraction experiments. In general, the CPAIMD (B-LYP) simulations predict a simplified structural model of proline/water mixtures consisting of fewer distinct local motifs. Comparisons of simulation results to experiment are made by direct evaluation of the neutron static structure factor S(Q) from CPAIMD (B-LYP) trajectories as well as to the results of the empirical potential structure refinement reverse Monte Carlo procedure applied to the neutron data. PMID:18790850

  17. Comparison of Geant4-DNA simulation of S-values with other Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    André, T.; Morini, F.; Karamitros, M.; Delorme, R.; Le Loirec, C.; Campos, L.; Champion, C.; Groetz, J.-E.; Fromm, M.; Bordage, M.-C.; Perrot, Y.; Barberet, Ph.; Bernal, M. A.; Brown, J. M. C.; Deleuze, M. S.; Francis, Z.; Ivanchenko, V.; Mascialino, B.; Zacharatou, C.; Bardiès, M.; Incerti, S.

    2014-01-01

    Monte Carlo simulations of S-values have been carried out with the Geant4-DNA extension of the Geant4 toolkit. The S-values have been simulated for monoenergetic electrons with energies ranging from 0.1 keV up to 20 keV, in liquid water spheres (for four radii, chosen between 10 nm and 1 μm), and for electrons emitted by five isotopes of iodine (131, 132, 133, 134 and 135), in liquid water spheres of varying radius (from 15 μm up to 250 μm). The results have been compared to those obtained from other Monte Carlo codes and from other published data. A Kolmogorov-Smirnov test confirmed the statistical compatibility of all simulation results.
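
    A two-sample Kolmogorov-Smirnov test of the kind used here is a one-liner with SciPy. In the sketch below the two samples are synthetic stand-ins for S-value results from two codes, not actual Geant4-DNA output.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(2)
    s_code_a = rng.normal(1.00, 0.05, 50)   # hypothetical S-values from code A
    s_code_b = rng.normal(1.01, 0.05, 50)   # hypothetical S-values from code B

    stat, p = ks_2samp(s_code_a, s_code_b)
    print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
    # A large p-value indicates the two result sets are statistically compatible.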

  18. Sensitivity of the diagnostic radiological index of protection to procedural factors in fluoroscopy.

    PubMed

    Jones, A Kyle; Pasciak, Alexander S; Wagner, Louis K

    2016-07-01

    To evaluate the sensitivity of the diagnostic radiological index of protection (DRIP), used to quantify the protective value of radioprotective garments, to procedural factors in fluoroscopy, in an effort to determine an appropriate set of scatter-mimicking primary beams to be used in measuring the DRIP. Monte Carlo simulations were performed to determine the shape of the scattered x-ray spectra incident on the operator in different clinical fluoroscopy scenarios, including interventional radiology and interventional cardiology (IC). Two clinical simulations studied the sensitivity of the scattered spectrum to gantry angle and patient size, while technical factors were varied according to measured automatic dose rate control (ADRC) data. Factorial simulations studied the sensitivity of the scattered spectrum to gantry angle, field of view, patient size, and beam quality for constant technical factors. Average energy (Eavg) was the figure of merit used to condense the fluence in each energy bin into a single numerical index. Beam quality had the strongest influence on the scattered spectrum in fluoroscopy. Many procedural factors affect the scattered spectrum indirectly through their effect on primary beam quality via the ADRC, e.g., gantry angle and patient size. Lateral C-arm rotation, common in IC, increased the energy of the scattered spectrum regardless of the direction of rotation. The effect of patient size on scattered radiation depended on ADRC characteristics, patient size, and procedure type. The scattered spectrum striking the operator in fluoroscopy is most strongly influenced by primary beam quality, particularly kV. Use cases for protective garments should be classified by typical procedural primary beam qualities, which are governed by the ADRC according to the impacts of patient size, anatomical location, and gantry angle.

  19. A comparison of Monte-Carlo simulations using RESTRAX and McSTAS with experiment on IN14

    NASA Astrophysics Data System (ADS)

    Wildes, A. R.; Šaroun, J.; Farhi, E.; Anderson, I.; Høghøj, P.; Brochier, A.

    2000-03-01

    Monte-Carlo simulations of a focusing supermirror guide after the monochromator on the IN14 cold neutron three-axis spectrometer at the I.L.L. were carried out using the instrument simulation programs RESTRAX and McSTAS. The simulations were compared to experiment to check their accuracy. The flux ratios over both a 100 and a 1600 mm² area at the sample position compare well, and there is very close agreement between simulation and experiment for the energy spread of the incident beam.

  20. Feasibility assessment of the interactive use of a Monte Carlo algorithm in treatment planning for intraoperative electron radiation therapy

    NASA Astrophysics Data System (ADS)

    Guerra, Pedro; Udías, José M.; Herranz, Elena; Santos-Miranda, Juan Antonio; Herraiz, Joaquín L.; Valdivieso, Manlio F.; Rodríguez, Raúl; Calama, Juan A.; Pascau, Javier; Calvo, Felipe A.; Illana, Carlos; Ledesma-Carbayo, María J.; Santos, Andrés

    2014-12-01

    This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and it was the aim of this work to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dose simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.

  1. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir

    PubMed Central

    Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-01-01

    Purpose The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. Material and methods In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult female patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used to simulate the phantoms, applicators, and sources. Results The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points; the difference at all important dosimetry points was less than 5%. Conclusions According to the results, applicator attenuation has no significant effect on the calculated point doses, and the superposition method, which adds the dose of each source obtained by MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy. PMID:29619061

  2. Fully kinetic simulations of magnetic reconnection in partially ionised gases

    NASA Astrophysics Data System (ADS)

    Innocenti, M. E.; Jiang, W.; Lapenta, G.; Markidis, S.

    2016-12-01

    Magnetic reconnection has been explored for decades as a way to convert magnetic energy into kinetic energy and heat and to accelerate particles in environments as different as the solar surface, planetary magnetospheres, the solar wind, accretion disks, and laboratory plasmas. When studying reconnection via simulations, it is usually assumed that the plasma is fully ionised, as is indeed the case in many of the above-mentioned settings. There are, however, exceptions, the most notable being the lower solar atmosphere. Small ionisation fractions are also registered in the warm neutral interstellar medium, in dense interstellar clouds, in protostellar and protoplanetary accretion disks, in tokamak edge plasmas, and in ad-hoc laboratory experiments [1]. We study here how magnetic reconnection is modified by the presence of a neutral background, i.e. when the majority of the gas is not ionised. The ionised plasma is simulated with the fully kinetic Particle-In-Cell (PIC) code iPic3D [2]. Collisions with the neutral background are introduced via a Monte Carlo plug-in. The standard Monte Carlo procedure [3] is employed to account for elastic, excitation and ionisation electron-neutral collisions, as well as for elastic scattering and charge exchange in ion-neutral collisions. Collisions with the background introduce resistivity in an otherwise collisionless plasma and modify the particle distribution functions: particles (ions at a faster rate) tend to thermalise to the background. To pinpoint the consequences of this, we compare reconnection simulations with and without the background. References [1] E E Lawrence et al. Physical Review Letters, 110(1):015001, 2013. [2] S Markidis et al. Mathematics and Computers in Simulation, 80(7):1509-1519, 2010. [3] K Nanbu. IEEE Transactions on Plasma Science, 28(3):971-990, 2000.

  3. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
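
    One of the appendix topics, the number of runs required, has a classic closed form in the zero-failure case: demonstrating success probability at least p_req at confidence C requires the smallest N with p_req**N <= 1 - C. The sketch below works an illustrative example (99.7% requirement at 90% confidence); the numbers are assumptions, not the TP's.

    import math

    p_req = 0.997       # required success probability (assumed)
    confidence = 0.90   # desired confidence level (assumed)

    # Zero observed failures in N runs demonstrates P(success) >= p_req
    # at confidence C once p_req**N <= 1 - C.
    n_runs = math.ceil(math.log(1.0 - confidence) / math.log(p_req))
    print(n_runs, "failure-free Monte Carlo runs needed")   # -> 767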

  4. Analysis of the Space Propulsion System Problem Using RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diego Mandelli; Curtis Smith; Cristian Rabiti

    This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENvironment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte Carlo sampling of randomly distributed events and Event Tree based analysis. In order to facilitate the input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN also allows interfacing with several numerical codes such as RELAP5 and RELAP-7 and with ad-hoc system simulators. For the space propulsion system problem, an ad-hoc simulator was developed in the Python language and then interfaced to RAVEN. This simulator fully models both deterministic behaviors (e.g., system dynamics and interactions between system components) and stochastic behaviors (i.e., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte Carlo). This analysis is used both to determine the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As also indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to produce risk-informed insights, such as the conditions under which different strategies can be followed.

  5. Potentials of mean force for biomolecular simulations: Theory and test on alanine dipeptide

    NASA Astrophysics Data System (ADS)

    Pellegrini, Matteo; Grønbech-Jensen, Niels; Doniach, Sebastian

    1996-06-01

    We describe a technique for generating potentials of mean force (PMF) between solutes in an aqueous solution. We first generate solute-solvent correlation functions (CF) using Monte Carlo (MC) simulations in which we place a single atom solute in a periodic boundary box containing a few hundred water molecules. We then make use of the Kirkwood superposition approximation, where the 3-body correlation function is approximated as the product of 2-body CFs, to describe the mean water density around two solutes. Computing the force generated on the solutes by this average water density allows us to compute potentials of mean force between the two solutes. For charged solutes an additional approximation involving dielectric screening is made, by setting the dielectric constant of water to ɛ=80. These potentials account, in an approximate manner, for the average effect of water on the atoms. Following the work of Pettitt and Karplus [Chem. Phys. Lett. 121, 194 (1985)], we approximate the n-body potential of mean force as a sum of the pairwise potentials of mean force. This allows us to run simulations of biomolecules without introducing explicit water, hence gaining several orders of magnitude in efficiency with respect to standard molecular dynamics techniques. We demonstrate the validity of this technique by first comparing the PMFs for methane-methane and sodium-chloride generated with this procedure, with those calculated with a standard Monte Carlo simulation with explicit water. We then compare the results of the free energy profiles between the equilibria of alanine dipeptide generated by the two methods.
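
    The link between a sampled correlation function and a potential of mean force is the standard relation w(r) = -kT ln g(r). The sketch below estimates g(r) from a histogram of pair distances and converts it to a PMF; the synthetic distance samples and temperature factor are assumptions, and this is not the authors' full Kirkwood-superposition pipeline.

    import numpy as np

    kT = 0.593   # kcal/mol near 298 K
    rng = np.random.default_rng(3)
    r = rng.normal(4.0, 0.6, 100_000)   # hypothetical pair distances (angstrom)

    hist, edges = np.histogram(r, bins=40, range=(2.0, 6.0))
    mid = 0.5 * (edges[:-1] + edges[1:])
    g = hist / (4.0 * np.pi * mid**2 * np.diff(edges))   # radial distribution (unnormalized)
    w = -kT * np.log(np.where(g > 0, g, np.nan))         # PMF, up to an additive constant
    print(f"PMF minimum near r = {mid[np.nanargmin(w)]:.2f} angstrom")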

  6. The effect of carrier gas flow rate and source cell temperature on low pressure organic vapor phase deposition simulation by direct simulation Monte Carlo method

    PubMed Central

    Wada, Takao; Ueda, Noriaki

    2013-01-01

    The process of low pressure organic vapor phase deposition (LP-OVPD) controls the growth of amorphous organic thin films, where the source gases (Alq3 molecules, etc.) are introduced into a hot wall reactor via an injection barrel using an inert carrier gas (N2). It is possible to control well such substrate properties as dopant concentration, deposition rate, and thickness uniformity of the thin film. In this paper, we present LP-OVPD simulation results using direct simulation Monte Carlo-Neutrals (Particle-PLUS neutral module), commercial software adopting the direct simulation Monte Carlo method. By properly estimating the evaporation rate from experimental vaporization enthalpies, the calculated deposition rates on the substrate agree well with the experimental results, which depend on carrier gas flow rate and source cell temperature. PMID:23674843

  7. Effect of Gold Nanoparticles on Prostate Dose Distribution under Ir-192 Internal and 18 MV External Radiotherapy Procedures Using Gel Dosimetry and Monte Carlo Method.

    PubMed

    Khosravi, H; Hashemi, B; Mahdavi, S R; Hejazi, P

    2015-03-01

    Polymer gels are considered new dosimeters for determining radiotherapy dose distributions in three dimensions. The ability of a new formulation of MAGIC-f polymer gel was assessed by experimental measurement and the Monte Carlo (MC) method to study the effect of gold nanoparticles (GNPs) on prostate dose distributions under internal Ir-192 and external 18 MV radiotherapy practices. A Plexiglas phantom was made representing the human pelvis. The GNPs, 15 nm in diameter and at 0.1 mM concentration, were synthesized using the chemical reduction method. Then, a new formulation of MAGIC-f gel was synthesized. The fabricated gel was poured into tubes located at the prostate (with and without the GNPs) and bladder locations of the phantom. The phantom was irradiated with an Ir-192 source and the 18 MV beam of a Varian linac separately, based on common radiotherapy procedures used for prostate cancer. After 24 hours, the irradiated gels were read using a Siemens 1.5 Tesla MRI scanner. The absolute doses at the reference points and the isodose curves resulting from the experimental measurements of the gels and the MC simulations of the internal and external radiotherapy practices were compared. The mean absorbed doses measured with the gel in the presence of the GNPs in the prostate were 15% and 8% higher than the corresponding values without the GNPs under the internal and external radiation therapies, respectively. MC simulations also indicated dose increases of 14% and 7% due to the presence of the GNPs for the same internal and external radiotherapy practices, respectively. There was good agreement between the dose enhancement factors (DEFs) estimated with the MC simulations and the experimental gel measurements. The results indicate that the polymer gel dosimetry method, as developed and used in this study, can be recommended as a reliable method for investigating the DEF of GNPs in internal and external radiotherapy practices.

  8. Monte Carlo Simulations of Radiative and Neutrino Transport under Astrophysical Conditions

    NASA Astrophysics Data System (ADS)

    Krivosheyev, Yu. M.; Bisnovatyi-Kogan, G. S.

    2018-05-01

    Monte Carlo simulations are utilized to model radiative and neutrino transfer in astrophysics. An algorithm that can be used to study radiative transport in astrophysical plasma, based on simulations of photon trajectories in a medium, is described. Formation of the hard X-ray spectrum of the Galactic microquasar SS 433 is considered in detail as an example. Specific requirements for applying such simulations to neutrino transport in a dense medium, and the algorithmic differences compared to photon transport, are discussed.

  9. A review: Functional near infrared spectroscopy evaluation in muscle tissues using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Halim, A. A. A.; Laili, M. H.; Salikin, M. S.; Rusop, M.

    2018-05-01

    Monte Carlo simulation, based on counting large numbers of simulated photons, can solve the propagation of light inside tissue, including the absorption and scattering coefficients, and serves as a preliminary study for functional near infrared applications. The goal of this paper is to identify the optical properties, using Monte Carlo simulation, for non-invasive functional near infrared spectroscopy (fNIRS) evaluation of penetration depth in human muscle. The paper describes the NIRS principle and the basis for its proposed use in Monte Carlo simulation, focusing on several important parameters, including ATP and ADP, and relating them to blood flow and oxygen content at given exercise intensities. The advantages and limitations of such an application of this simulation are covered. The results may help to establish that human muscle is transparent to the near infrared region and could deliver substantial information regarding the oxygenation level in human muscle. This might be useful as a non-invasive technique for detecting oxygen status in the muscle of living people, whether athletes or working people, allowing many investigations of muscle physiology in the future.

  10. Result of Monte-Carlo simulation of electron-photon cascades in lead and layers of lead-scintillator

    NASA Technical Reports Server (NTRS)

    Wasilewski, A.; Krys, E.

    1985-01-01

    Results of Monte-Carlo simulations of electromagnetic cascade development in lead and lead-scintillator sandwiches are analyzed. It is demonstrated that the structure function for the core approximation is not applicable when the primary energy is higher than 100 GeV. The simulation data have shown that introducing an inhomogeneous chamber structure results in a subsequent reduction of secondary particles.

  11. Assessment of radiation exposure in dental cone-beam computerized tomography with the use of metal-oxide semiconductor field-effect transistor (MOSFET) dosimeters and Monte Carlo simulations.

    PubMed

    Koivisto, J; Kiljunen, T; Tapiovaara, M; Wolff, J; Kortesniemi, M

    2012-09-01

    The aims of this study were to assess the organ and effective dose (International Commission on Radiological Protection (ICRP) 103) resulting from dental cone-beam computerized tomography (CBCT) imaging using a novel metal-oxide semiconductor field-effect transistor (MOSFET) dosimeter device, and to assess the reliability of the MOSFET measurements by comparing the results with Monte Carlo PCXMC simulations. Organ dose measurements were performed using 20 MOSFET dosimeters embedded in the 8 most radiosensitive organs in the maxillofacial and neck area. The dose-area product (DAP) values obtained from the CBCT scans were used for the PCXMC simulations. The acquired MOSFET doses were then compared with the Monte Carlo simulations. The effective dose measurements using MOSFET dosimeters, taken in 0.5-cm steps, yielded a value of 153 μSv, and the PCXMC simulations resulted in a value of 136 μSv. The MOSFET dosimeters placed in a head phantom gave results similar to the Monte Carlo simulations. Minor vertical changes in the positioning of the phantom had a substantial effect on the overall effective dose. Therefore, MOSFET dosimeters constitute a feasible method for dose assessment of CBCT units in the maxillofacial region.

  12. Improved diffusion Monte Carlo propagators for bosonic systems using Itô calculus

    NASA Astrophysics Data System (ADS)

    Håkansson, P.; Mella, M.; Bressanini, Dario; Morosi, Gabriele; Patrone, Marta

    2006-11-01

    The construction of importance sampled diffusion Monte Carlo (DMC) schemes accurate to second order in the time step is discussed. A central aspect in obtaining efficient second order schemes is the numerical solution of the stochastic differential equation (SDE) associated with the Fokker-Planck equation responsible for the importance sampling procedure. In this work, stochastic predictor-corrector schemes solving the SDE and consistent with Itô calculus are used in DMC simulations of helium clusters. These schemes are numerically compared with alternative algorithms obtained by splitting the Fokker-Planck operator, an approach that we analyze using the analytical tools provided by Itô calculus. The numerical results show that predictor-corrector methods are indeed accurate to second order in the time step and that they present a smaller time step bias and better efficiency than second order split-operator derived schemes when computing ensemble averages for bosonic systems. The possible extension of the predictor-corrector methods to higher orders is also discussed.
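
    The flavor of a predictor-corrector SDE step can be seen on a toy problem. The sketch below applies a Heun-type predictor-corrector scheme, reusing the same Wiener increment in both stages, to an Ornstein-Uhlenbeck process with additive noise; it is a generic illustration, not the paper's DMC propagator.

    import numpy as np

    rng = np.random.default_rng(4)

    def drift(x):                 # Ornstein-Uhlenbeck drift a(x) = -x
        return -x

    b, dt, n_steps, n_paths = 0.5, 0.01, 1_000, 5_000
    x = np.ones(n_paths)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x_pred = x + drift(x) * dt + b * dW                     # predictor (Euler-Maruyama)
        x = x + 0.5 * (drift(x) + drift(x_pred)) * dt + b * dW  # corrector, same dW

    # Stationary variance of this process is b**2 / 2 = 0.125
    print("sample variance:", x.var())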

  13. Monte Carlo calculations for reporting patient organ doses from interventional radiology

    NASA Astrophysics Data System (ADS)

    Huo, Wanli; Feng, Mang; Pi, Yifei; Chen, Zhi; Gao, Yiming; Xu, X. George

    2017-09-01

    This paper describes a project to generate organ dose data for the purpose of extending the VirtualDose software from CT imaging to interventional radiology (IR) applications. A library of 23 mesh-based anthropometric patient phantoms was used in Monte Carlo simulations for the database calculations. Organ doses and effective doses of IR procedures with specific beam projection, field of view (FOV) and beam quality were obtained for all parts of the body. When organ doses generated by VirtualDose-IR were compared across different beam qualities, beam projections, patient ages, and patient body mass indexes (BMIs), significant discrepancies were observed. For the relatively long exposures typical of IR, doses depend on beam quality, beam direction and patient size. Therefore VirtualDose-IR, which is based on the latest anatomically realistic patient phantoms, can generate accurate doses for IR treatment. This software is suitable for clinical IR dose management as an effective tool to estimate patient doses and optimize IR treatment plans.

  14. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

    We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC method is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.
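
    The core MLMC identity is E[P_fine] = E[P_coarse] + E[P_fine - P_coarse], with the correction term estimated from far fewer samples because the coupled difference has small variance. A two-level sketch on a toy time-stepped model (an assumption, purely for illustration, not the rMLMC rescaling) follows.

    import numpy as np

    rng = np.random.default_rng(5)

    def solve(z, n_steps):
        """Toy solver: Euler integration of dy/dt = -y + z on [0, 1]."""
        y, dt = 0.0, 1.0 / n_steps
        for _ in range(n_steps):
            y += dt * (-y + z)
        return y

    # Level 0: many cheap coarse runs; level 1: few coupled coarse/fine pairs.
    z0 = rng.normal(size=20_000)
    level0 = np.mean([solve(z, 4) for z in z0])
    z1 = rng.normal(size=200)
    correction = np.mean([solve(z, 64) - solve(z, 4) for z in z1])

    print("two-level MLMC estimate:", level0 + correction)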

  15. Contact radiotherapy using a 50 kV X-ray system: Evaluation of relative dose distribution with the Monte Carlo code PENELOPE and comparison with measurements

    NASA Astrophysics Data System (ADS)

    Croce, Olivier; Hachem, Sabet; Franchisseur, Eric; Marcié, Serge; Gérard, Jean-Pierre; Bordy, Jean-Marc

    2012-06-01

    This paper presents a dosimetric study of the system named "Papillon 50" used in the department of radiotherapy of the Centre Antoine-Lacassagne, Nice, France. The machine provides a 50 kVp X-ray beam, currently used to treat rectal cancers. The system can be mounted with various applicators of different diameters or shapes. These applicators can be fixed over the main rod tube of the unit in order to deliver the prescribed absorbed dose to the tumor with an optimal distribution. We have analyzed depth dose curves and dose profiles for the naked tube and for a set of three applicators. Dose measurements were made with an ionization chamber (PTW type 23342) and Gafchromic films (EBT2). We have also compared the measurements with simulations performed using the Monte Carlo code PENELOPE. Simulations were performed with a detailed geometrical description of the experimental setup and with sufficient statistics. The simulation results agree with the experimental measurements and provide an accurate evaluation of the dose delivered. The depths of the 50% isodose in water for the various applicators are 4.0, 6.0, 6.6 and 7.1 mm. The Monte Carlo PENELOPE simulations are thus in accordance with measurements for a 50 kV X-ray system, and are able to confirm the measurements provided by Gafchromic films or ionization chambers. The results also demonstrate that Monte Carlo simulations could be helpful in validating future applicators designed for other localizations such as breast or skin cancers. Furthermore, Monte Carlo simulations could be a reliable alternative for a rapid evaluation of the dose delivered by such a system with its multiple applicator designs.

  16. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

    When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing the equality of treatments, and interval estimators for the ratio of mean frequencies between treatments, under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures from contingency table analysis. We also demonstrate that interval estimators based on the WLS method and interval estimators based on the Mantel-Haenszel (MH) approach both perform well, and are essentially of equal precision with respect to average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators.
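
    Monte Carlo evaluation of a test's Type I error follows a simple pattern: simulate under the null, apply the test, and count rejections. The sketch below does this for a basic two-treatment Poisson comparison using the conditional binomial test; it mirrors the style of evaluation described, not the paper's WLS or MH procedures, and all rates and sample sizes are assumptions.

    import numpy as np
    from scipy.stats import binomtest

    rng = np.random.default_rng(6)
    lam, n_patients, n_sim, alpha = 2.0, 30, 2_000, 0.05

    rejections = 0
    for _ in range(n_sim):
        x = int(rng.poisson(lam, n_patients).sum())   # event total, treatment 1
        y = int(rng.poisson(lam, n_patients).sum())   # event total, treatment 2
        # Conditional on x + y, x ~ Binomial(x + y, 1/2) under equal means.
        if binomtest(x, x + y, 0.5).pvalue < alpha:
            rejections += 1

    print("empirical Type I error:", rejections / n_sim)   # should be near 0.05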

  17. PDF approach for compressible turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.

    1993-01-01

    The objective of the present work is to develop a probability density function (pdf) turbulence model for compressible reacting flows for use with a CFD flow solver. The probability density functions of the species mass fractions and enthalpy are obtained by solving a pdf evolution equation using a Monte Carlo scheme. The pdf solution procedure is coupled with a compressible CFD flow solver which provides the velocity and pressure fields. A modeled pdf equation for compressible flows, capable of capturing shock waves and suitable for the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed, and an averaging procedure is developed to provide smooth Monte Carlo solutions and ensure convergence. Two supersonic diffusion flames are studied using the proposed pdf model and the results are compared with experimental data; marked improvements over CFD solutions without the pdf model are observed. Preliminary applications of the pdf method to 3D flows are also reported.

  18. A Fast Monte Carlo Simulation for the International Linear Collider Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furse, D.; /Georgia Tech

    2005-12-15

    The following paper contains details concerning the motivation for, implementation and performance of a Java-based fast Monte Carlo simulation for a detector designed to be used in the International Linear Collider. This simulation, presently included in the SLAC ILC group's org.lcsim package, reads in standard model or SUSY events in STDHEP file format, stochastically simulates the blurring in physics measurements caused by intrinsic detector error, and writes out an LCIO format file containing a set of final particles statistically similar to those that would have been found by a full Monte Carlo simulation. In addition to the reconstructed particles themselves, descriptions of the calorimeter hit clusters and tracks that these particles would have produced are also included in the LCIO output. These output files can then be put through various analysis codes in order to characterize the effectiveness of a hypothetical detector at extracting relevant physical information about an event. Such a tool is extremely useful in preliminary detector research and development, as full simulations are extremely cumbersome and taxing on processor resources; a fast, efficient Monte Carlo can facilitate and even make possible detector physics studies that would be very impractical with the full simulation, by sacrificing what is in many cases inappropriate attention to detail for valuable gains in the time required for results.

  19. Probabilistic approach of resource assessment in Kerinci geothermal field using numerical simulation coupling with monte carlo simulation

    NASA Astrophysics Data System (ADS)

    Hidayat, Iki; Sutopo; Pratama, Heru Berian

    2017-12-01

    The Kerinci geothermal field is a one-phase liquid reservoir system in the Kerinci District, in the western part of Jambi Province. In this field, there are geothermal prospects identified by heat-source upflow inside a National Park area. The Kerinci field is planned for a 1×55 MWe development by Pertamina Geothermal Energy. To characterize the reservoir, a numerical simulation of the Kerinci field was developed using the TOUGH2 software with information from the conceptual model. Pressure and temperature well profiles from KRC-B1 were validated against the simulation data to reach the natural-state condition, with good agreement. Based on the natural-state simulation, the resource of the Kerinci geothermal field was estimated using Monte Carlo simulation, yielding P10, P50 and P90 values of 49.4 MW, 64.3 MW and 82.4 MW, respectively. This paper is the first study in which the resource has been successfully estimated for the Kerinci Geothermal Field using numerical simulation coupled with Monte Carlo simulation.
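
    The Monte Carlo step of such a resource assessment amounts to propagating sampled inputs through a power model and reading off percentiles. The sketch below uses a drastically simplified volumetric model with invented distributions; only the P10/P50/P90 mechanics carry over.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000
    area = rng.triangular(5.0, 10.0, 20.0, n)      # reservoir area (km2), assumed
    thickness = rng.triangular(1.0, 1.5, 2.5, n)   # reservoir thickness (km), assumed
    power_density = rng.uniform(2.0, 5.0, n)       # lumped recoverable MWe per km3, assumed

    power = area * thickness * power_density       # toy volumetric power model
    p10, p50, p90 = np.percentile(power, [10, 50, 90])
    print(f"P10 = {p10:.1f} MW, P50 = {p50:.1f} MW, P90 = {p90:.1f} MW")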

  20. Massively parallelized Monte Carlo software to calculate the light propagation in arbitrarily shaped 3D turbid media

    NASA Astrophysics Data System (ADS)

    Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin

    2017-07-01

    The Monte Carlo method is often referred to as the gold standard for calculating light propagation in turbid media [1]. Especially for complex shaped geometries, where no analytical solutions are available, the Monte Carlo method becomes very important [1, 2]. In this work a Monte Carlo software package is presented for simulating light propagation in complex shaped geometries. To improve the simulation time the code is based on OpenCL, so that graphics cards as well as other computing devices can be used. Within the software an illumination concept is presented to easily realize all kinds of light sources, such as spatial frequency domain (SFD), optical fibers or Gaussian beam profiles. Moreover, different objects that are not connected to each other can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.
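
    The kernel of any such photon Monte Carlo is the sampling loop: free path lengths drawn from the Beer-Lambert law, absorption handled by a photon weight, and a new direction drawn at each scattering event. The sketch below runs this loop for a 1-D slab with assumed optical properties; it is a toy analogue, not the OpenCL code described.

    import numpy as np

    rng = np.random.default_rng(8)
    mu_a, mu_s, thickness = 0.1, 10.0, 1.0     # 1/mm, 1/mm, mm (assumed values)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t

    n_photons, transmitted = 5_000, 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0               # depth, direction cosine, weight
        while w > 1e-4:
            z += uz * (-np.log(rng.random()) / mu_t)   # sample free path length
            if z >= thickness:
                transmitted += w               # photon weight exits the far side
                break
            if z < 0.0:
                break                          # escaped through the front face
            w *= albedo                        # deposit the absorbed fraction
            uz = 2.0 * rng.random() - 1.0      # isotropic scattering direction

    print("diffuse transmittance ~", transmitted / n_photons)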

  1. Size-dependent phase diagrams of metallic alloys: A Monte Carlo simulation study on order–disorder transitions in Pt–Rh nanoparticles

    PubMed Central

    Stahl, Christian; Albe, Karsten

    2012-01-01

    Nanoparticles of Pt–Rh were studied by means of lattice-based Monte Carlo simulations with respect to the stability of the ordered D022 and "40" phases as a function of particle size and composition. By thermodynamic integration in the semi-grand canonical ensemble, phase diagrams for particles with diameters of 7.8 nm, 4.3 nm and 3.1 nm were obtained. Size-dependent trends such as the lowering of the critical ordering temperature, the broadening of the compositional stability range of the ordered phases, and the narrowing of the two-phase regions were observed and discussed in the context of complete size-dependent nanoparticle phase diagrams. In addition, an ordered surface phase emerges at low temperatures and low platinum concentration. A decrease of platinum surface segregation with increasing global platinum concentration was observed when a second, ordered phase forms inside the core of the particle. The order–disorder transitions were analyzed in terms of the Warren–Cowley short-range order parameters. Concentration-averaged short-range order parameters were used to remove the surface segregation bias of the conventional short-range order parameters. Using this procedure, it was shown that the short-range order in the particles at high temperatures is bulk-like. PMID:22428091

  2. Monte Carlo Analysis of Molecule Absorption Probabilities in Diffusion-Based Nanoscale Communication Systems with Multiple Receivers.

    PubMed

    Arifler, Dogu; Arifler, Dizem

    2017-04-01

    For biomedical applications of nanonetworks, employing molecular communication for information transport is advantageous over nano-electromagnetic communication: molecular communication is potentially biocompatible and inherently energy-efficient. Recently, several studies have modeled receivers in diffusion-based molecular communication systems as "perfectly monitoring" or "perfectly absorbing" spheres based on idealized descriptions of chemoreception. In this paper, we focus on perfectly absorbing receivers and present methods to improve the accuracy of simulation procedures that are used to analyze these receivers. We employ schemes available from the chemical physics and biophysics literature and outline a Monte Carlo simulation algorithm that accounts for the possibility of molecule absorption during discrete time steps, leading to a more accurate analysis of absorption probabilities. Unlike most existing studies that consider a single receiver, this paper analyzes absorption probabilities for multiple receivers deterministically or randomly deployed in a region. For random deployments, the ultimate absorption probabilities as a function of transmitter-receiver distance are shown to fit well to power laws; the exponents derived become more negative as the number of receivers increases up to a limit beyond which no additional receivers can be "packed" in the deployment region. This paper is expected to impact the design of molecular nanonetworks with multiple absorbing receivers.
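
    The discretization issue the paper addresses is easy to see in a naive simulation: checking molecule positions only at the end of each time step misses absorptions that occur within a step. The sketch below implements exactly that naive scheme for one perfectly absorbing sphere (all physical values assumed), which is the baseline the authors' within-step absorption accounting improves upon.

    import numpy as np

    rng = np.random.default_rng(9)
    D, dt, n_steps = 1e-10, 1e-5, 2_000    # diffusion coeff (m2/s), step (s)
    r_rx, d0, n_mol = 1e-6, 4e-6, 20_000   # receiver radius, start distance (m)

    pos = np.zeros((n_mol, 3))
    pos[:, 0] = d0
    absorbed = np.zeros(n_mol, dtype=bool)
    step_sd = np.sqrt(2.0 * D * dt)
    for _ in range(n_steps):
        active = ~absorbed
        pos[active] += rng.normal(0.0, step_sd, (int(active.sum()), 3))
        absorbed |= np.linalg.norm(pos, axis=1) < r_rx   # end-of-step check only

    print("fraction absorbed:", absorbed.mean())
    # For a single sphere, the ultimate (infinite-time) probability is r_rx / d0 = 0.25.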

  3. Use of population pharmacokinetic modeling and Monte Carlo simulation to capture individual animal variability in the prediction of flunixin withdrawal times in cattle.

    PubMed

    Wu, H; Baynes, R E; Leavens, T; Tell, L A; Riviere, J E

    2013-06-01

    The objective of this study was to develop a population pharmacokinetic (PK) model and predict tissue residues and the withdrawal interval (WDI) of flunixin in cattle. Data were pooled from published PK studies in which flunixin was administered through various dosage regimens to diverse populations of cattle. A set of liver data used to establish the regulatory label withdrawal time (WDT) was also used in this study. Compartmental models with first-order absorption and elimination were fitted to plasma and liver concentrations by a population PK modeling approach. Monte Carlo simulations were performed with the population means and variabilities of the PK parameters to predict liver concentrations of flunixin. The PK of flunixin was best described by a 3-compartment model with an extra liver compartment. The WDI estimated in this study with liver data only was the same as the label WDT. However, a longer WDI was estimated when both plasma and liver data were included in the population PK model. This study questions the use of small groups of healthy animals to determine WDTs for drugs intended for administration to large, diverse populations, and may warrant a reevaluation of the current procedure for establishing WDTs to prevent violative residues of flunixin.
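
    The Monte Carlo step of such an analysis can be sketched as sampling individual-animal depletion parameters from a population distribution and locating the time at which an upper percentile of tissue concentration falls below tolerance. All numbers below (tolerance, population means, variabilities, and the mono-exponential depletion form) are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(10)
    n_animals, tolerance = 10_000, 0.125   # hypothetical liver tolerance (ug/g)

    # Individual-animal initial liver concentration and half-life (invented).
    c0 = rng.lognormal(np.log(20.0), 0.4, n_animals)
    t_half = rng.lognormal(np.log(10.0), 0.3, n_animals)    # hours
    k = np.log(2.0) / t_half

    t = np.arange(0.0, 480.0, 1.0)                          # hours
    conc = c0[:, None] * np.exp(-k[:, None] * t[None, :])   # mono-exponential depletion
    p99 = np.percentile(conc, 99, axis=0)                   # 99th-percentile animal
    wdi = t[np.argmax(p99 < tolerance)]                     # first time below tolerance
    print(f"estimated withdrawal interval ~ {wdi:.0f} h")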

  4. Theoretical Grounds for the Propagation of Uncertainties in Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Saracco, Paolo; Pia, Maria Grazia; Batic, Matej

    2014-04-01

    We introduce a theoretical framework for the calculation of uncertainties affecting observables produced by Monte Carlo particle transport, which derive from uncertainties in the physical parameters input into the simulation. The theoretical developments are complemented by a heuristic application, which illustrates the method of calculation in a streamlined simulation environment.

  5. Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water

    ERIC Educational Resources Information Center

    Gergely, John Robert

    2009-01-01

    Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…

  6. Estimating Uncertainty in N2O Emissions from US Cropland Soils

    USDA-ARS?s Scientific Manuscript database

    A Monte Carlo analysis was combined with an empirically-based approach to quantify uncertainties in soil N2O emissions from US croplands estimated with the DAYCENT simulation model. Only a subset of croplands was simulated in the Monte Carlo analysis which was used to infer uncertainties across the ...

  7. Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study

    ERIC Educational Resources Information Center

    Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick

    2017-01-01

    This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…

  8. Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm

    ERIC Educational Resources Information Center

    Stewart, Wayne; Stewart, Sepideh

    2014-01-01

    For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
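
    Those basic ideas fit in a dozen lines: propose a symmetric random-walk move, accept it with the Metropolis probability, and otherwise keep the current state. The sketch below samples a standard normal target; step size and chain length are arbitrary teaching choices.

    import numpy as np

    rng = np.random.default_rng(11)

    def log_target(x):
        return -0.5 * x**2   # unnormalized log density of N(0, 1)

    x, chain = 0.0, []
    for _ in range(50_000):
        proposal = x + rng.normal(0.0, 1.0)   # symmetric random-walk proposal
        if np.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal                      # accept
        chain.append(x)                       # on rejection, repeat current x

    chain = np.array(chain[5_000:])           # discard burn-in
    print("chain mean and sd:", chain.mean(), chain.std())   # ~0 and ~1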

  9. Monte Carlo simulation models of breeding-population advancement.

    Treesearch

    J.N. King; G.R. Johnson

    1993-01-01

    Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...

  10. Levofloxacin Penetration into Epithelial Lining Fluid as Determined by Population Pharmacokinetic Modeling and Monte Carlo Simulation

    PubMed Central

    Drusano, G. L.; Preston, S. L.; Gotfried, M. H.; Danziger, L. H.; Rodvold, K. A.

    2002-01-01

    Levofloxacin was administered orally to steady state in volunteers randomized to doses of 500 and 750 mg. Plasma and epithelial lining fluid (ELF) samples were obtained at 4, 12, and 24 h after the final dose. All data were comodeled in a population pharmacokinetic analysis employing BigNPEM. Penetration was evaluated from the population mean parameter vector values and from the results of a 1,000-subject Monte Carlo simulation. Evaluation from the population mean values demonstrated a penetration ratio (ELF/plasma) of 1.16. The Monte Carlo simulation provided a measure of dispersion, demonstrating a mean ratio of 3.18, with a median of 1.43 and a 95% confidence interval of 0.14 to 19.1. Population analysis with Monte Carlo simulation provides the best and least-biased estimate of penetration. It also demonstrates clearly that we can expect differences in penetration between patients. This analysis did not deal with inflammation, as it was performed in volunteers; the influence of lung pathology on penetration remains to be examined. PMID:11796385

  11. Geant4 hadronic physics for space radiation environment.

    PubMed

    Ivantchenko, Anton V; Ivanchenko, Vladimir N; Molina, Jose-Manuel Quesada; Incerti, Sebastien L

    2012-01-01

    To test and develop Geant4 (Geometry And Tracking version 4) Monte Carlo hadronic models with a focus on applications in a space radiation environment. The Monte Carlo simulations have been performed using the Geant4 toolkit. Binary (BIC), its extension for incident light ions (BIC-ion), and Bertini (BERT) cascades were used as the main Monte Carlo generators; for comparison purposes, some other models were tested too. The hadronic testing suite has been used as the primary tool for model development and validation against experimental data. The Geant4 pre-compound (PRECO) and de-excitation (DEE) models were revised and improved. Proton, neutron, pion, and ion nuclear interactions were simulated with the recent version of Geant4 9.4 and compared with experimental data from thin and thick target experiments. The Geant4 toolkit offers a large set of models allowing effective simulation of the interactions of particles with matter. We have tested different Monte Carlo generators with our hadronic testing suite, and accordingly we propose an optimal configuration of Geant4 models for the simulation of the space radiation environment.

  12. Markov chain Monte Carlo techniques applied to parton distribution functions determination: Proof of concept

    NASA Astrophysics Data System (ADS)

    Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane

    2017-07-01

    We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded in statistical methods, and in Bayesian inference in particular, thus offering additional insight into the rich field of PDF determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement, namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDF determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance rate. A first feasibility study is performed and presented, indicating that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
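
    The leapfrog-plus-accept/reject structure of Hybrid/Hamiltonian Monte Carlo can be made concrete on a one-dimensional Gaussian target, as in the sketch below; a real PDF fit would replace the toy log-density with the global analysis χ2 surface, so everything here is illustrative.

    import numpy as np

    rng = np.random.default_rng(12)

    def grad_u(q):        # target N(0, 1): U(q) = q**2 / 2, so dU/dq = q
        return q

    def hmc_step(q, eps=0.2, n_leap=20):
        p = rng.normal()                                  # fresh momentum
        q_new, p_new = q, p - 0.5 * eps * grad_u(q)       # initial half kick
        for _ in range(n_leap):
            q_new += eps * p_new                          # drift
            p_new -= eps * grad_u(q_new)                  # kick
        p_new += 0.5 * eps * grad_u(q_new)                # undo half of last kick
        h_old = 0.5 * q**2 + 0.5 * p**2                   # H = U + kinetic energy
        h_new = 0.5 * q_new**2 + 0.5 * p_new**2
        return q_new if np.log(rng.random()) < h_old - h_new else q

    q, samples = 0.0, []
    for _ in range(5_000):
        q = hmc_step(q)
        samples.append(q)
    print("mean and sd:", np.mean(samples), np.std(samples))   # ~0 and ~1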

  13. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with the software package Denovo, which solves the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations, and a quantitative comparison was made of the results obtained by the two methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo method was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulations with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study; the single-thread computation time of this deterministic simulation was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and its primary benefit lies in its rapid computation speed. It is expected that further optimization of this method for routine clinical CT dose estimation will improve its accuracy and speed.

  14. Window for Optimal Frequency Operation and Reliability of 3DEG and 2DEG Channels for Oxide Microwave MESFETs and HFETs

    DTIC Science & Technology

    2016-04-01

    Monte Carlo simulations of electron transport, noise, and energy relaxation are reported for doped zinc-oxide and structured ZnO transistor materials with a 2-D electron gas (2DEG) channel, including hot-phonon effects at electron gas densities from 1×10^17 cm-3 to 1×10^18 cm-3. (The remainder of this record was lost to extraction; recovered figure caption: "Monte Carlo simulation of density-dependent hot-electron energy".)

  15. The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.

    2014-09-15

    The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation, which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We then proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.

  16. Probabilistic calibration of the distributed hydrological model RIBS applied to real-time flood forecasting: the Harod river basin case study (Israel)

    NASA Astrophysics Data System (ADS)

    Nesti, Alice; Mediero, Luis; Garrote, Luis; Caporali, Enrica

    2010-05-01

    An automatic probabilistic calibration method for distributed rainfall-runoff models is presented. The high number of parameters in distributed hydrologic models makes special demands on the optimization procedure used to estimate model parameters. With the proposed technique it is possible to reduce the complexity of calibration while maintaining adequate model predictions. The first step of the calibration procedure, for the main model parameters, is done manually with the aim of identifying their variation ranges. Afterwards a Monte-Carlo technique is applied, which consists of repeated model simulations with randomly generated parameters. The Monte Carlo Analysis Toolbox (MCAT) includes a number of analysis methods to evaluate the results of these Monte Carlo parameter sampling experiments. The study investigates the use of a global sensitivity analysis as a screening tool to reduce the parametric dimensionality of multi-objective hydrological model calibration problems, while maximizing the information extracted from hydrological response data. The method is applied to the calibration of the RIBS flood forecasting model in the Harod river basin, located in Israel. The Harod basin has an area of 180 km2. The catchment has a Mediterranean climate and is mainly characterized by a desert landscape, with a soil that is able to absorb large quantities of rainfall and at the same time capable of generating high peaks of discharge. Radar rainfall data with 6 minute temporal resolution are available as input to the model. The aim of the study is the validation of the model for real-time flood forecasting, in order to evaluate the benefits of improved precipitation forecasting within the FLASH European project.

  17. Monte Carlo and Phantom Study of the Radiation Dose to the Body from Dedicated Computed Tomography of the Breast

    PubMed Central

    Sechopoulos, Ioannis; Vedantham, Srinivasan; Suryanarayanan, Sankararaman; D’Orsi, Carl J.; Karellas, Andrew

    2008-01-01

    Purpose To prospectively determine the radiation dose absorbed by the organs and tissues of the body during a dedicated computed tomography of the breast (DBCT) study, using Monte Carlo methods and a phantom. Materials and Methods Using the Geant4 Monte Carlo toolkit, the Cristy anthropomorphic phantom and the geometry of a prototype DBCT system were simulated. The simulation was used to track x-rays emitted from the source until their complete absorption or exit from the simulation limits. The interactions of the x-rays that resulted in energy deposition in the 65 different volumes representing organs, bones and other tissues of the anthropomorphic phantom were recorded. These data were used to compute the radiation dose to the organs and tissues during a complete DBCT acquisition relative to the average glandular dose to the imaged breast (ROD, relative organ dose), using the x-ray spectra proposed for DBCT imaging. The effectiveness of a lead shield for reducing the dose to the organs was also investigated. Results The maximum ROD among the organs was for the ipsilateral lung, with a maximum of 3.25%, followed by the heart and the thymus. Of the skeletal tissues, the sternum received the highest dose, with a maximum ROD to the bone marrow of 2.24% and to the bone surface of 7.74%. The maximum ROD to the uterus, representative of that of an early-stage fetus, was 0.026%. These maxima occurred for the highest energy x-ray spectrum (80 kVp) analyzed. A lead shield does not substantially protect the organs that receive the highest dose from DBCT. Discussion Although the doses to the organs from DBCT are substantially higher than those from planar mammography, they are comparable to or considerably lower than those of other radiographic procedures, and much lower than those of other CT examinations. PMID:18292479

  18. Exact special twist method for quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Dagrada, Mario; Karakuzu, Seher; Vildosola, Verónica Laura; Casula, Michele; Sorella, Sandro

    2016-12-01

    We present a systematic investigation of the special twist method introduced by Rajagopal et al. [Phys. Rev. B 51, 10591 (1995), 10.1103/PhysRevB.51.10591] for reducing finite-size effects in correlated calculations of periodic extended systems with Coulomb interactions and Fermi statistics. We propose a procedure for finding special twist values which, at variance with previous applications of this method, reproduce the energy of the mean-field infinite-size limit solution within an adjustable (arbitrarily small) numerical error. This choice of the special twist is shown to be the most accurate single-twist solution for curing one-body finite-size effects in correlated calculations. For these reasons we dubbed our procedure "exact special twist" (EST). EST only needs a fully converged independent-particles or mean-field calculation within the primitive cell and a simple fit to find the special twist along a specific direction in the Brillouin zone. We first assess the performance of EST in a simple correlated model such as the three-dimensional electron gas. Afterwards, we test its efficiency within ab initio quantum Monte Carlo simulations of metallic elements of increasing complexity. We show that EST displays an overall good performance in reducing finite-size errors comparable to the widely used twist average technique but at a much lower computational cost since it involves the evaluation of just one wave function. We also demonstrate that the EST method shows similar performance in the calculation of correlation functions, such as the ionic forces for structural relaxation and the pair radial distribution function in liquid hydrogen. Our conclusions point to the usefulness of EST for correlated supercell calculations; our method will be particularly relevant when the physical problem under consideration requires large periodic cells.

  19. SU-E-T-586: Field Size Dependence of Output Factor for Uniform Scanning Proton Beams: A Comparison of TPS Calculation, Measurement and Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Y; Singh, H; Islam, M

    2014-06-01

    Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculation, are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning beams and to compare it among TPS calculation, measurement and Monte Carlo simulation. Methods: Field size dependence was studied using various field sizes between 2.5 cm and 10 cm in diameter. The field size factor was studied for a number of proton range and modulation combinations based on output at the center of the spread out Bragg peak, normalized to a 10 cm diameter field. Three methods were used and compared in this study: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlo simulation. The XiO TPS (Elekta, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the Fluka code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease by over 10% from a 10 cm to a 3 cm diameter field for a large range proton beam. The XiO TPS predicted the field size factor relatively well at large field sizes, but could differ from measurements by 5% or more for small field and large range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: The output factor can vary largely with field size and needs to be accounted for to ensure accurate proton beam delivery. This is especially important for small field beams, such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination in such cases.

  20. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    PubMed Central

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2018-01-01

    The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
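
    The spectrum-derivation step can be sketched compactly. In the toy Python example below, a PDD curve is modeled as a weighted sum of exponentially attenuated spectral bins and the weights are recovered with SciPy's Levenberg-Marquardt solver; the depth grid, attenuation coefficients and bin count are illustrative assumptions, not the authors' actual kernels.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        depth = np.linspace(0.5, 20.0, 40)        # sampling depths in water (cm)
        mu = np.array([0.25, 0.20, 0.15, 0.10])   # hypothetical attenuation
                                                  # coefficients for 4 energy bins

        def pdd_model(weights, depth):
            # PDD as a weighted sum of exponentially attenuated spectral bins.
            return np.exp(-np.outer(depth, mu)) @ weights

        true_weights = np.array([0.1, 0.3, 0.4, 0.2])
        measured = pdd_model(true_weights, depth)
        measured *= 1.0 + 0.01 * rng.standard_normal(depth.size)  # 1% noise

        fit = least_squares(lambda w: pdd_model(w, depth) - measured,
                            x0=np.full(4, 0.25), method="lm")
        print("recovered spectral weights:", np.round(fit.x, 3))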

  1. SU-F-T-657: In-Room Neutron Dose From High Energy Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christ, D; Ding, G

    Purpose: To estimate neutron dose inside the treatment room from photodisintegration events in high energy photon beams using Monte Carlo simulations and experimental measurements. Methods: The Monte Carlo code MCNP6 was used for the simulations. An Eberline ESP-1 Smart Portable Neutron Detector was used to measure neutron dose. A water phantom was centered at isocenter on the treatment couch, and the detector was placed near the phantom. A Varian 2100EX linear accelerator delivered an 18MV open field photon beam to the phantom at 400MU/min, and a camera captured the detector readings. The experimental setup was modeled in the Monte Carlo simulation. The source was modeled for two extreme cases: a) a hemispherical photon source emitting from the target and b) a cone source with the angle of the primary collimator cone. The model includes the target, primary collimator, flattening filter, secondary collimators, water phantom, detector and concrete walls. Energy deposition tallies were scored for neutrons in the detector and for photons at the center of the phantom. Results: For an 18MV beam with an open 10cm by 10cm field and the gantry at 180°, the Monte Carlo simulations predict the neutron dose in the detector to be 0.11% of the photon dose in the water phantom for case a) and 0.01% for case b). The measured neutron dose is 0.04% of the photon dose. Considering the range of neutron dose predicted by the Monte Carlo simulations, the calculated results are in good agreement with measurements. Conclusion: We calculated in-room neutron dose using Monte Carlo techniques, and the predicted neutron dose was confirmed by experimental measurements. If the source is remodeled as an electron beam hitting the target, for a more accurate representation of the bremsstrahlung fluence, the Monte Carlo simulations could be used to aid shielding designs.

  2. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    NASA Astrophysics Data System (ADS)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.

  3. LLNL Mercury Project Trinity Open Science Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brantley, Patrick; Dawson, Shawn; McKinley, Scott

    2016-04-20

    The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate the convergence of the relevant simulation quantities with Monte Carlo particle count in order to assess the current simulation methodology. Both for this application space and for more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine, as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.

  4. Analytic boosted boson discrimination

    DOE PAGES

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff

    2016-05-20

    Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all-orders factorization theorem, we enable a systematically improvable description and allow for precision comparisons between data, Monte Carlo, and first-principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions, which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. Our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.

  6. Determining the Statistical Power of the Kolmogorov-Smirnov and Anderson-Darling Goodness-of-Fit Tests via Monte Carlo Simulation

    DTIC Science & Technology

    2016-12-01

    KS and AD Statistical Power via Monte Carlo Simulation. Statistical power is the probability of correctly rejecting the null hypothesis when the... real-world data to test the accuracy of the simulation. Statistical comparison of these metrics can be necessary when making such a determination.
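
    The core of such a power study is a short Monte Carlo loop: draw samples from a known alternative distribution, test them against the null, and count rejections. A minimal Python sketch using SciPy's one-sample KS test follows; the sample size, mean shift and trial count are arbitrary illustrative choices.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n, trials, alpha = 50, 2000, 0.05

        # Null: the data are standard normal. True generating process: a
        # normal distribution with a modest mean shift (the alternative).
        rejections = 0
        for _ in range(trials):
            sample = rng.normal(loc=0.5, scale=1.0, size=n)
            _, p_value = stats.kstest(sample, "norm")   # one-sample KS test
            rejections += p_value < alpha

        print("estimated power of the KS test: %.3f" % (rejections / trials))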

  7. Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.

    PubMed

    Sheppard, C W.

    1969-03-01

    A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
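
    A program of the kind outlined here fits in a few lines of modern Python. The sketch below simulates many walkers with a constant drift and a reflecting boundary; all numerical values are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        n_walkers, n_steps = 10000, 500
        drift, step = 0.05, 1.0
        boundary = 30.0                  # reflecting wall at x = boundary

        x = np.zeros(n_walkers)
        for _ in range(n_steps):
            x += drift + step * rng.choice([-1.0, 1.0], size=n_walkers)
            crossed = x > boundary
            x[crossed] = 2.0 * boundary - x[crossed]   # reflect at the wall

        # Without drift or a boundary, theory gives variance = n_steps * step**2.
        print("mean position %.2f, variance %.1f" % (x.mean(), x.var()))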

  8. Diagnosing Undersampling Biases in Monte Carlo Eigenvalue and Flux Tally Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M.; Rearden, Bradley T.; Marshall, William J.

    2017-02-08

    This study focuses on understanding the phenomenon in Monte Carlo simulations known as undersampling, in which Monte Carlo tally estimates may not encounter a sufficient number of particles during each generation to obtain unbiased tally estimates. Steady-state Monte Carlo simulations were performed using the KENO Monte Carlo tools within the SCALE code system for models of several burnup credit applications with varying degrees of spatial and isotopic complexity, and the incidence and impact of undersampling on eigenvalue and flux estimates were examined. Using an inadequate number of particle histories in each generation was found to produce a maximum bias of ~100 pcm in eigenvalue estimates and biases that exceeded 10% in fuel pin flux tally estimates. Having quantified the potential magnitude of undersampling biases in eigenvalue and flux tally estimates in these systems, this study then investigated whether Markov Chain Monte Carlo convergence metrics could be integrated into Monte Carlo simulations to predict the onset and magnitude of undersampling biases. Five potential metrics for identifying undersampling biases were implemented in the SCALE code system and evaluated for their ability to predict undersampling biases by comparing the test metric scores with the observed undersampling biases. Of the five convergence metrics that were investigated, three (the Heidelberger-Welch relative half-width, the Gelman-Rubin $\hat{R}_c$ diagnostic, and tally entropy) showed the potential to accurately predict the behavior of undersampling biases in the responses examined.
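
    Of the three promising metrics, the Gelman-Rubin diagnostic is the simplest to illustrate: it compares within-chain and between-chain variance across independent chains. The Python sketch below is a generic implementation of the basic R-hat formula, not the SCALE implementation.

        import numpy as np

        def gelman_rubin(chains):
            # Basic potential scale reduction factor (R-hat) for an array of
            # shape (m, n): m independent chains, n samples each.
            m, n = chains.shape
            w = chains.var(axis=1, ddof=1).mean()        # within-chain variance
            b = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
            var_hat = (n - 1) / n * w + b / n
            return np.sqrt(var_hat / w)

        rng = np.random.default_rng(3)
        mixed = rng.normal(size=(4, 1000))                      # well-mixed chains
        biased = mixed + np.array([[0.0], [0.0], [1.5], [1.5]]) # two shifted chains
        print("R-hat, well mixed: %.3f" % gelman_rubin(mixed))  # close to 1
        print("R-hat, biased:     %.3f" % gelman_rubin(biased)) # well above 1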

  9. COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. R. MARTIN; F. B. BROWN

    2001-03-01

    Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an "exact" method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.

  10. Use of Fluka to Create Dose Calculations

    NASA Technical Reports Server (NTRS)

    Lee, Kerry T.; Barzilla, Janet; Townsend, Lawrence; Brittingham, John

    2012-01-01

    Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to perform calculations with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm^2. Heavy charged ions from Z=1 to Z=26, with energies from 0.1 to 10 GeV/nucleon, were simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared against well-known results and against the results of other deterministic and Monte Carlo codes. Results will be presented.
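
    In its simplest form, the lookup-table idea reduces to tabulating the Monte Carlo output on a grid and interpolating between grid points. A minimal Python sketch with fabricated (non-FLUKA) numbers:

        import numpy as np

        # Hypothetical Monte Carlo output: dose versus shield areal density for
        # one ion species and energy (values are fabricated, not FLUKA results).
        areal_density = np.linspace(0.0, 100.0, 11)    # g/cm^2
        dose = np.exp(-areal_density / 40.0)           # illustrative trend only

        def dose_lookup(x):
            # Linear interpolation within the tabulated range replaces a full
            # transport run for each new shield thickness.
            return np.interp(x, areal_density, dose)

        print("interpolated dose at 35 g/cm^2: %.4f" % dose_lookup(35.0))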

  11. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
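
    For readers unfamiliar with the Wolff algorithm, the Python sketch below implements a single-cluster update for the simple cubic Ising model: a seed spin is chosen, aligned neighbors join the cluster with probability 1 - exp(-2βJ), and the whole cluster is flipped. The lattice size, coupling and sweep count are scaled far down from the production runs described above.

        import numpy as np

        rng = np.random.default_rng(0)
        L, beta = 8, 0.2216546                 # small lattice, near-critical K
        spins = rng.choice([-1, 1], size=(L, L, L))
        p_add = 1.0 - np.exp(-2.0 * beta)      # bond-activation probability (J = 1)

        def wolff_step(spins):
            seed = tuple(rng.integers(0, L, size=3))
            s = spins[seed]
            cluster, frontier = {seed}, [seed]
            while frontier:
                x, y, z = frontier.pop()
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    nb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
                    if nb not in cluster and spins[nb] == s and rng.random() < p_add:
                        cluster.add(nb)
                        frontier.append(nb)
            for site in cluster:
                spins[site] *= -1              # flip the whole cluster at once

        for _ in range(200):
            wolff_step(spins)
        print("magnetization per spin: %.3f" % abs(spins.mean()))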

  12. Sensitivity of Force Fields on Mechanical Properties of Metals Predicted by Atomistic Simulations

    NASA Astrophysics Data System (ADS)

    Rassoulinejad-Mousavi, Seyed Moein; Zhang, Yuwen

    The increasing number of micro/nanoscale studies for scientific and engineering applications has led to the wide deployment of atomistic simulations such as molecular dynamics and Monte Carlo. Many complaints arise from users in the simulation community about obtaining wrong results notwithstanding a correct simulation procedure and conditions; an improper choice of force field, also known as the interatomic potential, is the likely cause. For the sake of users' assurance, convenience, and time saving, several interatomic potentials are evaluated by molecular dynamics. Elastic properties of multiple FCC and BCC pure metallic species are obtained with LAMMPS, using different interatomic potentials designed for pure species and their alloys at different temperatures. The potentials, based on the Embedded Atom Method (EAM), Modified EAM (MEAM) and ReaxFF force fields, are adopted from available open databases. Independent elastic stiffness constants of cubic single crystals are obtained for different metals. The results are compared with the experimental values available in the literature, and deviations for each force field are provided at each temperature. Using the current work, users of these force fields can easily judge which one to designate for their problem.

  13. SU-F-T-368: Improved HPGe Detector Precise Efficiency Calibration with Monte Carlo Simulations and Radioactive Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Y. John

    2016-06-15

    Purpose: To obtain an improved, precise gamma efficiency calibration curve of an HPGe (High Purity Germanium) detector with a new comprehensive approach. Methods: Both radioactive sources and Monte Carlo simulation (CYLTRAN) are used to determine the HPGe gamma efficiency over the energy range of 0-8 MeV. The HPGe is a GMX coaxial 280 cm{sup 3} N-type 70% gamma detector. Using the Momentum Achromat Recoil Spectrometer (MARS) at the K500 superconducting cyclotron of Texas A&M University, the radioactive nucleus {sup 24}Al was produced and separated. This nucleus has positron decays followed by gamma transitions up to 8 MeV from {sup 24}Mg excited states, which are used for the HPGe efficiency calibration. Results: With the {sup 24}Al gamma energy spectrum up to 8 MeV, the efficiency for the 7.07 MeV γ ray at 4.9 cm from the radioactive source {sup 24}Al was obtained at a value of 0.194(4)%, by carefully considering various factors such as positron annihilation, the peak summing effect, beta detector efficiency and the internal conversion effect. The Monte Carlo simulation (CYLTRAN) gave a value of 0.189%, in agreement with the experimental measurements. Applying the procedure to different energy points, a precise efficiency calibration curve of the HPGe detector up to 7.07 MeV at 4.9 cm from the source {sup 24}Al was obtained. Using the same data analysis procedure, the efficiency for the 7.07 MeV gamma ray at 15.1 cm from the source {sup 24}Al was obtained at a value of 0.0387(6)%. The MC simulation gave a similar value of 0.0395%. This discrepancy led us to assign an uncertainty of 3% to the efficiency at 15.1 cm up to 7.07 MeV. The MC calculations also reproduced the intensity of the observed single- and double-escape peaks, provided that the effects of positron annihilation-in-flight were incorporated. Conclusion: The precision-improved gamma efficiency calibration curve provides more accurate radiation detection and dose calculation for cancer radiotherapy treatment.

  14. Optical roughness BRDF model for reverse Monte Carlo simulation of real material thermal radiation transfer.

    PubMed

    Su, Peiran; Eri, Qitai; Wang, Qiang

    2014-04-10

    Optical roughness was introduced into the bidirectional reflectance distribution function (BRDF) model to simulate the reflectance characteristics of thermal radiation. The optical roughness BRDF model stems from the influence of surface roughness and wavelength on the ray reflectance calculation. This model was adopted to simulate real metal emissivity. The reverse Monte Carlo method was used to display the distribution of reflected rays. The numerical simulations showed that the optical roughness BRDF model can capture the wavelength effect on emissivity and simulate the variation of real metal emissivity with incidence angle.

  15. Model-Free Conditional Independence Feature Screening For Ultrahigh Dimensional Data.

    PubMed

    Wang, Luheng; Liu, Jingyuan; Li, Yong; Li, Runze

    2017-03-01

    Feature screening plays an important role in ultrahigh dimensional data analysis. This paper is concerned with conditional feature screening when one is interested in detecting the association between the response and ultrahigh dimensional predictors (e.g., genetic markers) given a low-dimensional exposure variable (such as clinical variables or environmental variables). To this end, we first propose a new index to measure conditional independence, and further develop a conditional screening procedure based on the newly proposed index. We systematically study the theoretical properties of the proposed procedure and establish the sure screening and ranking consistency properties under some very mild conditions. The newly proposed screening procedure enjoys some appealing properties. (a) It is model-free in that its implementation does not require a specification of the model structure; (b) it is robust to heavy-tailed distributions or outliers in both directions of response and predictors; and (c) it can deal with both feature screening and conditional screening in a unified way. We study the finite sample performance of the proposed procedure by Monte Carlo simulations and further illustrate the proposed method through two real data examples.

  16. Inferential precision in single-case time-series data streams: how well does the em procedure perform when missing observations occur in autocorrelated data?

    PubMed

    Smith, Justin D; Borckardt, Jeffrey J; Nash, Michael R

    2012-09-01

    The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations with such autocorrelated data streams. Mainly, to what extent do missing observations compromise inference when observations are not independent? Do the available missing data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the Expectation-Maximization Algorithm, more commonly known as the EM Procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed. Copyright © 2011. Published by Elsevier Ltd.

  17. Model-Free Feature Screening for Ultrahigh Dimensional Discriminant Analysis

    PubMed Central

    Cui, Hengjian; Li, Runze

    2014-01-01

    This work is concerned with marginal sure independence feature screening for ultrahigh dimensional discriminant analysis. The response variable is categorical in discriminant analysis. This enables us to use the conditional distribution function to construct a new index for feature screening. In this paper, we propose a marginal feature screening procedure based on the empirical conditional distribution function. We establish the sure screening and ranking consistency properties for the proposed procedure without assuming any moment condition on the predictors. The proposed procedure enjoys several appealing merits. First, it is model-free in that its implementation does not require specification of a regression model. Second, it is robust to heavy-tailed distributions of predictors and the presence of potential outliers. Third, it allows the categorical response to have a diverging number of classes of order O(n^κ) with some κ ≥ 0. We assess the finite sample property of the proposed procedure by Monte Carlo simulation studies and numerical comparison. We further illustrate the proposed methodology by empirical analyses of two real-life data sets. PMID:26392643

  18. A Monte Carlo Simulation of Brownian Motion in the Freshman Laboratory

    ERIC Educational Resources Information Center

    Anger, C. D.; Prescott, J. R.

    1970-01-01

    Describes a "dry-lab" experiment for the college freshman laboratory, in which the essential features of Brownian motion are simulated from first principles using the Monte Carlo technique. Calculations are carried out by a computation scheme based on a computer language. Bibliography. (LC)

  19. Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.

    PubMed

    Resat, H; Mezei, M

    1996-09-01

    The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations.
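
    The essence of a grand canonical ensemble Monte Carlo move is the insertion/deletion acceptance rule. The sketch below applies it to an ideal gas (zero interaction energy), for which the exact answer <N> = zV provides a correctness check; this is a generic textbook illustration, not the simulation protocol of the paper.

        import numpy as np

        rng = np.random.default_rng(5)
        V, z = 100.0, 0.5       # volume and activity z = exp(beta*mu) / Lambda^3
        N, samples = 0, []
        for step in range(200000):
            if rng.random() < 0.5:                           # attempt insertion
                if rng.random() < min(1.0, z * V / (N + 1)):
                    N += 1
            elif N > 0:                                      # attempt deletion
                if rng.random() < min(1.0, N / (z * V)):
                    N -= 1
            if step > 50000:                                 # discard equilibration
                samples.append(N)

        # For an ideal gas the exact grand canonical average is <N> = z * V.
        print("<N> = %.2f (exact: %.1f)" % (np.mean(samples), z * V))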

  20. TU-H-207A-02: Relative Importance of the Various Factors Influencing the Accuracy of Monte Carlo Simulated CT Dose Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marous, L; Muryn, J; Liptak, C

    2016-06-15

    Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI{sub 100}. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI{sub 100}. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI{sub 100}. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI{sub 100} values from the central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI{sub 100} were analyzed. Results: At 38.4-mm collimation, errors in effective beam width of up to 5.0 mm showed negligible effects on simulated CTDI{sub 100} (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm{sup 3} resulted in small CTDI{sub 100} errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI{sub 100} is insensitive to errors in effective beam width and acrylic density. However, it is sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored. This work was supported by a Faculty Research and Development Award from Cleveland State University.

  1. APS undulator and wiggler sources: Monte-Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, S.L.; Lai, B.; Viccaro, P.J.

    1992-02-01

    Standard insertion devices will be provided to each sector by the Advanced Photon Source. It is important to define the radiation characteristics of these general purpose devices. In this document, results of Monte-Carlo simulation are presented. These results, based on the SHADOW program, include the APS Undulator A (UA), Wiggler A (WA), and Wiggler B (WB).

  2. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Treesearch

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...

  3. Monte Carlo simulations of coherent backscatter for identification of the optical coefficients of biological tissues in vivo

    NASA Astrophysics Data System (ADS)

    Eddowes, M. H.; Mills, T. N.; Delpy, D. T.

    1995-05-01

    A Monte Carlo model of light backscattered from turbid media has been used to simulate the effects of weak localization in biological tissues. A validation technique is used that implies that for the scattering and absorption coefficients and for refractive index mismatches found in tissues, the Monte Carlo method is likely to provide more accurate results than the methods previously used. The model also has the ability to simulate the effects of various illumination profiles and other laboratory-imposed conditions. A curve-fitting routine has been developed that might be used to extract the optical coefficients from the angular intensity profiles seen in experiments on turbid biological tissues, data that could be obtained in vivo.

  4. Radial-based tail methods for Monte Carlo simulations of cylindrical interfaces

    NASA Astrophysics Data System (ADS)

    Goujon, Florent; Bêche, Bruno; Malfreyt, Patrice; Ghoufi, Aziz

    2018-03-01

    In this work, we implement for the first time radial-based tail methods for Monte Carlo simulations of cylindrical interfaces. The efficiency of the method is then evaluated through the calculation of the surface tension and coexisting properties. We show that the inclusion of tail corrections during the course of the Monte Carlo simulation impacts the coexisting and interfacial properties. We establish that the long range corrections to the surface tension are of the same order of magnitude as those obtained for a planar interface. We show that the slab-based tail method does not modify the localization of the Gibbs equimolar dividing surface. Additionally, a non-monotonic behavior of the surface tension is exhibited as a function of the radius of the equimolar dividing surface.

  5. Monte Carlo source simulation technique for solution of interference reactions in INAA experiments: a preliminary report

    NASA Astrophysics Data System (ADS)

    Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.

    2004-04-01

    A new method using Monte Carlo source simulation of interference reactions in neutron activation analysis experiments has been developed. The neutron spectrum at the sample location was simulated using the Monte Carlo code MCNP, and the contributions of different elements to a specified gamma line were determined. The resulting response matrix, together with the measured peak areas, was used to determine the sample masses of the elements of interest. A number of benchmark experiments were performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for the elimination of interference reactions in neutron activation analysis.

  6. Force field development with GOMC, a fast new Monte Carlo molecular simulation code

    NASA Astrophysics Data System (ADS)

    Mick, Jason Richard

    In this work GOMC (GPU Optimized Monte Carlo), a new fast, flexible, and free molecular Monte Carlo code for the simulation of atomistic chemical systems, is presented. The results of a large Lennard-Jonesium simulation in the Gibbs ensemble are presented. Force fields developed using the code are also presented. To fit the models, a quantitative fitting process using a scoring function and heat maps is outlined. The presented n-6 force fields include force fields for noble gases and branched alkanes. These force fields are shown to be the most accurate LJ or n-6 force fields to date for these compounds, capable of reproducing pure fluid behavior and binary mixture behavior to a high degree of accuracy.

  7. Computationally-efficient stochastic cluster dynamics method for modeling damage accumulation in irradiated materials

    NASA Astrophysics Data System (ADS)

    Hoang, Tuan L.; Marian, Jaime; Bulatov, Vasily V.; Hosemann, Peter

    2015-11-01

    An improved version of a recently developed stochastic cluster dynamics (SCD) method (Marian and Bulatov, 2012) [6] is introduced as an alternative to rate theory (RT) methods for solving coupled ordinary differential equation (ODE) systems for irradiation damage simulations. SCD circumvents by design the curse of dimensionality of the variable space that renders traditional ODE-based RT approaches inefficient when handling complex defect population comprised of multiple (more than two) defect species. Several improvements introduced here enable efficient and accurate simulations of irradiated materials up to realistic (high) damage doses characteristic of next-generation nuclear systems. The first improvement is a procedure for efficiently updating the defect reaction-network and event selection in the context of a dynamically expanding reaction-network. Next is a novel implementation of the τ-leaping method that speeds up SCD simulations by advancing the state of the reaction network in large time increments when appropriate. Lastly, a volume rescaling procedure is introduced to control the computational complexity of the expanding reaction-network through occasional reductions of the defect population while maintaining accurate statistics. The enhanced SCD method is then applied to model defect cluster accumulation in iron thin films subjected to triple ion-beam (Fe3+, He+ and H+) irradiations, for which standard RT or spatially-resolved kinetic Monte Carlo simulations are prohibitively expensive.
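
    The τ-leaping idea mentioned here can be illustrated on a scalar birth-death process: instead of simulating each reaction event individually, a Poisson-distributed number of firings is drawn for each channel per fixed leap. A minimal Python sketch with arbitrary rate constants:

        import numpy as np

        rng = np.random.default_rng(11)

        # Birth-death process: 0 -> X at rate k_b; X -> 0 at rate k_d * X.
        k_b, k_d, tau = 10.0, 0.1, 0.1
        x, t, t_end = 0, 0.0, 100.0

        while t < t_end:
            # Tau-leaping: fire a Poisson number of each reaction per leap
            # instead of simulating every individual event.
            x += rng.poisson(k_b * tau) - rng.poisson(k_d * x * tau)
            x = max(x, 0)       # guard against negative counts in large leaps
            t += tau

        # The stationary mean of this process is k_b / k_d = 100.
        print("population at t = %.0f: %d" % (t_end, x))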

  8. Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?

    PubMed

    Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend

    2011-10-11

    In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well posed for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.

  9. Characterizing a proton beam scanning system for Monte Carlo dose calculation in patients

    NASA Astrophysics Data System (ADS)

    Grassberger, C.; Lomax, Anthony; Paganetti, H.

    2015-01-01

    The presented work has two goals. First, to demonstrate the feasibility of accurately characterizing a proton radiation field at treatment head exit for Monte Carlo dose calculation of active scanning patient treatments. Second, to show that this characterization can be done based on measured depth dose curves and spot size alone, without consideration of the exact treatment head delivery system. This is demonstrated through calibration of a Monte Carlo code to the specific beam lines of two institutions, Massachusetts General Hospital (MGH) and Paul Scherrer Institute (PSI). Comparison of simulations modeling the full treatment head at MGH to ones employing a parameterized phase space of protons at treatment head exit reveals the adequacy of the method for patient simulations. The secondary particle production in the treatment head is typically below 0.2% of primary fluence, except for low-energy electrons (<0.6 MeV for 230 MeV protons), whose contribution to skin dose is negligible. However, there is significant difference between the two methods in the low-dose penumbra, making full treatment head simulations necessary to study out-of-field effects such as secondary cancer induction. To calibrate the Monte Carlo code to measurements in a water phantom, we use an analytical Bragg peak model to extract the range-dependent energy spread at the two institutions, as this quantity is usually not available through measurements. Comparison of the measured with the simulated depth dose curves demonstrates agreement within 0.5 mm over the entire energy range. Subsequently, we simulate three patient treatments with varying anatomical complexity (liver, head and neck and lung) to give an example how this approach can be employed to investigate site-specific discrepancies between treatment planning system and Monte Carlo simulations.

  10. Characterizing a Proton Beam Scanning System for Monte Carlo Dose Calculation in Patients

    PubMed Central

    Grassberger, C; Lomax, Tony; Paganetti, H

    2015-01-01

    The presented work has two goals. First, to demonstrate the feasibility of accurately characterizing a proton radiation field at treatment head exit for Monte Carlo dose calculation of active scanning patient treatments. Second, to show that this characterization can be done based on measured depth dose curves and spot size alone, without consideration of the exact treatment head delivery system. This is demonstrated through calibration of a Monte Carlo code to the specific beam lines of two institutions, Massachusetts General Hospital (MGH) and Paul Scherrer Institute (PSI). Comparison of simulations modeling the full treatment head at MGH to ones employing a parameterized phase space of protons at treatment head exit reveals the adequacy of the method for patient simulations. The secondary particle production in the treatment head is typically below 0.2% of primary fluence, except for low-energy electrons (<0.6 MeV for 230 MeV protons), whose contribution to skin dose is negligible. However, there is significant difference between the two methods in the low-dose penumbra, making full treatment head simulations necessary to study out-of-field effects such as secondary cancer induction. To calibrate the Monte Carlo code to measurements in a water phantom, we use an analytical Bragg peak model to extract the range-dependent energy spread at the two institutions, as this quantity is usually not available through measurements. Comparison of the measured with the simulated depth dose curves demonstrates agreement within 0.5 mm over the entire energy range. Subsequently, we simulate three patient treatments with varying anatomical complexity (liver, head and neck and lung) to give an example how this approach can be employed to investigate site-specific discrepancies between treatment planning system and Monte Carlo simulations. PMID:25549079

  11. Using Monte Carlo Simulation to Prioritize Key Maritime Environmental Impacts of Port Infrastructure

    NASA Astrophysics Data System (ADS)

    Perez Lespier, L. M.; Long, S.; Shoberg, T.

    2016-12-01

    This study creates a Monte Carlo simulation model to prioritize key indicators of environmental impacts resulting from maritime port infrastructure. Data inputs are derived from LandSat imagery, government databases, and industry reports to create the simulation. Results are validated using subject matter experts and compared with those returned from time-series regression to determine goodness of fit. The Port of Prince Rupert, Canada is used as the location for the study.

  12. Antihydrogen from positronium impact with cold antiprotons: a Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Cassidy, D. B.; Merrison, J. P.; Charlton, M.; Mitroy, J.; Ryzhikh, G.

    1999-04-01

    A Monte Carlo simulation of the reaction to form antihydrogen by positronium impact upon antiprotons has been undertaken. Total and differential cross sections have been utilized as inputs to the simulation which models the conditions foreseen in planned antihydrogen formation experiments using positrons and antiprotons held in Penning traps. Thus, predictions of antihydrogen production rates, angular distributions and the variation of the mean antihydrogen temperature as a function of incident positronium kinetic energy have been produced.

  13. Coarse-grained computation for particle coagulation and sintering processes by linking Quadrature Method of Moments with Monte-Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou Yu, E-mail: yzou@Princeton.ED; Kavousanakis, Michail E., E-mail: mkavousa@Princeton.ED; Kevrekidis, Ioannis G., E-mail: yannis@Princeton.ED

    2010-07-20

    The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.

  14. A novel algorithm for solving the true coincident counting issues in Monte Carlo simulations for radiation spectroscopy.

    PubMed

    Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A

    2015-06-01

    Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.

  15. Response Matrix Monte Carlo for electron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.

    1990-11-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions, but they fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, as in condensed history, except that it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.

  16. Monte-Carlo-based phase retardation estimator for polarization sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki

    2011-08-01

    A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.

  17. Entanglement and the fermion sign problem in auxiliary field quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Broecker, Peter; Trebst, Simon

    2016-08-01

    Quantum Monte Carlo simulations of fermions are hampered by the notorious sign problem whose most striking manifestation is an exponential growth of sampling errors with the number of particles. With the sign problem known to be an NP-hard problem and any generic solution thus highly elusive, the Monte Carlo sampling of interacting many-fermion systems is commonly thought to be restricted to a small class of model systems for which a sign-free basis has been identified. Here we demonstrate that entanglement measures, in particular the so-called Rényi entropies, can intrinsically exhibit a certain robustness against the sign problem in auxiliary-field quantum Monte Carlo approaches and possibly allow for the identification of global ground-state properties via their scaling behavior even in the presence of a strong sign problem. We corroborate these findings via numerical simulations of fermionic quantum phase transitions of spinless fermions on the honeycomb lattice at and below half filling.

  18. Using simulation to determine the need for ICU beds for surgery patients.

    PubMed

    Troy, Philip Marc; Rosenberg, Lawrence

    2009-10-01

    As the need for surgical ICU beds at the hospital increases, the mismatch between demand and supply for those beds has led to the need to understand the drivers of ICU performance. A Monte Carlo simulation study of ICU performance was performed using a discrete event model that captured the events, timing, and logic of ICU patient arrivals and bed stays. The study found that functional ICU capacity, i.e., the number of occupied ICU beds at which operative procedures were canceled if they were known to require an ICU stay, was the main determinant of the wait for, the number performed, and the number of cancellations of operative procedures known to require an ICU stay. The study also found that actual and functional ICU capacity jointly explained ICU utilization and the mean number of patients that should have been in the ICU but were parked elsewhere. The study demonstrated the necessity of considering actual and functional ICU capacity when analyzing surgical ICU bed requirements, and suggested the need for additional research on synchronizing demand with supply. The study also reinforced the authors' sense that simulation facilitates the evaluation of trade-offs between surgical management alternatives proposed by experts and the identification of unexpected drawbacks or opportunities of those proposals.
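
    The interplay of actual and functional capacity can be reproduced with a very small discrete-time Monte Carlo model: cases that would exceed the functional capacity are cancelled, and occupancy statistics are accumulated. The Python sketch below is a simplified stand-in for the study's discrete event model, with all rates and capacities chosen arbitrarily.

        import numpy as np

        rng = np.random.default_rng(21)

        beds, functional_cap = 12, 10   # actual and functional capacity
        mean_los, days = 3.0, 10000     # mean ICU stay (days), horizon

        stays = []                      # remaining stay per current ICU patient
        performed = cancelled = bed_days = 0
        for _ in range(days):
            stays = [s - 1 for s in stays if s > 1]     # overnight discharges
            for _ in range(rng.poisson(2.5)):           # daily surgical demand
                if len(stays) >= functional_cap:
                    cancelled += 1    # case known to need an ICU bed is cancelled
                else:
                    performed += 1
                    stays.append(rng.geometric(1.0 / mean_los))
            bed_days += len(stays)

        print("performed %d, cancelled %d, mean occupancy %.1f of %d beds"
              % (performed, cancelled, bed_days / days, beds))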

  19. A systematic coarse-graining strategy for semi-dilute copolymer solutions: from monomers to micelles.

    PubMed

    Capone, Barbara; Coluzza, Ivan; Hansen, Jean-Pierre

    2011-05-18

    A systematic coarse-graining procedure is proposed for the description and simulation of AB diblock copolymers in selective solvents. Each block is represented by a small number, n(A) or n(B), of effective segments or blobs, containing a large number of microscopic monomers. n(A) and n(B) are unequivocally determined by imposing that blobs do not, on average, overlap, even if complete copolymer coils interpenetrate (semi-dilute regime). Ultra-soft effective interactions between blobs are determined by a rigorous inversion procedure in the low concentration limit. The methodology is applied to an athermal copolymer model where A blocks are ideal (theta solvent), B blocks self-avoiding (good solvent), while A and B blocks are mutually avoiding. The model leads to aggregation into polydisperse spherical micelles beyond a critical micellar concentration determined by Monte Carlo simulations for several size ratios f of the two blocks. The simulations also provide accurate estimates of the osmotic pressure and of the free energy of the copolymer solutions over a wide range of concentrations. The mean micellar aggregation numbers are found to be significantly lower than those predicted by an earlier, minimal two-blob representation (Capone et al 2009 J. Phys. Chem. B 113 3629).

  20. Stochastic simulation and analysis of biomolecular reaction networks

    PubMed Central

    Frazier, John M; Chushak, Yaroslav; Foy, Brent

    2009-01-01

    Background In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) the effect of the time averaging interval on reaction rate analysis, (3) the effect of the number of simulations on the precision of model predictions, and (4) the implications of stochastic simulations for optimization procedures. Conclusion The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
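
    The Gillespie exact stochastic algorithm referenced above is compact enough to sketch. The following is a minimal direct-method implementation for a toy birth-death system (constant production, first-order degradation); the rate constants are illustrative and unrelated to the BNS model.

    ```python
    import random

    def gillespie(x0=0, k_prod=10.0, k_deg=0.5, t_end=20.0, seed=0):
        """Gillespie direct method for the birth-death process
        0 --k_prod--> X,  X --k_deg--> 0  (rates are illustrative)."""
        rng = random.Random(seed)
        t, x = 0.0, x0
        traj = [(t, x)]
        while t < t_end:
            a1, a2 = k_prod, k_deg * x      # reaction propensities
            a0 = a1 + a2
            t += rng.expovariate(a0)        # exponential time to next reaction
            if rng.random() * a0 < a1:      # choose which reaction fires
                x += 1
            else:
                x -= 1
            traj.append((t, x))
        return traj

    print(gillespie()[-1])   # final (time, copy number); mean level is k_prod/k_deg
    ```

    Issues (1)-(3) in the abstract arise precisely because such trajectories are recorded at irregular event times and must be binned or time-averaged before ensemble statistics are computed.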

  1. Monte Carlo modeling of the Siemens Optifocus multileaf collimator.

    PubMed

    Laliena, Victor; García-Romero, Alejandro

    2015-05-01

    We have developed a new component module for the BEAMnrc software package, called SMLC, which models the tongue-and-groove structure of the Siemens Optifocus multileaf collimator. The ultimate goal is to perform accurate Monte Carlo simulations of the IMRT treatments carried out with Optifocus. SMLC has been validated by direct geometry checks and by comparing quantitatively the results of simulations performed with it and with the component module VARMLC. Measurements and Monte Carlo simulations of absorbed dose distributions of radiation fields sensitive to the tongue-and-groove effect have been performed to tune the free parameters of SMLC. The measurements cannot be accurately reproduced with VARMLC. Finally, simulations of a typical IMRT field showed that SMLC improves the agreement with experimental measurements with respect to VARMLC in clinically relevant cases. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  2. Three-dimensional electron microscopy simulation with the CASINO Monte Carlo software.

    PubMed

    Demers, Hendrix; Poirier-Demers, Nicolas; Couture, Alexandre Réal; Joly, Dany; Guilmain, Marc; de Jonge, Niels; Drouin, Dominique

    2011-01-01

    Monte Carlo software is widely used to understand the capabilities of electron microscopes. To study more realistic applications with complex samples, 3D Monte Carlo software is needed. In this article, the development of the 3D version of CASINO is presented. The software features a graphical user interface, an efficient (in relation to simulation time and memory use) 3D simulation model, and accurate physics models for electron microscopy applications, and it is available freely to the scientific community at this website: www.gel.usherbrooke.ca/casino/index.html. It can be used to model backscattered, secondary, and transmitted electron signals as well as absorbed energy. Software features like scan points and shot noise allow the simulation and study of realistic experimental conditions. This software has an improved energy range for scanning electron microscopy and scanning transmission electron microscopy applications. Copyright © 2011 Wiley Periodicals, Inc.

  3. High-Precision Monte Carlo Simulation of the Ising Models on the Penrose Lattice and the Dual Penrose Lattice

    NASA Astrophysics Data System (ADS)

    Komura, Yukihiro; Okabe, Yutaka

    2016-04-01

    We study the Ising models on the Penrose lattice and the dual Penrose lattice by means of high-precision Monte Carlo simulation. Simulating systems up to the total system size N = 20633239, we estimate the critical temperatures on those lattices with high accuracy. For high-speed calculation, we use a generalized single-GPU implementation of the Swendsen-Wang multi-cluster Monte Carlo algorithm. As a result, we estimate the critical temperature on the Penrose lattice as Tc/J = 2.39781 ± 0.00005 and that of the dual Penrose lattice as Tc*/J = 2.14987 ± 0.00005. Moreover, we confirm the duality relation between the critical temperatures on this dual pair of quasilattices to a high degree of accuracy, sinh (2J/Tc)sinh (2J/Tc*) = 1.00000 ± 0.00004.
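
    The quoted duality relation can be checked directly from the two critical temperatures; this is a quick arithmetic verification of the numbers above, not part of the paper:

    ```python
    import math

    Tc, Tc_dual = 2.39781, 2.14987   # critical temperatures quoted above (units of J)
    print(math.sinh(2 / Tc) * math.sinh(2 / Tc_dual))   # ~1.0000, as duality requires
    ```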

  4. Three-Dimensional Electron Microscopy Simulation with the CASINO Monte Carlo Software

    PubMed Central

    Demers, Hendrix; Poirier-Demers, Nicolas; Couture, Alexandre Réal; Joly, Dany; Guilmain, Marc; de Jonge, Niels; Drouin, Dominique

    2011-01-01

    Monte Carlo software is widely used to understand the capabilities of electron microscopes. To study more realistic applications with complex samples, 3D Monte Carlo software is needed. In this paper, the development of the 3D version of CASINO is presented. The software features a graphical user interface, an efficient (in relation to simulation time and memory use) 3D simulation model, and accurate physics models for electron microscopy applications, and it is available freely to the scientific community at this website: www.gel.usherbrooke.ca/casino/index.html. It can be used to model backscattered, secondary, and transmitted electron signals as well as absorbed energy. Software features like scan points and shot noise allow the simulation and study of realistic experimental conditions. This software has an improved energy range for scanning electron microscopy and scanning transmission electron microscopy applications. PMID:21769885

  5. Energy deposition evaluation for ultra-low energy electron beam irradiation systems using calibrated thin radiochromic film and Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsui, S., E-mail: smatsui@gpi.ac.jp; Mori, Y.; Nonaka, T.

    2016-05-15

    For evaluation of on-site dosimetry and process design in industrial use of ultra-low energy electron beam (ULEB) processes, we evaluate the energy deposition using a thin radiochromic film and a Monte Carlo simulation. The response of the film dosimeter was calibrated using a high energy electron beam with an acceleration voltage of 2 MV and alanine dosimeters with an uncertainty of 11% at coverage factor 2. Using this response function, the results of absorbed dose measurements for ULEB were evaluated from 10 kGy to 100 kGy as a relative dose. The deviation between the responses of deposited energy on the films and Monte Carlo simulations was within 15%. Within this limitation, relative dose estimation using thin film dosimeters with a response function obtained by high energy electron irradiation, together with simulation results, is effective for ULEB irradiation process management.

  6. Energy deposition evaluation for ultra-low energy electron beam irradiation systems using calibrated thin radiochromic film and Monte Carlo simulations.

    PubMed

    Matsui, S; Mori, Y; Nonaka, T; Hattori, T; Kasamatsu, Y; Haraguchi, D; Watanabe, Y; Uchiyama, K; Ishikawa, M

    2016-05-01

    For evaluation of on-site dosimetry and process design in industrial use of ultra-low energy electron beam (ULEB) processes, we evaluate the energy deposition using a thin radiochromic film and a Monte Carlo simulation. The response of the film dosimeter was calibrated using a high energy electron beam with an acceleration voltage of 2 MV and alanine dosimeters with an uncertainty of 11% at coverage factor 2. Using this response function, the results of absorbed dose measurements for ULEB were evaluated from 10 kGy to 100 kGy as a relative dose. The deviation between the responses of deposited energy on the films and Monte Carlo simulations was within 15%. Within this limitation, relative dose estimation using thin film dosimeters with a response function obtained by high energy electron irradiation, together with simulation results, is effective for ULEB irradiation process management.

  7. Monte Carlo Simulation of THz Multipliers

    NASA Technical Reports Server (NTRS)

    East, J.; Blakey, P.

    1997-01-01

    Schottky barrier diode frequency multipliers are critical components in submillimeter and THz space-based Earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. The multiplier design is usually based on a nonlinear model using a form of harmonic balance and a model for the Schottky barrier diode. Conventional voltage-dependent lumped element models do a poor job of predicting THz frequency performance. This paper describes a large signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time-dependent particle field Monte Carlo simulation with ohmic and Schottky barrier boundary conditions included, combined with a fixed point solution for the nonlinear circuit interaction. The results point out some important time constants in varactor operation and describe the effects of current saturation and nonlinear resistances on multiplier operation.

  8. Monte Carlo Simulation of Massive Absorbers for Cryogenic Calorimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, D.; Asai, M.; Brink, P.L.

    There is a growing interest in cryogenic calorimeters with macroscopic absorbers for applications such as dark matter direct detection and rare event search experiments. The physics of energy transport in calorimeters with absorber masses exceeding several grams is made complex by the anisotropic nature of the absorber crystals as well as the changing mean free paths as phonons decay to progressively lower energies. We present a Monte Carlo model capable of simulating anisotropic phonon transport in cryogenic crystals. We have initiated the validation process and discuss the level of agreement between our simulation and experimental results reported in the literature, focusing on heat pulse propagation in germanium. The simulation framework is implemented using Geant4, a toolkit originally developed for high-energy physics Monte Carlo simulations. Geant4 has also been used for nuclear and accelerator physics, and applications in medical and space sciences. We believe that our current work may open up new avenues for applications in material science and condensed matter physics.

  9. Monte-Carlo background simulations of present and future detectors in x-ray astronomy

    NASA Astrophysics Data System (ADS)

    Tenzer, C.; Kendziorra, E.; Santangelo, A.

    2008-07-01

    Reaching a low-level and well understood internal instrumental background is crucial for the scientific performance of an X-ray detector and, therefore, a main objective of the instrument designers. Monte-Carlo simulations of the physics processes and interactions taking place in a space-based X-ray detector as a result of its orbital environment can be applied to explain the measured background of existing missions. They are thus an excellent tool to predict and optimize the background of future observatories. Weak points of a design and the main sources of the background can be identified and methods to reduce them can be implemented and studied within the simulations. Using the Geant4 Monte-Carlo toolkit, we have created a simulation environment for space-based detectors and we present results of such background simulations for XMM-Newton's EPIC pn-CCD camera. The environment is also currently used to estimate and optimize the background of the future instruments Simbol-X and eRosita.

  10. Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, B. S.

    1972-01-01

    A Monte Carlo simulation model for radiation induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise linearized predictor-corrector iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multiple dimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation induced plasma involved are given.

  11. Determination of scattering functions and their effects on remote sensing of turbidity in natural waters

    NASA Technical Reports Server (NTRS)

    Ghovanlou, A. H.; Gupta, J. N.; Henderson, R. G.

    1977-01-01

    The development of quantitative analytical procedures for relating scattered signals measured by a remote sensor to water turbidity was considered. The applications of a Monte Carlo simulation model for radiative transfer in turbid water are discussed. The model is designed to calculate the characteristics of the backscattered signal from an illuminated body of water as a function of the turbidity level and the spectral properties of the suspended particulates. The optical properties of the environmental waters, necessary for model applications, were derived from available experimental data and/or calculated from Mie formalism. Results of applications of the model are presented.

  12. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
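
    The Monte Carlo comparison of mean-squared errors described above is easy to mimic in miniature. The sketch below compares the conventional single-observation ML estimate of a Poisson intensity with a simple shrinkage estimator that borrows strength across units; this shrinkage rule is only an illustrative stand-in for the paper's smooth empirical Bayes estimator, and the intensities and weights are invented.

    ```python
    import random

    def poisson(rng, lam):
        # Knuth's Poisson sampler; adequate for the small means used here
        L, k, p = 2.718281828459045 ** (-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    def mse_comparison(true_lambdas=(0.5, 1.0, 2.0, 4.0), n_rep=20000, seed=2):
        """Monte Carlo MSE of the ML estimator (the raw count) versus a
        shrinkage estimator pulling each count toward the grand mean."""
        rng = random.Random(seed)
        mse_ml = mse_shrunk = 0.0
        for _ in range(n_rep):
            xs = [poisson(rng, lam) for lam in true_lambdas]
            xbar = sum(xs) / len(xs)
            for x, lam in zip(xs, true_lambdas):
                mse_ml += (x - lam) ** 2
                shrunk = 0.5 * x + 0.5 * xbar       # illustrative shrinkage weight
                mse_shrunk += (shrunk - lam) ** 2
        n = n_rep * len(true_lambdas)
        return mse_ml / n, mse_shrunk / n

    print(mse_comparison())   # shrinkage wins on average for these intensities
    ```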

  13. Track reconstruction and background rejection for the Baikal neutrino telescope

    NASA Astrophysics Data System (ADS)

    Belolaptikov, I. A.; Djilkibaev, J.-A. M.; Domogatsky, G. V.; Klimushin, S. I.; Krabi, J.; Lanin, O. Ju.; Osipova, E. A.; Pavlov, A. A.; Spiering, Ch.; Wischnewski, R.

    1994-05-01

    We describe procedures to reconstruct muon tracks in the Baikal Neutrino Telescope, including effective filtering of badly reconstructed events. Special attention is paid to rejecting downward going muons faking upward going muons from neutrino interactions. It is shown that a rejection factor of 10^6, as needed to operate a neutrino telescope at 1100 m.w.e. depth, can be obtained with a 200 PMT array. We present first results from NT-36, the 1993 array consisting of 36 PMTs. We observe satisfying agreement between Monte Carlo results and experimental data. This gives us confidence in our simulations of the full detector.

  14. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. An experiment is then chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large and small sample behavior of the sequential adaptive procedure.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Alfonsi; C. Rabiti; D. Mandelli

    The Reactor Analysis and Virtual control ENvironment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed thermal-hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities: (1) derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring/controlling in the phase space; (2) perform both Monte Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and (3) facilitate input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.

  16. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
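
    As an illustration of the first two pattern tests listed above, the sketch below compares a linear (Pearson) and a rank (Spearman) correlation coefficient on a synthetic monotone but nonlinear input-output scatterplot; the data-generating function and noise level are invented for the example.

    ```python
    import random

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rng = random.Random(3)
    x = [rng.random() for _ in range(200)]               # sampled input factor
    y = [xi ** 5 + 0.05 * rng.gauss(0, 1) for xi in x]   # monotone, nonlinear response
    print(pearson(x, y))                 # linear test understates the relationship
    print(pearson(ranks(x), ranks(y)))   # rank (Spearman) test detects the monotone trend
    ```

    The gap between the two coefficients is exactly why the procedure escalates from linear to rank-based tests before falling back on distribution-free statistics.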

  17. A Monte Carlo analysis of breast screening randomized trials.

    PubMed

    Zamora, Luis I; Forastero, Cristina; Guirado, Damián; Lallena, Antonio M

    2016-12-01

    To analyze breast screening randomized trials with a Monte Carlo simulation tool. A simulation tool previously developed to simulate breast screening programmes was adapted for that purpose. The history of women participating in the trials was simulated, including a model for survival after local treatment of invasive cancers. Distributions of the time gained due to screening detection against symptomatic detection and the overall screening sensitivity were used as inputs. Several randomized controlled trials were simulated. Except for the age range of the women involved, all simulations used the same population characteristics, which permitted an analysis of their external validity. The relative risks obtained were compared to those quoted for the trials, whose internal validity was addressed by further investigating the reasons for the disagreements observed. The Monte Carlo simulations produce results that are in good agreement with most of the randomized trials analyzed, thus indicating their methodological quality and external validity. A reduction of breast cancer mortality of around 20% appears to be a reasonable value according to the results of the trials that are methodologically correct. Discrepancies observed with the Canada I and II trials may be attributed to low mammography quality and some methodological problems. The Kopparberg trial appears to show low methodological quality. Monte Carlo simulations are a powerful tool for investigating breast screening randomized controlled trials, helping to establish those whose results are reliable enough to be extrapolated to other populations, to design trial strategies and, eventually, to adapt them during their development. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  18. An Educational MONTE CARLO Simulation/Animation Program for the Cosmic Rays Muons and a Prototype Computer-Driven Hardware Display.

    ERIC Educational Resources Information Center

    Kalkanis, G.; Sarris, M. M.

    1999-01-01

    Describes an educational software program for the study of and detection methods for the cosmic ray muons passing through several light transparent materials (i.e., water, air, etc.). Simulates muons and Cherenkov photons' paths and interactions and visualizes/animates them on the computer screen using Monte Carlo methods/techniques which employ…

  19. Slope stability effects of fuel management strategies – inferences from Monte Carlo simulations

    Treesearch

    R. M. Rice; R. R. Ziemer; S. C. Hankin

    1982-01-01

    A simple Monte Carlo simulation evaluated the effect of several fire management strategies on soil slip erosion and wildfires. The current condition was compared to (1) a very intensive fuelbreak system without prescribed fires, and (2) prescribed fire at four time intervals with (a) current fuelbreaks and (b) intensive fuel-breaks. The intensive fuelbreak system...

  20. Use of single scatter electron monte carlo transport for medical radiation sciences

    DOEpatents

    Svatos, Michelle M.

    2001-01-01

    The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.

  1. Modeling 2D and 3D diffusion.

    PubMed

    Saxton, Michael J

    2007-01-01

    Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
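
    The obstructed random walks discussed above are straightforward to code. Below is a minimal sketch of Monte Carlo diffusion on a square lattice with a quenched fraction of immobile obstacles; moves onto an obstacle are simply rejected. Lattice size, obstacle fraction, and step counts are arbitrary illustrative choices.

    ```python
    import random

    def obstructed_msd(size=100, obstacle_frac=0.3, n_walkers=500,
                       n_steps=1000, seed=4):
        """Mean-squared displacement of random walkers on a square lattice
        with immobile point obstacles (a minimal obstructed-diffusion model)."""
        rng = random.Random(seed)
        blocked = {(rng.randrange(size), rng.randrange(size))
                   for _ in range(int(obstacle_frac * size * size))}
        moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        msd = 0.0
        for _ in range(n_walkers):
            while True:                              # start on a free site
                sx, sy = rng.randrange(size), rng.randrange(size)
                if (sx, sy) not in blocked:
                    break
            x, y = sx, sy
            for _ in range(n_steps):
                dx, dy = rng.choice(moves)
                # track unwrapped coordinates; obstacles live on the torus
                if ((x + dx) % size, (y + dy) % size) not in blocked:
                    x, y = x + dx, y + dy
            msd += (x - sx) ** 2 + (y - sy) ** 2
        return msd / n_walkers

    print(obstructed_msd())                   # suppressed relative to ...
    print(obstructed_msd(obstacle_frac=0.0))  # ... free diffusion (~n_steps)
    ```

    Comparing the two printed values shows the hallmark of obstructed diffusion: the mean-squared displacement falls well below the free-walk value at the same step count.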

  2. LLNL Mercury Project Trinity Open Science Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, Shawn A.

    The Mercury Monte Carlo particle transport code is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. In the proposed Trinity Open Science calculations, I will investigate computer science aspects of the code which are relevant to convergence of the simulation quantities with increasing Monte Carlo particle counts.

  3. On the origin of the visible light responsible for proton dose measurement using plastic optical fibers

    NASA Astrophysics Data System (ADS)

    Darafsheh, Arash; Taleei, Reza; Kassaee, Alireza; Finlay, Jarod C.

    2017-03-01

    We experimentally and by means of Monte Carlo simulations investigated the origin of the visible signal responsible for proton therapy dose measurement using bare plastic optical fibers. Experimentally, the fiber optic probe, embedded in tissue-mimicking plastics, was irradiated with a proton beam produced by a proton therapy cyclotron and the luminescence spectroscopy was performed by a CCD-coupled spectrograph to analyze the emission spectrum of the fiber tip. Monte Carlo simulations were performed using FLUKA Monte Carlo code to stochastically simulate radiation transport, ionizing radiation dose deposition, and optical emission of Čerenkov radiation. The spectroscopic study of proton-irradiated plastic fibers showed a continuous spectrum with shape different from that of Čerenkov radiation. The Monte Carlo simulations confirmed that the amount of the generated Čerenkov light does not follow the radiation absorbed dose in a medium. Our results show that the origin of the optical signal responsible for the proton dose measurement using bare optical fibers is not Čerenkov radiation. Our results point toward a connection between the scintillation of the plastic material of the fiber and the origin of the signal responsible for dose measurement.

  4. Determination of output factor for 6 MV small photon beam: comparison between Monte Carlo simulation technique and microDiamond detector

    NASA Astrophysics Data System (ADS)

    Krongkietlearts, K.; Tangboonduangjit, P.; Paisangittisakul, N.

    2016-03-01

    In order to improve quality of life for cancer patients, radiation techniques are constantly evolving. In particular, two modern techniques, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), are quite promising. They comprise many small beams (beamlets) with various intensities to achieve the intended radiation dose to the tumor and minimal dose to the nearby normal tissue. This study investigates whether the microDiamond detector (PTW), a synthetic single crystal diamond detector, is suitable for small field output factor measurement. The results were compared with those measured by the stereotactic field detector (SFD) and with Monte Carlo simulation (EGSnrc/BEAMnrc/DOSXYZ). The calibration of the Monte Carlo simulation was done using the percentage depth dose and dose profile measured by the photon field detector (PFD) for the 10×10 cm2 field size at 100 cm SSD. The values obtained from the calculations and measurements are consistent to within 1%. The output factors obtained from the microDiamond detector were compared with those from the SFD and the Monte Carlo simulation, and the results show a difference of less than 2%.

  5. Experimental validation of a Monte Carlo proton therapy nozzle model incorporating magnetically steered protons.

    PubMed

    Peterson, S W; Polf, J; Bues, M; Ciangaru, G; Archambault, L; Beddar, S; Smith, A

    2009-05-21

    The purpose of this study is to validate the accuracy of a Monte Carlo calculation model of a proton magnetic beam scanning delivery nozzle developed using the Geant4 toolkit. The Monte Carlo model was used to produce depth dose and lateral profiles, which were compared to data measured in the clinical scanning treatment nozzle at several energies. Comparisons were also made between measured and simulated off-axis profiles to test the accuracy of the model's magnetic steering. Comparison of the 80% distal dose fall-off values for the measured and simulated depth dose profiles agreed to within 1 mm for the beam energies evaluated. Agreement of the full width at half maximum values for the measured and simulated lateral fluence profiles was within 1.3 mm for all energies. The position of measured and simulated spot positions for the magnetically steered beams agreed to within 0.7 mm of each other. Based on these results, we found that the Geant4 Monte Carlo model of the beam scanning nozzle has the ability to accurately predict depth dose profiles, lateral profiles perpendicular to the beam axis and magnetic steering of a proton beam during beam scanning proton therapy.

  6. Constraining physical parameters of ultra-fast outflows in PDS 456 with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Hagino, K.; Odaka, H.; Done, C.; Gandhi, P.; Takahashi, T.

    2014-07-01

    Deep absorption lines with the extremely high velocity of ~0.3c observed in PDS 456 spectra strongly indicate the existence of ultra-fast outflows (UFOs). However, the launching and acceleration mechanisms of UFOs are still uncertain. One possible way to address this is to constrain the physical parameters as a function of distance from the source. In order to study the spatial dependence of the parameters, it is essential to adopt 3-dimensional Monte Carlo simulations that treat radiation transfer in arbitrary geometry. We have developed a new simulation code for X-ray radiation reprocessed in AGN outflows. Our code implements radiative transfer in a 3-dimensional biconical disk wind geometry, based on the Monte Carlo simulation framework MONACO (Watanabe et al. 2006, Odaka et al. 2011). Our simulations reproduce the FeXXV and FeXXVI absorption features seen in the spectra. Broad Fe emission lines, which reflect the geometry and viewing angle, are also successfully reproduced. By comparing the simulated spectra with Suzaku data, we obtained constraints on the physical parameters. We discuss launching and acceleration mechanisms of UFOs in PDS 456 based on our analysis.

  7. Simulation of water solutions of Ni2+ at infinite dilution

    NASA Astrophysics Data System (ADS)

    Cordeiro, M. Natália D. S.; Ignaczak, Anna; Gomes, José A. N. F.

    1993-10-01

    A new ab initio pair potential is developed to describe the nickel-water interactions in Ni(II) aqueous solutions. Results of Monte Carlo simulations for the Ni(II)(H2O)200 system are presented for this pair potential with and without three-body classical polarization terms (the water-water interaction is described by the ab initio MCY potential). The structure of the solution around Ni(II) is discussed in terms of radial distribution functions, coordination numbers and thermal ellipsoids. The results show that the three-body terms have a non-negligible effect on the simulated solution. In fact, the experimental coordination number of six is reproduced with the full potential, while a higher value is predicted when the simple pairwise-additive potential is used. The equilibrium Ni-O distance for the first hydration shell is also dependent on the use of the three-body terms. Comparison of our distribution functions with those obtained by neutron-diffraction experiments shows reasonable quantitative agreement. Statistical pattern recognition analysis has also been applied to our simulations in order to better understand the local thermal motion of the water molecules around the metal ion. In this way, thermal ellipsoids have been computed (and graphically displayed) for each atom of the water molecules belonging to the Ni(II) first hydration shell. This analysis revealed that the twisting and bending motions are greater than the radial motion, and that the hydrogens have a higher mobility than the oxygens. In addition, a thermodynamic perturbation method has been incorporated in our Monte Carlo procedure in order to compute the free energy of hydration for the Ni(II) ion. Agreement between these results and the experimental ones is also sufficiently reasonable to demonstrate the feasibility of this new potential for the nickel-water interactions.

  8. Towards prediction of correlated material properties using quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas

    Correlated electron systems offer a richness of physics far beyond noninteracting systems. If we would like to pursue the dream of designer correlated materials, or, even to set a more modest goal, to explain in detail the properties and effective physics of known materials, then accurate simulation methods are required. Using modern computational resources, quantum Monte Carlo (QMC) techniques offer a way to directly simulate electron correlations. I will show some recent results on a few extremely challenging materials including the metal-insulator transition of VO2, the ground state of the doped cuprates, and the pressure dependence of magnetic properties in FeSe. By using a relatively simple implementation of QMC, at least some properties of these materials can be described truly from first principles, without any adjustable parameters. Using the QMC platform, we have developed a way of systematically deriving effective lattice models from the simulation. This procedure is particularly attractive for correlated electron systems because the QMC methods treat the one-body and many-body components of the wave function and Hamiltonian on completely equal footing. I will show some examples of using this downfolding technique and the high accuracy of QMC to connect our intuitive ideas about interacting electron systems with high fidelity simulations. The work in this presentation was supported in part by NSF DMR 1206242, the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number FG02-12ER46875, and the Center for Emergent Superconductivity, Department of Energy Frontier Research Center under Grant No. DEAC0298CH1088. Computing resources were provided by a Blue Waters Illinois grant and INCITE PhotSuper and SuperMatSim allocations.

  9. A study of microkinetic adjustments required to match shock wave experiments and Monte Carlo Direct Simulation for a wide Mach number range

    NASA Technical Reports Server (NTRS)

    Pham-Van-diep, Gerald C.; Muntz, E. Phillip; Erwin, Daniel A.

    1990-01-01

    Shock wave thickness predictions from Monte Carlo Direct Simulations, using differential scattering and the Maitland-Smith-Aziz interatomic potential, underpredict experiments as shock Mach numbers increase above about 4. Examination of several sources of data has indicated that at relatively high energies the repulsive portion of accepted potentials such as the Maitland-Smith-Aziz may be too steep. An Exponential-6 potential due to Ross, based on high energy molecular beam scattering data and shock velocity measurements in liquid argon, has been combined with the lower energy portion of the Maitland-Smith-Aziz potential. When this hybrid potential is used in Monte Carlo Direct Simulations, agreement with experiments is improved over the previous predictions using the pure Maitland-Smith-Aziz form.

  10. Coupled particle-in-cell and Monte Carlo transport modeling of intense radiographic sources

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Welch, D. R.; Oliver, B. V.; Clark, R. E.; Johnson, D. L.; Maenchen, J. E.; Menge, P. R.; Olson, C. L.; Rovang, D. C.

    2002-03-01

    Dose-rate calculations for intense electron-beam diodes using particle-in-cell (PIC) simulations along with Monte Carlo electron/photon transport calculations are presented. The electromagnetic PIC simulations are used to model the dynamic operation of the rod-pinch and immersed-B diodes. These simulations include algorithms for tracking electron scattering and energy loss in dense materials. The positions and momenta of photons created in these materials are recorded and separate Monte Carlo calculations are used to transport the photons to determine the dose in far-field detectors. These combined calculations are used to determine radiographer equations (dose scaling as a function of diode current and voltage) that are compared directly with measured dose rates obtained on the SABRE generator at Sandia National Laboratories.

  11. Accurate simulations of helium pick-up experiments using a rejection-free Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Dutra, Matthew; Hinde, Robert

    2018-04-01

    In this paper, we present Monte Carlo simulations of helium droplet pick-up experiments with the intention of developing a robust and accurate theoretical approach for interpreting experimental helium droplet calorimetry data. Our approach is capable of capturing the evaporative behavior of helium droplets following dopant acquisition, allowing for a more realistic description of the pick-up process. Furthermore, we circumvent the traditional assumption of bulk helium behavior by utilizing density functional calculations of the size-dependent helium droplet chemical potential. The results of this new Monte Carlo technique are compared to commonly used Poisson pick-up statistics for simulations that reflect a broad range of experimental parameters. We conclude by offering an assessment of both of these theoretical approaches in the context of our observed results.
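
    For comparison, the classical Poisson pick-up statistics mentioned above take only a few lines to simulate. The sketch below draws the number of dopant capture events per droplet under a fixed mean number of collisions, the fixed-cross-section assumption that the rejection-free method is designed to relax; the mean is an arbitrary illustrative value.

    ```python
    import math
    import random

    def pickup_fractions(mean_events=3.0, n_droplets=100000, seed=5):
        """Fraction of droplets capturing k dopants under Poisson pick-up
        statistics (fixed capture cross section, illustrative mean)."""
        rng = random.Random(seed)
        L = math.exp(-mean_events)
        counts = {}
        for _ in range(n_droplets):
            k, p = 0, 1.0
            while True:                  # Knuth's Poisson sampler
                p *= rng.random()
                if p <= L:
                    break
                k += 1
            counts[k] = counts.get(k, 0) + 1
        return {k: c / n_droplets for k, c in sorted(counts.items())}

    frac = pickup_fractions()
    print(frac)
    # compare with the analytic Poisson probabilities:
    print({k: math.exp(-3.0) * 3.0 ** k / math.factorial(k) for k in frac})
    ```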

  12. Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.

    PubMed Central

    Resat, H; Mezei, M

    1996-01-01

    The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations. PMID:8873992
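
    The essence of grand canonical Monte Carlo is the insertion/deletion move pair whose acceptance probabilities are fixed by the chemical potential. A minimal sketch for the trivial case of an ideal gas (no interaction energy, so the acceptance ratios reduce to ratios of the particle-number distribution) is shown below; the activity parameter zV is an illustrative value, and real GCMC of water adds interaction-energy terms to the same acceptance rules.

    ```python
    import random

    def gcmc_ideal_gas(zV=50.0, n_sweeps=200000, seed=6):
        """Grand canonical MC for an ideal gas. The stationary distribution
        is P(N) ~ (zV)^N / N!, where zV lumps exp(beta*mu)*V/lambda^3 into
        one parameter, so <N> should equal zV."""
        rng = random.Random(seed)
        N, total = 0, 0
        for _ in range(n_sweeps):
            if rng.random() < 0.5:                        # attempt insertion
                if rng.random() < min(1.0, zV / (N + 1)):
                    N += 1
            elif N > 0 and rng.random() < min(1.0, N / zV):
                N -= 1                                    # attempt deletion
            total += N
        return total / n_sweeps

    print(gcmc_ideal_gas())   # fluctuates around zV = 50
    ```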

  13. Extending rule-based methods to model molecular geometry and 3D model resolution.

    PubMed

    Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia

    2016-08-01

    Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.

  14. Modeling and Design of GaN High Electron Mobility Transistors and Hot Electron Transistors through Monte Carlo Particle-based Device Simulations

    NASA Astrophysics Data System (ADS)

    Soligo, Riccardo

    In this work, the insight provided by our sophisticated full band Monte Carlo simulator is used to analyze the behavior of state-of-the-art devices like GaN High Electron Mobility Transistors and Hot Electron Transistors. Chapter 1 is dedicated to the description of the simulation tool used to obtain the results shown in this work. Moreover, a separate section is dedicated to the setup of a procedure to validate the tunneling algorithm recently implemented in the simulator. Chapter 2 introduces High Electron Mobility Transistors (HEMTs), state-of-the-art devices characterized by highly nonlinear transport phenomena that require the use of advanced simulation methods. The techniques for device modeling are described and applied to a recent GaN HEMT, and they are validated with experimental measurements. The main characterization techniques are also described, including the original contribution provided by this work. Chapter 3 focuses on a popular technique to enhance HEMT performance: the down-scaling of the device dimensions. In particular, this chapter is dedicated to lateral scaling and the calculation of a limiting cutoff frequency for a device of vanishing length. Finally, Chapter 4 and Chapter 5 describe the modeling of Hot Electron Transistors (HETs). The simulation approach is validated by matching the current characteristics with the experimental ones before variations of the layouts are proposed to increase the current gain to values suitable for amplification. The frequency response of these layouts is calculated and modeled by a small signal circuit. For this purpose, a method to directly calculate the capacitance is developed, which provides a graphical picture of the capacitative phenomena that limit the frequency response of devices. In Chapter 5 the properties of the hot electrons are investigated for different injection energies, which are obtained by changing the layout of the emitter barrier. Moreover, the large signal characterization of the HET is shown for different layouts, where the collector barrier was scaled.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, A. Kyle, E-mail: kyle.jones@mdanderson.org

    Purpose: To evaluate the sensitivity of the diagnostic radiological index of protection (DRIP), used to quantify the protective value of radioprotective garments, to procedural factors in fluoroscopy, in an effort to determine an appropriate set of scatter-mimicking primary beams to be used in measuring the DRIP. Methods: Monte Carlo simulations were performed to determine the shape of the scattered x-ray spectra incident on the operator in different clinical fluoroscopy scenarios, including interventional radiology and interventional cardiology (IC). Two clinical simulations studied the sensitivity of the scattered spectrum to gantry angle and patient size, while technical factors were varied according to measured automatic dose rate control (ADRC) data. Factorial simulations studied the sensitivity of the scattered spectrum to gantry angle, field of view, patient size, and beam quality for constant technical factors. Average energy (E_avg) was the figure of merit used to condense the fluence in each energy bin to a single numerical index. Results: Beam quality had the strongest influence on the scattered spectrum in fluoroscopy. Many procedural factors affect the scattered spectrum indirectly through their effect on primary beam quality through ADRC, e.g., gantry angle and patient size. Lateral C-arm rotation, common in IC, increased the energy of the scattered spectrum, regardless of the direction of rotation. The effect of patient size on scattered radiation depended on ADRC characteristics, patient size, and procedure type. Conclusions: The scattered spectrum striking the operator in fluoroscopy is most strongly influenced by primary beam quality, particularly kV. Use cases for protective garments should be classified by typical procedural primary beam qualities, which are governed by the ADRC according to the impacts of patient size, anatomical location, and gantry angle.

  16. A Depolarisation Lidar Based Method for the Determination of Liquid-Cloud Microphysical Properties.

    NASA Astrophysics Data System (ADS)

    Donovan, D. P.; Klein Baltink, H.; Henzing, J. S.; De Roode, S. R.; Siebesma, P.

    2014-12-01

    The fact that polarisation lidars measure a multiple-scattering induced depolarisation signal in liquid clouds is well known. The depolarisation signal depends on the lidar characteristics (e.g. wavelength and field-of-view) as well as the cloud properties (e.g. liquid water content (LWC) and cloud droplet number concentration (CDNC)). Previous efforts seeking to use depolarisation information in a quantitative manner to retrieve cloud properties have been undertaken with, arguably, limited practical success. In this work we present a retrieval procedure applicable to clouds with (quasi-)linear LWC profiles and (quasi-)constant CDNC in the cloud-base region. Limiting the applicability of the procedure in this manner allows us to reduce the cloud variables to two parameters (namely the liquid water content lapse rate and the CDNC). This simplification, in turn, allows us to employ a robust optimal-estimation inversion using pre-computed look-up tables produced with lidar Monte Carlo multiple-scattering simulations. Here, we describe the theory behind the inversion procedure and apply it to simulated observations based on large-eddy simulation model output. The inversion procedure is then applied to actual depolarisation lidar data covering a range of cases taken from the Cabauw measurement site in the central Netherlands. The lidar results were then used to predict the corresponding cloud-base region radar reflectivities. In non-drizzling conditions, it was found that the lidar inversion results can be used to predict the observed radar reflectivities with an accuracy within the radar calibration uncertainty (2-3 dBZ). This result strongly supports the accuracy of the lidar inversion results. Results of a comparison between ground-based aerosol number concentration and lidar-derived CDNC are also presented. The results are seen to be consistent with previous studies based on aircraft-based in situ measurements.

  17. PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.

    1998-01-01

    PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform deterministic as well as probabilistic analyses to predict thermomechanical properties. This user's guide details the step-by-step procedure to create the input file and update/modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. The various options available in the code to simulate probabilistic material properties and quantify the sensitivity of the primitive random variables are described. Deterministic as well as probabilistic results are described using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996, and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM 4766, June 1997.

  18. Development of a Monte Carlo Simulation for APD-Based PET Detectors Using a Continuous Scintillating Crystal

    NASA Astrophysics Data System (ADS)

    Clowes, P.; Mccallum, S.; Welch, A.

    2006-10-01

    We are currently developing a multilayer avalanche photodiode (APD)-based detector for use in positron emission tomography (PET), which utilizes thin continuous crystals. In this paper, we developed a Monte Carlo-based simulation to aid in the design of such detectors. We measured the performance of a detector comprising a single thin continuous crystal (3.1 mm × 9.5 mm × 9.5 mm) of lutetium yttrium ortho-silicate (LYSO) and an APD array of 4×4 elements, each element 1.6 mm2 on a 2.3 mm pitch. We showed that a spatial resolution of better than 2.12 mm is achievable throughout the crystal provided that we adopt a statistics-based positioning (SBP) algorithm. We then used Monte Carlo simulation to model the behavior of the detector. The accuracy of the Monte Carlo simulation was verified by comparing measured and simulated parent datasets (PDS) for the SBP algorithm. These datasets consisted of data for point sources at 49 positions uniformly distributed over the detector area. We also calculated the noise in the detector circuit and verified this value by measurement. The noise value was included in the simulation. We show that the performance of the simulation closely matches the measured performance. The simulations were extended to investigate the effect of different noise levels on positioning accuracy. This paper showed that if modest improvements could be made in the circuit noise then positioning accuracy would be greatly improved. In summary, we have developed a model that can be used to simulate the performance of a variety of APD-based continuous crystal PET detectors.
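
    Statistics-based positioning of the kind described above amounts to a maximum-likelihood lookup against the parent dataset. The sketch below is a schematic, self-contained version: the "parent dataset" here is synthetic (uniform random channel means), not measured calibration data, and a Gaussian log-likelihood is assumed for the channel responses.

    ```python
    import math
    import random

    rng = random.Random(10)

    # Synthetic parent dataset: mean response of a 4x4 APD array at 49
    # calibration positions (all values invented for illustration).
    n_pos, n_ch = 49, 16
    pds_means = [[rng.uniform(0.5, 2.0) for _ in range(n_ch)] for _ in range(n_pos)]
    pds_vars = [[0.05 + 0.1 * m for m in means] for means in pds_means]

    def sbp_locate(event):
        """Pick the calibration position whose stored mean response
        maximizes a Gaussian log-likelihood for the measured event."""
        def loglik(means, variances):
            return -sum((e - m) ** 2 / (2 * v) + 0.5 * math.log(v)
                        for e, m, v in zip(event, means, variances))
        return max(range(n_pos), key=lambda p: loglik(pds_means[p], pds_vars[p]))

    # Simulate an event from position 17 and check that SBP recovers it.
    true_pos = 17
    event = [rng.gauss(m, v ** 0.5)
             for m, v in zip(pds_means[true_pos], pds_vars[true_pos])]
    print(sbp_locate(event))
    ```

    Raising the variances in such a toy model mimics the circuit-noise effect studied in the paper: positioning accuracy degrades as the likelihood surfaces for neighbouring positions overlap.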

  19. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.

  20. Does standard Monte Carlo give justice to instantons?

    NASA Astrophysics Data System (ADS)

    Fucito, F.; Solomon, S.

    1984-01-01

    The results of the standard local Monte Carlo are changed by offering instantons as candidates in the Metropolis procedure. We also define an O(3) topological charge with no contribution from planar dislocations. The RG behavior is still not recovered.
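
    The idea of enlarging the Metropolis candidate set with nonlocal configurations can be illustrated in a toy setting. The sketch below is not the paper's O(3) model: it samples a particle in a double well, where an occasional symmetric well-to-well jump (the "instanton-like" candidate) supplements small local moves; all parameters are invented.

    ```python
    import math
    import random

    def metropolis_double_well(n_steps=100000, beta=5.0, p_jump=0.1, seed=7):
        """Metropolis sampling of exp(-beta*V) with V(x) = (x^2 - 1)^2.
        With probability p_jump a nonlocal candidate x -> -x is offered;
        it is symmetric, so the usual acceptance rule still applies."""
        rng = random.Random(seed)
        V = lambda x: (x * x - 1.0) ** 2
        x, in_left_well = 1.0, 0
        for _ in range(n_steps):
            if rng.random() < p_jump:
                x_new = -x                          # instanton-like jump between wells
            else:
                x_new = x + rng.uniform(-0.1, 0.1)  # standard local move
            if rng.random() < math.exp(-beta * (V(x_new) - V(x))):
                x = x_new
            if x < 0:
                in_left_well += 1
        return in_left_well / n_steps   # ~0.5 once both wells are sampled

    print(metropolis_double_well())             # with nonlocal candidates
    print(metropolis_double_well(p_jump=0.0))   # local moves only: slow tunnelling
    ```

    Comparing the two runs shows the point of the construction: purely local updates cross the barrier rarely, while the enlarged candidate set equilibrates the two sectors quickly.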

  1. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
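
    The comparison between moment propagation and Monte Carlo described above can be reproduced on any smooth function. The sketch below uses an invented two-variable stand-in for the CFD output (not the paper's Euler code) and first-order sensitivity derivatives taken by central differences.

    ```python
    import math
    import random

    def f(x1, x2):
        """Stand-in for a CFD output as a function of two input variables."""
        return x1 ** 2 * math.sin(x2) + 3.0 * x1

    def moment_method(mu, sigma, h=1e-5):
        """First-order approximate statistical moment method:
        mean ~ f(mu), var ~ sum_i (df/dx_i)^2 * sigma_i^2."""
        d1 = (f(mu[0] + h, mu[1]) - f(mu[0] - h, mu[1])) / (2 * h)
        d2 = (f(mu[0], mu[1] + h) - f(mu[0], mu[1] - h)) / (2 * h)
        return f(*mu), d1 ** 2 * sigma[0] ** 2 + d2 ** 2 * sigma[1] ** 2

    def monte_carlo(mu, sigma, n=200000, seed=8):
        """Brute-force check with independent normal input samples."""
        rng = random.Random(seed)
        ys = [f(rng.gauss(mu[0], sigma[0]), rng.gauss(mu[1], sigma[1]))
              for _ in range(n)]
        m = sum(ys) / n
        return m, sum((y - m) ** 2 for y in ys) / (n - 1)

    mu, sigma = (1.0, 0.5), (0.05, 0.05)
    print(moment_method(mu, sigma))
    print(monte_carlo(mu, sigma))
    ```

    For small input standard deviations the two estimates agree closely, which is exactly the regime in which the paper finds the approximate method valid.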

  2. Influence of safety measures on the risks of transporting dangerous goods through road tunnels.

    PubMed

    Saccomanno, Frank; Haastrup, Palle

    2002-12-01

    Quantitative risk assessment (QRA) models are used to estimate the risks of transporting dangerous goods and to assess the merits of introducing alternative risk reduction measures for different transportation scenarios and assumptions. A comprehensive QRA model recently was developed in Europe for application to road tunnels. This model can assess the merits of a limited number of "native safety measures." In this article, we introduce a procedure for extending its scope to include the treatment of a number of important "nonnative safety measures" of interest to tunnel operators and decisionmakers. Nonnative safety measures were not included in the original model specification. The suggested procedure makes use of expert judgment and Monte Carlo simulation methods to model uncertainty in the revised risk estimates. The results of a case study application are presented that involve the risks of transporting a given volume of flammable liquid through a 10-km road tunnel.

  3. Monte Carlo Simulation of the Rapid Crystallization of Bismuth-Doped Silicon

    NASA Technical Reports Server (NTRS)

    Jackson, Kenneth A.; Gilmer, George H.; Temkin, Dmitri E.

    1995-01-01

    In this Letter we report Ising model simulations of the growth of alloys which predict quite different behavior near and far from equilibrium. Our simulations reproduce the phenomenon which has been termed 'solute trapping,' where concentrations of solute, which are far in excess of the equilibrium concentrations, are observed in the crystal after rapid crystallization. This phenomenon plays an important role in many processes which involve first order phase changes which take place under conditions far from equilibrium. The underlying physical basis for it has not been understood, but these Monte Carlo simulations provide a powerful means for investigating it.

  4. [Dosimetric evaluation of eye lens shields in computed tomography examinations--measurements and Monte Carlo simulations].

    PubMed

    Wulff, Jorg; Keil, Boris; Auvanis, Diyala; Heverhagen, Johannes T; Klose, Klaus Jochen; Zink, Klemens

    2008-01-01

    The present study investigates eye lens shields of different compositions for use in computed tomography examinations. Measurements with thermoluminescent dosimeters and a simple cylindrical water-filled phantom were performed, as well as Monte Carlo simulations with an equivalent geometry. Besides a conventional shield made of bismuth-coated latex, a new shield with a mixture of metallic components was analyzed. This new material leads to an increased dose reduction compared to the bismuth shield. Measured and Monte Carlo simulated dose reductions are in good agreement and amount to 34% for the bismuth shield and 46% for the new material. For the simulations the EGSnrc code system was used, and a new application, CTDOSPP, was developed to simulate the computed tomography examination. The investigations show that satisfying agreement between simulation and measurement with the chosen geometries could only be achieved when the transport of secondary electrons was accounted for in the simulation. The amount of radiation scattered by the protector through fluorescent photons was analyzed; it is larger for the new material owing to the smaller atomic numbers of its metallic components.

  5. E Pluribus Analysis: Applying a Superforecasting Methodology to the Detection of Homegrown Violence

    DTIC Science & Technology

    2018-03-01

    actor violence and a set of predefined decision-making protocols. This research included running four simulations using the Monte Carlo technique.

  6. Joint simulation of regional areas burned in Canadian forest fires: A Markov Chain Monte Carlo approach

    Treesearch

    Steen Magnussen

    2009-01-01

    Areas burned annually in 29 Canadian forest fire regions show a patchy and irregular correlation structure that significantly influences the distribution of annual totals for Canada and for groups of regions. A binary Markov chain Monte Carlo (MCMC) model is constructed for the joint simulation of regional areas burned in forest fires. For each year the MCMC...

  7. McStas 1.1: a tool for building neutron Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lefmann, K.; Nielsen, K.; Tennant, A.; Lake, B.

    2000-03-01

    McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron scattering instrument. The method compares well with the analytical calculations of Popovici.

  8. Estimation of whole-body radiation exposure from brachytherapy for oral cancer using a Monte Carlo simulation

    PubMed Central

    Ozaki, Y.; Kaida, A.; Miura, M.; Nakagawa, K.; Toda, K.; Yoshimura, R.; Sumi, Y.; Kurabayashi, T.

    2017-01-01

    Abstract Early stage oral cancer can be cured with oral brachytherapy, but whole-body radiation exposure status has not been previously studied. Recently, the International Commission on Radiological Protection (ICRP) recommended the use of ICRP phantoms to estimate radiation exposure from external and internal radiation sources. In this study, we used a Monte Carlo simulation with ICRP phantoms to estimate whole-body exposure from oral brachytherapy. We used the Particle and Heavy Ion Transport code System (PHITS) to model oral brachytherapy with 192Ir hairpins and 198Au grains and to perform a Monte Carlo simulation on the ICRP adult reference computational phantoms. To confirm the simulations, we also computed local dose distributions from these small sources and compared them with the results from the Oncentra manual Low Dose Rate Treatment Planning (mLDR) software, which is used in day-to-day clinical practice. We successfully obtained data on the absorbed dose for each organ in males and females. Sex-averaged equivalent doses were 0.547 and 0.710 Sv with 192Ir hairpins and 198Au grains, respectively. Simulation with PHITS was reliable when compared with an alternative computational technique using mLDR software. We conclude that the absorbed dose for each organ and the whole-body exposure from oral brachytherapy can be estimated with a Monte Carlo simulation using PHITS on ICRP reference phantoms. Effective doses for patients with oral cancer were obtained. PMID:28339846

  9. Comparison of Three Methods of Calculation, Experimental and Monte Carlo Simulation in Investigation of Organ Doses (Thyroid, Sternum, Cervical Vertebra) in Radioiodine Therapy

    PubMed Central

    Shahbazi-Gahrouei, Daryoush; Ayat, Saba

    2012-01-01

    Radioiodine therapy is an effective method for treating thyroid carcinoma, but it has some effects on normal tissues, hence dosimetry of vital organs is important to weigh the risks and benefits of this method. The aim of this study is to measure the absorbed doses of important organs by Monte Carlo N-Particle (MCNP) simulation and to compare the results of different methods of dosimetry by performing paired t-tests. To calculate the absorbed doses of the thyroid, sternum, and cervical vertebra using the MCNP code, the *F8 tally was used. Organs were simulated by using a neck phantom and the Medical Internal Radiation Dosimetry (MIRD) method. Finally, the results of MCNP, MIRD, and thermoluminescent dosimeter (TLD) measurements were compared using SPSS software. The absorbed doses obtained by Monte Carlo simulation for 100, 150, and 175 mCi of administered 131I were 388.0, 427.9, and 444.8 cGy for the thyroid, 208.7, 230.1, and 239.3 cGy for the sternum, and 272.1, 299.9, and 312.1 cGy for the cervical vertebra. The paired t-test results were 0.24 for comparing TLD dosimetry and MIRD calculation, 0.80 for MCNP simulation and MIRD, and 0.19 for TLD and MCNP. The results showed no significant differences among the three methods: Monte Carlo simulation, MIRD calculation, and direct experimental dosimetry using TLD. PMID:23717806
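
    For illustration only, the paired comparison reported above can be reproduced with SciPy's paired t-test; the dose pairs below are placeholders, not the study's TLD/MCNP data.

      import numpy as np
      from scipy import stats

      # Hypothetical paired organ-dose estimates (cGy) from two methods;
      # the values are illustrative, not the study's measurements.
      tld  = np.array([390.2, 425.1, 447.0, 206.9, 232.4, 237.8])
      mcnp = np.array([388.0, 427.9, 444.8, 208.7, 230.1, 239.3])

      t, p = stats.ttest_rel(tld, mcnp)   # paired t-test on matched doses
      print(f"t = {t:.3f}, p = {p:.3f}")  # p > 0.05: no significant difference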

  10. DNA strand breaks induced by electrons simulated with Nanodosimetry Monte Carlo Simulation Code: NASIC.

    PubMed

    Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong

    2015-09-01

    The method of Monte Carlo simulation is a powerful tool for investigating the details of radiation-induced biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, a pre-chemical module, a chemical module, a geometric module and a DNA damage module. The physical module can simulate the physical tracks of low-energy electrons in liquid water event by event. More than one set of inelastic cross sections was calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damage induced by the energy depositions of the electrons and the indirect damage induced by the radiolytic chemical species were calculated. The parameters were adjusted to make the simulation results agree with the experimental results. In this paper, the influence of the inelastic cross sections and of the vibrational excitation reaction on the parameters and on the DNA strand break yields was studied. Further work on NASIC is underway.

  11. Monte Carlo simulations of precise timekeeping in the Milstar communication satellite system

    NASA Technical Reports Server (NTRS)

    Camparo, James C.; Frueholz, R. P.

    1995-01-01

    The Milstar communications satellite system will provide secure antijam communication capabilities for DOD operations into the next century. In order to accomplish this task, the Milstar system will employ precise timekeeping on its satellites and at its ground control stations. The constellation will consist of four satellites in geosynchronous orbit, each carrying a set of four rubidium (Rb) atomic clocks. Several times a day, during normal operation, the Mission Control Element (MCE) will collect timing information from the constellation, and after several days use this information to update the time and frequency of the satellite clocks. The MCE will maintain precise time with a cesium (Cs) atomic clock, synchronized to UTC(USNO) via a GPS receiver. We have developed a Monte Carlo simulation of Milstar's space segment timekeeping. The simulation includes the effects of: uplink/downlink time transfer noise; satellite crosslink time transfer noise; satellite diurnal temperature variations; satellite and ground station atomic clock noise; and also quantization limits regarding satellite time and frequency corrections. The Monte Carlo simulation capability has proven to be an invaluable tool in assessing the performance characteristics of various timekeeping algorithms proposed for Milstar, and also in highlighting the timekeeping capabilities of the system. Here, we provide a brief overview of the basic Milstar timekeeping architecture as it is presently envisioned. We then describe the Monte Carlo simulation of space segment timekeeping, and provide examples of the simulation's efficacy in resolving timekeeping issues.

  12. Radiation doses in volume-of-interest breast computed tomography—A Monte Carlo simulation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Chao-Jen, E-mail: cjlai3711@gmail.com; Zhong, Yuncheng; Yi, Ying

    2015-06-15

    Purpose: Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region’s visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Methods: Electron–Gamma-Shower system code-based Monte Carlo codes were used to simulate breast CT. The Monte Carlo codes were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full cone x-ray source was then collimated to simulate half cone beam x-rays to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms and to estimate breast doses with full field scans. Hemi-ellipsoidal homogeneous phantoms, 13 cm in diameter and 10 cm long, were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fractions (VGFs) were used to investigate the influence on breast dose. The simulated half cone beam x-rays were then collimated to a narrow x-ray beam with a 2.5 × 2.5 cm² field of view at the isocenter plane to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique. Results: The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical measurements were 0.97 ± 0.03 and 1.10 ± 0.13, respectively, indicating that the accuracy of the Monte Carlo simulation was adequate. The normalized AGD with VOI field scans was substantially reduced, by a factor of about 2 over the VOI region and by a factor of 18 over the entire breast, for both 25% and 50% VGF simulated breasts compared with the normalized AGD with full field scans. The normalized AGD for the VOI breast CT technique can be kept the same as or lower than that for a full field scan with the exposure level for the VOI field scan increased by a factor of as much as 12. Conclusions: The authors’ Monte Carlo estimates of normalized AGDs for the VOI breast CT technique show that the technique can markedly increase the dose to the VOI, and thus the visibility of that region, without increasing the dose to the breast as a whole. The results of this investigation should be helpful for those interested in using the VOI breast CT technique to image small calcifications while keeping breast dose in check.

  13. Radiation doses in volume-of-interest breast computed tomography—A Monte Carlo simulation study

    PubMed Central

    Lai, Chao-Jen; Zhong, Yuncheng; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.

    2015-01-01

    Purpose: Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region’s visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Methods: Electron–Gamma-Shower system code-based Monte Carlo codes were used to simulate breast CT. The Monte Carlo codes were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full cone x-ray source was then collimated to simulate half cone beam x-rays to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms and to estimate breast doses with full field scans. Hemi-ellipsoidal homogeneous phantoms, 13 cm in diameter and 10 cm long, were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fractions (VGFs) were used to investigate the influence on breast dose. The simulated half cone beam x-rays were then collimated to a narrow x-ray beam with a 2.5 × 2.5 cm2 field of view at the isocenter plane to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique. Results: The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical measurements were 0.97 ± 0.03 and 1.10 ± 0.13, respectively, indicating that the accuracy of the Monte Carlo simulation was adequate. The normalized AGD with VOI field scans was substantially reduced, by a factor of about 2 over the VOI region and by a factor of 18 over the entire breast, for both 25% and 50% VGF simulated breasts compared with the normalized AGD with full field scans. The normalized AGD for the VOI breast CT technique can be kept the same as or lower than that for a full field scan with the exposure level for the VOI field scan increased by a factor of as much as 12. Conclusions: The authors’ Monte Carlo estimates of normalized AGDs for the VOI breast CT technique show that the technique can markedly increase the dose to the VOI, and thus the visibility of that region, without increasing the dose to the breast as a whole. The results of this investigation should be helpful for those interested in using the VOI breast CT technique to image small calcifications while keeping breast dose in check. PMID:26127058

  14. The proton therapy nozzles at Samsung Medical Center: A Monte Carlo simulation study using TOPAS

    NASA Astrophysics Data System (ADS)

    Chung, Kwangzoo; Kim, Jinsung; Kim, Dae-Hyun; Ahn, Sunghwan; Han, Youngyih

    2015-07-01

    To expedite the commissioning process of the proton therapy system at Samsung Medical Center (SMC), we have developed a Monte Carlo simulation model of the proton therapy nozzles using TOol for PArticle Simulation (TOPAS). At the SMC proton therapy center, we have two gantry rooms with different types of nozzles: a multi-purpose nozzle and a dedicated scanning nozzle. Each nozzle has been modeled in detail following the geometry information provided by the manufacturer, Sumitomo Heavy Industries, Ltd. For this purpose, novel features of TOPAS, such as the time feature and the ridge filter class, have been used, and the appropriate physics models for proton nozzle simulation have been defined. Dosimetric properties, such as the percent depth dose curve, the spread-out Bragg peak (SOBP), and the beam spot size, have been simulated and verified against measured beam data. Beyond the Monte Carlo nozzle modeling, we have developed an interface between TOPAS and the treatment planning system (TPS), RayStation. A radiotherapy (RT) plan exported from the TPS is interpreted by the interface and translated into TOPAS input text. The developed Monte Carlo nozzle model can be used to estimate non-beam performance, such as the neutron background of the nozzles. Furthermore, the nozzle model can be used to study the mechanical optimization of the nozzle design.

  15. MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes

    NASA Astrophysics Data System (ADS)

    Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.

    2017-11-01

    The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and in modelling and simulation applied to radiation protection and dosimetry research. For its first inter-comparison task, the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate the simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm2. This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using the MCNPX, MCNP6, EGSnrc and Penelope Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
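
    A small sketch of how the two quality indices can be extracted from a simulated depth-dose tally; the PDD-to-TPR conversion line is the empirical photon-beam relation quoted in IAEA TRS-398, and the depth-dose array below is a placeholder, not the paper's data.

      import numpy as np

      def quality_indices(depths_cm, dose):
          # PDD20,10: ratio of dose at 20 cm depth to dose at 10 cm depth.
          d10 = np.interp(10.0, depths_cm, dose)
          d20 = np.interp(20.0, depths_cm, dose)
          pdd20_10 = d20 / d10
          # Empirical photon-beam conversion quoted in IAEA TRS-398:
          tpr20_10 = 1.2661 * pdd20_10 - 0.0595
          return pdd20_10, tpr20_10

      # Usage: feed the depth axis and a (simulated or measured) dose tally.
      depths = np.linspace(0.0, 30.0, 301)
      dose = np.exp(-0.05 * depths)   # placeholder for a tallied PDD curve
      print(quality_indices(depths, dose))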

  16. Risk assessment predictions of open dumping area after closure using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Pauzi, Nur Irfah Mohd; Radhi, Mohd Shahril Mat; Omar, Husaini

    2017-10-01

    Currently, there are many abandoned open dumping areas that were left without any proper mitigation measures. These open dumping areas could pose serious hazards to humans and pollute the environment. The objective of this paper is to determine the risk at an open dumping area after it has been closed, using the Monte Carlo simulation method. The risk assessment exercise was conducted at the Kuala Lumpur dumping area. The rapid urbanisation of Kuala Lumpur, coupled with the increase in population, has led to an increase in waste generation and hence to more dumping/landfill areas in Kuala Lumpur. The first stage of this study involved assessment of the dumping area and sample collection, followed by measurement of the settlement of the dumping area using an oedometer. The settlement risk is predicted using the Monte Carlo simulation method, which calculates both the risk and the long-term settlement. The model simulation results show that the risk level of the Kuala Lumpur open dumping area ranges between Level III and Level IV, i.e. between medium risk and high risk, with predicted settlements (ΔH) between 3 and 7 meters. Since the risk is between medium and high, mitigation measures are required, such as replacing the top waste soil with new sandy gravel soil. This will increase the strength of the soil and reduce the settlement.

  17. Fast quantum Monte Carlo on a GPU

    NASA Astrophysics Data System (ADS)

    Lutsyshyn, Y.

    2015-02-01

    We present a scheme for the parallelization of quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.

  18. Modulated phases in a three-dimensional Maier-Saupe model with competing interactions

    NASA Astrophysics Data System (ADS)

    Bienzobaz, P. F.; Xu, Na; Sandvik, Anders W.

    2017-07-01

    This work is dedicated to the study of the discrete version of the Maier-Saupe model in the presence of competing interactions. The competition between interactions favoring different orientational ordering produces a rich phase diagram including modulated phases. Using a mean-field approach and Monte Carlo simulations, we show that the proposed model exhibits isotropic and nematic phases and also a series of modulated phases that meet at a multicritical point, a Lifshitz point. Though the Monte Carlo and mean-field phase diagrams show some quantitative disagreements, the Monte Carlo simulations corroborate the general behavior found within the mean-field approximation.

  19. Monte Carlo simulation of proton track structure in biological matter

    DOE PAGES

    Quinto, Michele A.; Monti, Juan M.; Weck, Philippe F.; ...

    2017-05-25

    Here, understanding the radiation-induced effects at the cellular and subcellular levels remains crucial for predicting the evolution of irradiated biological matter. In this context, Monte Carlo track-structure simulations have rapidly emerged among the most suitable and powerful tools. However, most existing Monte Carlo track-structure codes rely heavily on the use of semi-empirical cross sections as well as water as a surrogate for biological matter. In the current work, we report on the up-to-date version of our homemade Monte Carlo code TILDA-V – devoted to the modeling of the slowing-down of 10 keV–100 MeV protons in both water and DNA – where the main collisional processes are described by means of an extensive set of ab initio differential and total cross sections.

  20. Monte Carlo simulation of proton track structure in biological matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinto, Michele A.; Monti, Juan M.; Weck, Philippe F.

    Here, understanding the radiation-induced effects at the cellular and subcellular levels remains crucial for predicting the evolution of irradiated biological matter. In this context, Monte Carlo track-structure simulations have rapidly emerged among the most suitable and powerful tools. However, most existing Monte Carlo track-structure codes rely heavily on the use of semi-empirical cross sections as well as water as a surrogate for biological matter. In the current work, we report on the up-to-date version of our homemade Monte Carlo code TILDA-V – devoted to the modeling of the slowing-down of 10 keV–100 MeV protons in both water and DNA – where the main collisional processes are described by means of an extensive set of ab initio differential and total cross sections.

  1. Depletion Calculations Based on Perturbations. Application to the Study of a Rep-Like Assembly at Beginning of Cycle with TRIPOLI-4®.

    NASA Astrophysics Data System (ADS)

    Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh

    2014-06-01

    For several years, Monte Carlo burnup/depletion codes have appeared that couple a Monte Carlo code, which simulates the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine three-dimensional effects and to get rid of the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid these repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: the different burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then discuss the implementation of this method in the TRIPOLI-4® code, as well as the precise calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion calculations by nearly an order of magnitude.
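
    As a hedged aside, the core idea of correlated sampling, reusing one set of sampled histories and reweighting them by the likelihood ratio under a perturbed cross section, can be shown on a one-dimensional transmission toy problem (not the TRIPOLI-4® implementation):

      import numpy as np

      rng = np.random.default_rng(3)
      n, L = 100_000, 2.0          # histories, slab thickness
      sigma, sigma_p = 1.0, 1.1    # nominal and perturbed total cross sections

      # Nominal simulation: sample each particle's free path once.
      x = rng.exponential(1.0 / sigma, size=n)
      transmitted = x > L
      print("nominal  :", transmitted.mean())        # ~exp(-sigma*L)

      # Correlated sampling: reuse the SAME histories and reweight each one
      # by the likelihood ratio of its free path under the perturbed sigma.
      w = (sigma_p / sigma) * np.exp(-(sigma_p - sigma) * x)
      print("perturbed:", np.mean(w * transmitted))  # ~exp(-sigma_p*L)
      print("analytic :", np.exp(-sigma_p * L))

    Because the perturbed estimate uses the very same random histories, its statistical noise is strongly correlated with the nominal one, which is what makes small perturbations resolvable without a fresh Monte Carlo run.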

  2. Probabilistic flood inundation mapping at ungauged streams due to roughness coefficient uncertainty in hydraulic modelling

    NASA Astrophysics Data System (ADS)

    Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.

    2017-04-01

    Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by roughness coefficient values into hydraulic models used for flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM, minimising input data uncertainty and improving the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble-count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated for their accuracy in representing the estimated roughness values. Finally, Latin Hypercube Sampling has been used to generate different sets of Manning roughness values, and flood inundation probability maps have been created through Monte Carlo simulations. Historical flood extent data from an extreme historical flash flood event are used for validation of the method. The calibration process is based on binary wet-dry reasoning with the Median Absolute Percentage Error as evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
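
    A hedged sketch of the sampling step described above, using SciPy's Latin Hypercube implementation; the two roughness zones, bounds and distribution parameters are illustrative assumptions, not the paper's fitted values.

      import numpy as np
      from scipy.stats import qmc, truncnorm

      # Latin Hypercube Sampling of Manning's n for two roughness zones
      # (channel, floodplain); all distribution parameters are illustrative.
      sampler = qmc.LatinHypercube(d=2, seed=42)
      u = sampler.random(n=500)          # stratified uniforms in [0, 1)

      lo, hi = np.array([0.025, 0.05]), np.array([0.10, 0.20])
      mu, sd = np.array([0.045, 0.09]), np.array([0.015, 0.03])
      a, b = (lo - mu) / sd, (hi - mu) / sd
      # Map the uniforms through fitted marginals (truncated normals here).
      n_sets = truncnorm.ppf(u, a, b, loc=mu, scale=sd)   # shape (500, 2)
      # Each row is one roughness set for a hydraulic-model run; stacking the
      # resulting wet/dry grids yields the inundation probability map.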

  3. Dynamic 99mTc-MAG3 renography: images for quality control obtained by combining pharmacokinetic modelling, an anthropomorphic computer phantom and Monte Carlo simulated scintillation camera imaging

    NASA Astrophysics Data System (ADS)

    Brolin, Gustav; Sjögreen Gleisner, Katarina; Ljungberg, Michael

    2013-05-01

    In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for 99mTc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
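
    A minimal sketch of the compartmental idea, assuming a made-up three-compartment chain with illustrative rate constants (not the paper's MAG3 model): linear kinetics conserve the total tracer while producing a time-activity curve for each structure.

      import numpy as np

      # Rate matrix for a made-up plasma -> kidney -> bladder chain (1/min).
      k_pk, k_kb = 0.04, 0.08
      A = np.array([[-k_pk,   0.0, 0.0],
                    [ k_pk, -k_kb, 0.0],
                    [  0.0,  k_kb, 0.0]])

      t = np.linspace(0.0, 40.0, 401)    # minutes
      y = np.zeros((len(t), 3))
      y[0] = [1.0, 0.0, 0.0]             # injected activity starts in plasma
      dt = t[1] - t[0]
      for i in range(1, len(t)):         # explicit Euler; fine at this step
          y[i] = y[i - 1] + dt * (A @ y[i - 1])

      # Columns of y are the time-activity curves assigned to the phantom
      # structures; linear kinetics conserve the total amount of tracer.
      assert np.allclose(y.sum(axis=1), 1.0)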

  4. Monte Carlo simulations of the XY vectorial Blume-Emery-Griffiths model in multilayer films for 3He-4He mixtures

    NASA Astrophysics Data System (ADS)

    Santos-Filho, J. B.; Plascak, J. A.

    2017-09-01

    The XY vectorial generalization of the Blume-Emery-Griffiths (XY-VBEG) model, which is suitable for the study of 3He-4He mixtures, is treated in a thin-film structure and its thermodynamic properties are analyzed as a function of the film thickness. We employ extensive, up-to-date Monte Carlo simulations consisting of hybrid algorithms that combine lattice-gas moves with Metropolis, Wolff, and super-relaxation procedures to overcome the critical slowing down and the correlations among different spin configurations of the system. We also make use of single-histogram techniques to obtain the behavior of the thermodynamic quantities close to the corresponding transition temperatures. Thin films of the XY-VBEG model present a quite rich phase diagram with Berezinskii-Kosterlitz-Thouless (BKT) transitions, BKT endpoints, and isolated critical points. As one varies the impurity concentration along the layers, and in the limit of infinite film thickness, there is a coalescence of the BKT transition endpoint and the isolated critical point into a single, unique tricritical point. In addition, when mimicking the behavior of thin films of 3He-4He mixtures, one finds that the concentration of 3He atoms decreases from the outer layers to the inner layers of the film, meaning that the superfluid particles tend to locate in the bulk of the system.

  5. Global minimum-energy structure and spectroscopic properties of I2(*-) x n H2O clusters: a Monte Carlo simulated annealing study.

    PubMed

    Pathak, Arup Kumar; Mukherjee, Tulsi; Maity, Dilip Kumar

    2010-01-18

    The vibrational (IR and Raman) and photoelectron spectral properties of hydrated iodine-dimer radical-anion clusters, I(2)(*-) x n H(2)O (n=1-10), are presented. Several initial guess structures are considered for each size of cluster to locate the global minimum-energy structure by applying a Monte Carlo simulated annealing procedure including spin-orbit interaction. In the Raman spectrum, hydration reduces the intensity of the I-I stretching band but enhances the intensity of the O-H stretching band of water. Raman spectra of more highly hydrated clusters appear to be simpler than the corresponding IR spectra. Vibrational bands due to simultaneous stretching vibrations of O-H bonds in a cyclic water network are observed for I(2)(*-) x n H(2)O clusters with n > or = 3. The vertical detachment energy (VDE) profile shows stepwise saturation that indicates closing of the geometrical shell in the hydrated clusters on addition of every four water molecules. The calculated VDE of finite-size small hydrated clusters is extrapolated to evaluate the bulk VDE value of I(2)(*-) in aqueous solution as 7.6 eV at the CCSD(T) level of theory. Structure and spectroscopic properties of these hydrated clusters are compared with those of hydrated clusters of Cl(2)(*-) and Br(2)(*-).
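
    A generic sketch of the Monte Carlo simulated annealing loop used for such global searches, on a toy one-dimensional energy surface; the real application evaluates quantum-chemical cluster energies with spin-orbit corrections, which is far beyond this illustration.

      import numpy as np

      rng = np.random.default_rng(4)

      def energy(x):
          # Stand-in for the cluster energy surface (rugged 1-D test function).
          return 0.1 * x ** 2 + np.sin(3.0 * x)

      def anneal(x, t0=2.0, t_min=1e-3, cooling=0.999, step=0.5):
          # Metropolis simulated annealing with a geometric cooling schedule.
          e, t = energy(x), t0
          while t > t_min:
              y = x + step * rng.normal()
              e_y = energy(y)
              # Downhill moves are always accepted; uphill moves with
              # Boltzmann probability exp(-dE/T) at the current temperature.
              if e_y < e or rng.random() < np.exp(-(e_y - e) / t):
                  x, e = y, e_y
              t *= cooling
          return x, e

      print(anneal(x=5.0))  # repeated restarts mimic several initial guesses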

  6. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    NASA Astrophysics Data System (ADS)

    España, S; Herraiz, J L; Vicente, E; Vaquero, J J; Desco, M; Udias, J M

    2009-03-01

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.

  7. A novel Kinetic Monte Carlo algorithm for Non-Equilibrium Simulations

    NASA Astrophysics Data System (ADS)

    Jha, Prateek; Kuzovkov, Vladimir; Grzybowski, Bartosz; Olvera de La Cruz, Monica

    2012-02-01

    We have developed an off-lattice kinetic Monte Carlo simulation scheme for reaction-diffusion problems in soft matter systems. The transition probabilities in the Monte Carlo scheme are taken to be identical to the transition rates in a renormalized master equation of the diffusion process and match those of the Glauber dynamics of the Ising model. Our scheme provides several advantages over the Brownian dynamics technique for non-equilibrium simulations. Since particle displacements are accepted or rejected in a Monte Carlo fashion, as opposed to moving particles according to a stochastic equation of motion, nonphysical movements (e.g., violations of a hard-core assumption) are not possible: such moves have zero acceptance. Further, the absence of a stochastic "noise" term resolves the computational difficulties associated with generating statistically independent trajectories with definitive mean properties. Finally, since the timestep is independent of the magnitude of the interaction forces, much longer timesteps can be employed than in Brownian dynamics. We discuss the applications of this scheme to the dynamic self-assembly of photo-switchable nanoparticles and to dynamical problems in polymeric systems.
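
    A hedged sketch of one such off-lattice move, assuming a user-supplied configuration energy: a hard-core overlap gives the candidate zero acceptance, and otherwise the displacement is accepted with the Glauber (heat-bath) probability. Names and parameters are illustrative, not the authors' code.

      import numpy as np

      rng = np.random.default_rng(5)

      def glauber_move(pos, i, energy, beta=1.0, step=0.1, r_core=1.0):
          # Attempt to displace particle i of an (N, dim) configuration.
          trial = pos.copy()
          trial[i] += step * rng.normal(size=pos.shape[1])
          # Hard-core overlap check: overlapping candidates have zero
          # acceptance, so nonphysical configurations never occur.
          d = np.linalg.norm(trial - trial[i], axis=1)
          d[i] = np.inf                    # ignore self-distance
          if np.any(d < r_core):
              return pos
          dE = energy(trial) - energy(pos)
          # Glauber (heat-bath) acceptance, matching master-equation rates.
          if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):
              return trial
          return pos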

  8. Studies of Low Luminosity Active Galactic Nuclei with Monte Carlo and Magnetohydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Hilburn, Guy Louis

    Results from several studies are presented which detail explorations of the physical and spectral properties of low-luminosity active galactic nuclei. An initial Sagittarius A* general relativistic magnetohydrodynamic simulation and Monte Carlo radiation transport model suggests accretion rate changes as the dominant flaring mechanism. A similar study of M87 introduces new methods to the Monte Carlo model for increased consistency in highly energetic sources. Again, accretion rate variation seems the most appropriate explanation for spectral transients. To more closely resolve the mechanisms of particle energization in active galactic nuclei accretion disks, a series of localized shearing-box simulations explores the effect of numerical resolution on the development of current sheets. A particular focus on numerically describing converged current sheet formation will provide new methods for the consideration of turbulence in accretion disks.

  9. Photon migration through a turbid slab described by a model based on diffusion approximation. II. Comparison with Monte Carlo results.

    PubMed

    Martelli, F; Contini, D; Taddeucci, A; Zaccanti, G

    1997-07-01

    In our companion paper we presented a model to describe photon migration through a diffusing slab. The model, developed for a homogeneous slab, is based on the diffusion approximation and is able to take into account reflection at the boundaries resulting from the refractive index mismatch. In this paper the predictions of the model are compared with solutions of the radiative transfer equation obtained by Monte Carlo simulations in order to determine the applicability limits of the approximated theory in different physical conditions. A fitting procedure, carried out with the optical properties as fitting parameters, is used to check the application of the model to the inverse problem. The results show that significant errors can be made if the effect of the refractive index mismatch is not properly taken into account. Errors are more important when measurements of transmittance are used. The effects of using a receiver with a limited angular field of view and the angular distribution of the radiation that emerges from the slab have also been investigated.

  10. Addition of luminescence process in Monte Carlo simulation to precisely estimate the light emitted from water during proton and carbon-ion irradiation.

    PubMed

    Yabe, Takuya; Sasano, Makoto; Hirano, Yoshiyuki; Toshito, Toshiyuki; Akagi, Takashi; Yamashita, Tomohiro; Hayashi, Masateru; Azuma, Tetsushi; Sakamoto, Yusuku; Komori, Masataka; Yamamoto, Seiichi

    2018-06-20

    Although luminescence of water at energies lower than the Cerenkov-light threshold has been observed during proton and carbon-ion irradiation, the phenomenon has not previously been implemented in Monte Carlo simulations. Simulations that omit it therefore misrepresent the physical phenomena underlying optical imaging of water during proton and carbon-ion irradiation. To solve this problem, as well as to clarify the light production of the luminescence of water, we modified a Monte Carlo simulation code to include the light production from the luminescence of water and compared the results with experimental luminescence images of water. We used GEANT4 to simulate the light emitted from water during proton and carbon-ion irradiation. The light production from the luminescence of water was modelled using the scintillation process in GEANT4, while Cerenkov light from secondary electrons and prompt gamma photons in water was also included in the simulation. The modified simulation produced depth profiles similar to the measured data for both protons and carbon ions. When a light production of 0.1 photons/MeV was used for the luminescence of water, the simulated depth profiles matched the measured results best for both protons and carbon ions, compared with smaller or larger numbers of photons/MeV. We could thus reproduce the experimental depth profiles with GEANT4 by assuming light production from the luminescence of water. Our results confirm that including the luminescence of water in Monte Carlo simulations is indispensable for calculating the precise light distribution in water during proton and carbon-ion irradiation.

  11. Monte Carlo simulations versus experimental measurements in a small animal PET system. A comparison in the NEMA NU 4-2008 framework

    NASA Astrophysics Data System (ADS)

    Popota, F. D.; Aguiar, P.; España, S.; Lois, C.; Udias, J. M.; Ros, D.; Pavia, J.; Gispert, J. D.

    2015-01-01

    In this work a comparison between experimental and simulated data using GATE and PeneloPET Monte Carlo simulation packages is presented. All simulated setups, as well as the experimental measurements, followed exactly the guidelines of the NEMA NU 4-2008 standards using the microPET R4 scanner. The comparison was focused on spatial resolution, sensitivity, scatter fraction and counting rates performance. Both GATE and PeneloPET showed reasonable agreement for the spatial resolution when compared to experimental measurements, although they lead to slight underestimations for the points close to the edge. High accuracy was obtained between experiments and simulations of the system’s sensitivity and scatter fraction for an energy window of 350-650 keV, as well as for the counting rate simulations. The latter was the most complicated test to perform since each code demands different specifications for the characterization of the system’s dead time. Although simulated and experimental results were in excellent agreement for both simulation codes, PeneloPET demanded more information about the behavior of the real data acquisition system. To our knowledge, this constitutes the first validation of these Monte Carlo codes for the full NEMA NU 4-2008 standards for small animal PET imaging systems.

  12. Monte Carlo simulations versus experimental measurements in a small animal PET system. A comparison in the NEMA NU 4-2008 framework.

    PubMed

    Popota, F D; Aguiar, P; España, S; Lois, C; Udias, J M; Ros, D; Pavia, J; Gispert, J D

    2015-01-07

    In this work a comparison between experimental and simulated data using GATE and PeneloPET Monte Carlo simulation packages is presented. All simulated setups, as well as the experimental measurements, followed exactly the guidelines of the NEMA NU 4-2008 standards using the microPET R4 scanner. The comparison was focused on spatial resolution, sensitivity, scatter fraction and counting rates performance. Both GATE and PeneloPET showed reasonable agreement for the spatial resolution when compared to experimental measurements, although they lead to slight underestimations for the points close to the edge. High accuracy was obtained between experiments and simulations of the system's sensitivity and scatter fraction for an energy window of 350-650 keV, as well as for the counting rate simulations. The latter was the most complicated test to perform since each code demands different specifications for the characterization of the system's dead time. Although simulated and experimental results were in excellent agreement for both simulation codes, PeneloPET demanded more information about the behavior of the real data acquisition system. To our knowledge, this constitutes the first validation of these Monte Carlo codes for the full NEMA NU 4-2008 standards for small animal PET imaging systems.

  13. A Monte-Carlo maplet for the study of the optical properties of biological tissues

    NASA Astrophysics Data System (ADS)

    Yip, Man Ho; Carvalho, M. J.

    2007-12-01

    Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature.
    Program summary
    Program title: MonteCarloMaplet
    Catalogue identifier: ADZU_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3251
    No. of bytes in distributed program, including test data, etc.: 296 465
    Distribution format: tar.gz
    Programming language: Maple 10
    Computer: Acer Aspire 5610 (any running Maple 10)
    Operating system: Windows XP professional (any running Maple 10)
    Classification: 3.1, 5
    Nature of problem: Simulate the transport of radiation in biological tissues.
    Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; the Maple library routine for random number generation is used [Maple 10 User Manual, © Maplesoft, a division of Waterloo Maple Inc., 2005].
    Restrictions: Running time increases rapidly with the number of photons used in the simulation.
    Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required, after which the worksheet runs without problem; however, some of the windows of the maplet may still appear distorted.
    Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.

  14. Role of Boundary Conditions in Monte Carlo Simulation of MEMS Devices

    NASA Technical Reports Server (NTRS)

    Nance, Robert P.; Hash, David B.; Hassan, H. A.

    1997-01-01

    A study is made of the issues surrounding prediction of microchannel flows using the direct simulation Monte Carlo method. This investigation includes the introduction and use of new inflow and outflow boundary conditions suitable for subsonic flows. A series of test simulations for a moderate-size microchannel indicates that a high degree of grid under-resolution in the streamwise direction may be tolerated without loss of accuracy. In addition, the results demonstrate the importance of physically correct boundary conditions, as well as possibilities for reducing the time associated with the transient phase of a simulation. These results imply that simulations of longer ducts may be more feasible than previously envisioned.

  15. Benchmarking polarizable molecular dynamics simulations of aqueous sodium hydroxide by diffraction measurements.

    PubMed

    Vácha, Robert; Megyes, Tunde; Bakó, Imre; Pusztai, László; Jungwirth, Pavel

    2009-04-23

    Results from molecular dynamics simulations of aqueous sodium hydroxide at varying concentrations have been compared with experimental structural data. First, the polarizable POL3 model was verified against neutron scattering using a reverse Monte Carlo fitting procedure. It was found to be competitive with other simple water models and well suited for combination with hydroxide ions. Second, a set of four polarizable models of OH- was developed by fitting against accurate ab initio calculations for small hydroxide-water clusters. All of these models were found to provide similar results that robustly agree with structural data from X-ray scattering. The present force field thus represents a significant improvement over previously tested nonpolarizable potentials. Although it cannot in principle capture proton hopping and can only approximately describe the charge delocalization within the immediate solvent shell around OH-, it provides structural data that are almost entirely consistent with data obtained from scattering experiments.

  16. The distributed production system of the SuperB project: description and results

    NASA Astrophysics Data System (ADS)

    Brown, D.; Corvo, M.; Di Simone, A.; Fella, A.; Luppi, E.; Paoloni, E.; Stroili, R.; Tomassetti, L.

    2011-12-01

    The SuperB experiment needs large samples of Monte Carlo simulated events in order to finalize the detector design and to estimate the data analysis performance. The requirements are beyond the capabilities of a single computing farm, so a distributed production model capable of exploiting the existing worldwide HEP distributed computing infrastructure is needed. In this paper we describe the set of tools that have been developed to manage the production of the required simulated events. The production of events follows three main phases: distribution of input data files to the remote site Storage Elements (SE); job submission, via the SuperB GANGA interface, to all available remote sites; and transfer of output files to the CNAF repository. The job workflow includes procedures for consistency checking, monitoring, data handling and bookkeeping. A replication mechanism allows the job output to be stored on the local site SE. Results from the 2010 official productions are reported.

  17. Channel Training for Analog FDD Repeaters: Optimal Estimators and Cramér-Rao Bounds

    NASA Astrophysics Data System (ADS)

    Wesemann, Stefan; Marzetta, Thomas L.

    2017-12-01

    For frequency division duplex channels, a simple pilot loop-back procedure has been proposed that allows the estimation of the UL & DL channels at an antenna array without relying on any digital signal processing at the terminal side. For this scheme, we derive the maximum likelihood (ML) estimators for the UL & DL channel subspaces, formulate the corresponding Cramér-Rao bounds and show the asymptotic efficiency of both (SVD-based) estimators by means of Monte Carlo simulations. In addition, we illustrate how to compute the underlying (rank-1) SVD with quadratic time complexity by employing the power iteration method. To enable power control for the data transmission, knowledge of the channel gains is needed. Assuming that the UL & DL channels have on average the same gain, we formulate the ML estimator for the channel norm and illustrate its robustness against strong noise by means of simulations.
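
    A hedged sketch of the power-iteration idea mentioned above: the dominant singular triplet of an m×n observation matrix is found with O(mn) work per sweep, so no full SVD is required. The matrix construction and variable names below are illustrative, not the paper's signal model.

      import numpy as np

      def rank1_svd(A, iters=50):
          # Dominant singular triplet by power iteration: O(m*n) per sweep,
          # i.e. quadratic complexity, with no full SVD required.
          rng = np.random.default_rng(6)
          v = rng.normal(size=A.shape[1])
          v /= np.linalg.norm(v)
          for _ in range(iters):
              u = A @ v
              u /= np.linalg.norm(u)
              v = A.conj().T @ u
              s = np.linalg.norm(v)
              v /= s
          return u, s, v

      # Illustrative use: a noisy rank-1 "loop-back" observation matrix.
      rng = np.random.default_rng(7)
      h_dl, h_ul = rng.normal(size=8), rng.normal(size=8)
      A = np.outer(h_dl, h_ul) + 0.05 * rng.normal(size=(8, 8))
      u, s, v = rank1_svd(A)   # u, v estimate the DL/UL channel directions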

  18. Binding energies and modelling of nuclei in semiclassical simulations

    NASA Astrophysics Data System (ADS)

    Pérez-García, M. Ángeles; Tsushima, K.; Valcarce, A.

    2008-03-01

    We study the binding energies of spin-isospin saturated nuclei with nucleon number 8⩽A⩽100 in semiclassical Monte Carlo many-body simulations. The model Hamiltonian consists of (i) the nucleon kinetic energy, (ii) a nucleon-nucleon interaction potential, and (iii) an effective Pauli potential which depends on density. The basic ingredients of the nucleon-nucleon potential are a short-range repulsion and a medium-range attraction. Our results demonstrate that one can always obtain the empirical binding energies for a set of nuclei by introducing a proper density-dependent Pauli potential in terms of a single variable, the nucleon number A. The present work shows that in the suggested procedure there is a delicate counterbalance of kinetic and potential energy contributions that allows a good reproduction of the experimental nuclear binding energies. This type of calculation may be of interest for the further reproduction of other properties of nuclei, such as radii, and also of exotic nuclei.

  19. Direct simulation Monte Carlo method for the Uehling-Uhlenbeck-Boltzmann equation.

    PubMed

    Garcia, Alejandro L; Wagner, Wolfgang

    2003-11-01

    In this paper we describe a direct simulation Monte Carlo algorithm for the Uehling-Uhlenbeck-Boltzmann equation in terms of Markov processes. This provides a unifying framework for both the classical Boltzmann case as well as the Fermi-Dirac and Bose-Einstein cases. We establish the foundation of the algorithm by demonstrating its link to the kinetic equation. By numerical experiments we study its sensitivity to the number of simulation particles and to the discretization of the velocity space, when approximating the steady-state distribution.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, Todd M.

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest-neighbor connectivities between phosphate polyhedra for random, alternating and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedral connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.
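
    As a hedged toy illustration of such a discrete bond model (not the reported code), the snippet below skews the like/unlike linkage statistics of a two-species tetrahedron mixture by a relative bond energy difference, recovering the random case when the difference vanishes:

      import numpy as np

      rng = np.random.default_rng(8)

      def unlike_link_fraction(delta_e, n_bonds=100_000, x=0.5, kT=1.0):
          # Fraction of unlike (A-B) tetrahedron linkages in a toy discrete
          # bond model; delta_e is the relative bond energy difference.
          w_unlike = 2.0 * x * (1.0 - x) * np.exp(-delta_e / kT)
          w_like = x ** 2 + (1.0 - x) ** 2
          p_unlike = w_unlike / (w_unlike + w_like)
          return (rng.random(n_bonds) < p_unlike).mean()

      # delta_e < 0 favours alternating bonds, > 0 favours clustering, and
      # 0 reproduces the purely random (statistical) distribution.
      for dE in (-2.0, 0.0, 2.0):
          print(dE, unlike_link_fraction(dE))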
