NASA Astrophysics Data System (ADS)
Gardner, Robin P.; Xu, Libai
2009-10-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
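The iterative structure of the MCLLS approach lends itself to a compact numerical sketch. The following is a minimal illustration, assuming the Monte Carlo-generated library spectra are available as columns of a matrix; the generate_libraries stand-in for the Monte Carlo forward step and all names are hypothetical, not taken from the CEAR codes.

```python
# Minimal sketch of the linear library least-squares (LLS) step and the
# surrounding MCLLS iteration. Names and data are illustrative only.
import numpy as np

def lls_fit(sample_spectrum, library_spectra):
    """Fit a measured spectrum as a linear combination of library spectra.

    sample_spectrum : (n_channels,) measured pulse-height spectrum
    library_spectra : (n_channels, n_components) one column per element/background
    Returns the least-squares amounts of each library component.
    """
    amounts, *_ = np.linalg.lstsq(library_spectra, sample_spectrum, rcond=None)
    return amounts

def mclls(sample_spectrum, generate_libraries, composition, tol=1e-3):
    # Iterate: regenerate libraries at the current composition (the Monte
    # Carlo forward step, a placeholder here) until the LLS result agrees
    # with the composition used to generate the libraries.
    for _ in range(20):
        libs = generate_libraries(composition)
        new = lls_fit(sample_spectrum, libs)
        if np.max(np.abs(new - composition)) < tol:
            break
        composition = new
    return composition
```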
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1974-01-01
Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were created by adding pseudo-random, normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
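As a sketch of this kind of simulated experiment, the following assumes a cubic candidate model, a known population surface, and a crude coefficient-threshold deletion rule; all names and parameter values are illustrative.

```python
# Monte Carlo replications: add normal errors to population values, fit,
# delete small terms, and score the reduced model against the true surface.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 9)
X = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate model terms
beta_true = np.array([1.0, 0.5, 0.0, 0.0])             # population model
y_true = X @ beta_true

errors = []
for _ in range(1000):
    y_obs = y_true + rng.normal(0, 0.2, x.size)        # simulated experiment
    beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    beta[np.abs(beta) < 0.15] = 0.0                    # crude deletion strategy
    errors.append(np.mean((X @ beta - y_true) ** 2))
print("mean squared prediction error:", np.mean(errors))
```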
NASA Astrophysics Data System (ADS)
Darafsheh, Arash; Taleei, Reza; Kassaee, Alireza; Finlay, Jarod C.
2017-03-01
We investigated, both experimentally and by means of Monte Carlo simulations, the origin of the visible signal responsible for proton therapy dose measurement using bare plastic optical fibers. Experimentally, the fiber optic probe, embedded in tissue-mimicking plastics, was irradiated with a proton beam produced by a proton therapy cyclotron, and luminescence spectroscopy was performed with a CCD-coupled spectrograph to analyze the emission spectrum of the fiber tip. Monte Carlo simulations were performed using the FLUKA Monte Carlo code to stochastically simulate radiation transport, ionizing radiation dose deposition, and optical emission of Čerenkov radiation. The spectroscopic study of proton-irradiated plastic fibers showed a continuous spectrum with a shape different from that of Čerenkov radiation. The Monte Carlo simulations confirmed that the amount of generated Čerenkov light does not follow the radiation absorbed dose in a medium. Our results show that the origin of the optical signal responsible for proton dose measurement using bare optical fibers is not Čerenkov radiation; they point instead toward a connection between the scintillation of the plastic material of the fiber and the origin of the signal responsible for dose measurement.
NASA Astrophysics Data System (ADS)
Chatterjee, S.; Bakshi, A. K.; Tripathy, S. P.
2010-09-01
A response matrix for a CaSO4:Dy-based neutron dosimeter was generated using the Monte Carlo code FLUKA in the energy range from thermal to 20 MeV for a set of eight Bonner spheres of diameter 3-12″, including the bare one. The response of the neutron dosimeter was measured for the above set of spheres for a 241Am-Be neutron source covered with 2 mm of lead. An analytical expression for the response function was devised as a function of sphere mass. Using the Frascati Unfolding Iteration Tool (FRUIT) unfolding code, the neutron spectrum of 241Am-Be was unfolded and compared with the standard IAEA spectrum.
Diagnosing Undersampling Biases in Monte Carlo Eigenvalue and Flux Tally Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M.; Rearden, Bradley T.; Marshall, William J.
2017-02-08
Here, this study focuses on understanding the phenomenon in Monte Carlo simulations known as undersampling, in which Monte Carlo tally estimates may not encounter a sufficient number of particles during each generation to obtain unbiased tally estimates. Steady-state Monte Carlo simulations were performed using the KENO Monte Carlo tools within the SCALE code system for models of several burnup credit applications with varying degrees of spatial and isotopic complexity, and the incidence and impact of undersampling on eigenvalue and flux estimates were examined. Using an inadequate number of particle histories in each generation was found to produce a maximum bias of ~100 pcm in eigenvalue estimates and biases that exceeded 10% in fuel pin flux tally estimates. Having quantified the potential magnitude of undersampling biases in eigenvalue and flux tally estimates in these systems, this study then investigated whether Markov chain Monte Carlo convergence metrics could be integrated into Monte Carlo simulations to predict the onset and magnitude of undersampling biases. Five potential metrics for identifying undersampling biases were implemented in the SCALE code system and evaluated for their ability to predict undersampling biases by comparing the test metric scores with the observed undersampling biases. Of the five convergence metrics that were investigated, three (the Heidelberger-Welch relative half-width, the Gelman-Rubin $\hat{R}_c$ diagnostic, and tally entropy) showed the potential to accurately predict the behavior of undersampling biases in the responses examined.
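One of the three metrics singled out above, the Gelman-Rubin diagnostic, is straightforward to compute from independent tally chains. A minimal sketch with illustrative shapes, not connected to the SCALE implementation:

```python
# Gelman-Rubin R-hat applied to generation-by-generation tally estimates
# from several independent Monte Carlo chains.
import numpy as np

def gelman_rubin(chains):
    """chains: (m, n) array holding m independent chains of n tally samples."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)               # values near 1 suggest convergence
```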
Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…
A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.
ERIC Educational Resources Information Center
Glas, Cees A. W.; Meijer, Rob R.
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
NASA Astrophysics Data System (ADS)
Sunil, C.; Tyagi, Mohit; Biju, K.; Shanbhag, A. A.; Bandyopadhyay, T.
2015-12-01
The scarcity and high cost of 3He have spurred the use of various detectors for neutron monitoring. A new lithium yttrium borate scintillator developed at BARC has been studied for its use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real-time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm lead layer introduced to enhance gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am-Be neutron source, shows promise for use as a rem counter.
Jones, Matthew L; Dyer, Reesha; Clarke, Nigel; Groves, Chris
2014-10-14
Kinetic Monte Carlo simulations are used to examine the effect of high-energy, 'hot' delocalised charge transfer (HCT) states for donor:acceptor and mixed:aggregate blends, the latter relating to polymer:fullerene photovoltaic devices. Increased fullerene aggregation is shown to enhance charge generation and short-circuit device current, largely due to the increased production of HCT states at the aggregate interface. However, the instances where HCT states are predicted to give internal quantum efficiencies in the region of 50% do not correspond to the HCT delocalisation or electron mobility measured in experiments. These data therefore suggest that HCT states are not the primary cause of high quantum efficiencies in some polymer:fullerene OPVs. Instead, it is argued that HCT states are responsible for the fast charge generation seen in spectroscopy, but that regional variations in energy levels are the cause of long-term, efficient free-charge generation.
Characteristic evaluation of a Lithium-6 loaded neutron coincidence spectrometer.
Hayashi, M; Kaku, D; Watanabe, Y; Sagara, K
2007-01-01
Characteristics of a (6)Li-loaded neutron coincidence spectrometer were investigated through both measurements and Monte Carlo simulations. The spectrometer consists of three (6)Li-glass scintillators embedded in a liquid organic scintillator BC-501A, which can selectively detect neutrons that deposit their total energy in the BC-501A, using a coincidence signal generated from the capture of thermalised neutrons in the (6)Li-glass scintillators. The relative efficiency and the energy response were measured using 4.7, 7.2 and 9.0 MeV monoenergetic neutrons. The measurements were compared with Monte Carlo calculations performed by combining the neutron transport code PHITS and the scintillator response calculation code SCINFUL. The experimental light-output spectra agreed well in shape with the calculated ones, and the energy dependence of the detection efficiency was reproduced by the calculation. The response matrices for 1-10 MeV neutrons were finally obtained.
Proposal of a method for evaluating tsunami risk using response-surface methodology
NASA Astrophysics Data System (ADS)
Fukutani, Y.
2017-12-01
Information on probabilistic tsunami inundation hazards is needed to define and evaluate tsunami risk. Several methods for calculating these hazards have been proposed (e.g. Løvholt et al. (2012), Thio (2012), Fukutani et al. (2014), Goda et al. (2015)). However, these methods are computationally expensive, since they require multiple tsunami numerical simulations, and therefore lack versatility. In this study, we propose a simpler method for tsunami risk evaluation using response-surface methodology. Kotani et al. (2016) proposed an evaluation method for the probabilistic distribution of tsunami wave height using response-surface methodology. We expanded their study and developed a probabilistic distribution of tsunami inundation depth. We set the depth (x1) and the slip (x2) of an earthquake fault as explanatory variables and tsunami inundation depth (y) as the object variable. Tsunami risk could then be evaluated by conducting a Monte Carlo simulation, assuming that the generation probability of an earthquake follows a Poisson distribution, that the probability distribution of tsunami inundation depth follows the distribution derived from the response surface, and that the damage probability of a target follows a log-normal distribution. We applied the proposed method to a wood building located on the coast of Tokyo Bay. We implemented a regression analysis based on the results of 25 tsunami numerical calculations and developed a response surface, defined as y = a·x1 + b·x2 + c (a = 0.2615, b = 3.1763, c = -1.1802). We assumed appropriate probabilistic distributions for earthquake generation, inundation height, and vulnerability. Based on these distributions, we conducted Monte Carlo simulations spanning 1,000,000 years. We found that the expected damage probability of the studied wood building is 22.5%, assuming that an earthquake occurs. The proposed method is therefore a useful and simple way to evaluate tsunami risk using a response surface and Monte Carlo simulation without conducting multiple tsunami numerical simulations.
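The risk integral described above can be sketched compactly. The following assumes placeholder ranges for fault depth and slip, a placeholder Poisson rate, and placeholder lognormal fragility parameters; only the response-surface coefficients are taken from the text.

```python
# Monte Carlo tsunami risk sketch: Poisson event occurrence, response-surface
# inundation depth, and a lognormal fragility curve for damage probability.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
a, b, c = 0.2615, 3.1763, -1.1802          # response-surface coefficients
years, rate = 1_000_000, 1 / 500           # assumed Poisson occurrence rate

n_events = rng.poisson(rate * years)
x1 = rng.uniform(5, 40, n_events)          # fault depth [km], assumed range
x2 = rng.uniform(2, 10, n_events)          # fault slip [m], assumed range
depth = np.clip(a * x1 + b * x2 + c, 1e-9, None)   # inundation depth [m]

median, beta = 2.0, 0.5                    # placeholder fragility parameters
p_damage = norm.cdf(np.log(depth / median) / beta)
print("expected damage probability given an event:", p_damage.mean())
```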
MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhold, M.E.; Baker, M.C.
1999-07-25
The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.
Modeling human tracking error in several different anti-tank systems
NASA Technical Reports Server (NTRS)
Kleinman, D. L.
1981-01-01
An optimal control model for generating time histories of human tracking errors in antitank systems is outlined. Monte Carlo simulations of human operator responses for three Army antitank systems are compared. System/manipulator dependent data comparisons reflecting human operator limitations in perceiving displayed quantities and executing intended control motions are presented. Motor noise parameters are also discussed.
Evaluating average and atypical response in radiation effects simulations
NASA Astrophysics Data System (ADS)
Weller, R. A.; Sternberg, A. L.; Massengill, L. W.; Schrimpf, R. D.; Fleetwood, D. M.
2003-12-01
We examine the limits of performing single-event simulations using pre-averaged radiation events. Geant4 simulations show the necessity, for future devices, to supplement current methods with ensemble averaging of device-level responses to physically realistic radiation events. Initial Monte Carlo simulations have generated a significant number of extremal events in local energy deposition. These simulations strongly suggest that proton strikes of sufficient energy, even those that initiate purely electronic interactions, can initiate device response capable in principle of producing single event upset or microdose damage in highly scaled devices.
Physical Principle for Generation of Randomness
NASA Technical Reports Server (NTRS)
Zak, Michail
2009-01-01
A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)
Verification of unfold error estimates in the UFO code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Biggs, F.
Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation), and 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
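The Monte Carlo error estimate described above follows a simple pattern: perturb, re-unfold, take the spread. A sketch under assumed names, with ordinary least squares standing in for the actual UFO unfold algorithm:

```python
# Brute-force propagation of 5% Gaussian data imprecision through an unfold.
import numpy as np

def mc_unfold_error(data, response, n_sets=100, sigma_frac=0.05, seed=0):
    """data: (n_channels,) measurements; response: (n_channels, n_bins)
    tabulated response functions. Returns the mean unfolded spectrum and
    its channel-wise standard deviation over the random data sets."""
    rng = np.random.default_rng(seed)
    unfolds = []
    for _ in range(n_sets):
        noisy = data * (1 + sigma_frac * rng.standard_normal(data.size))
        spectrum, *_ = np.linalg.lstsq(response, noisy, rcond=None)
        unfolds.append(spectrum)
    unfolds = np.array(unfolds)
    return unfolds.mean(axis=0), unfolds.std(axis=0, ddof=1)
```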
A Monte Carlo method using octree structure in photon and electron transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogawa, K.; Maeda, S.
Most of the early Monte Carlo calculations in medical physics were used to calculate absorbed dose distributions, detector responses, and efficiencies. Recently, data acquisition in Single Photon Emission CT (SPECT) has been simulated by a Monte Carlo method to evaluate scatter photons generated in a human body and a collimator. Monte Carlo simulations in SPECT data acquisition are generally based on the transport of photons only, because the photons being simulated are low energy and bremsstrahlung production by the generated electrons is therefore negligible. Since the transport calculation of photons without electrons is much simpler than that with electrons, it is possible to accomplish high-speed simulation in a simple object with one medium. Here, object description is important for performing photon and/or electron transport with a Monte Carlo method efficiently. The authors propose a new description method using an octree representation of an object. Even if the boundaries of each medium are represented accurately, high-speed calculation of photon transport can be accomplished because the number of voxels is much smaller than in the voxel-based approach, which represents an object by a union of voxels of the same size. This Monte Carlo code using the octree representation of an object first establishes the simulation geometry by reading an octree string, which is produced by forming an octree structure from a set of serial sections of the object before the simulation; it then transports photons in this geometry. Using the code, the user just prepares a set of serial sections for the object in which he or she wants to simulate photon trajectories, and the simulation runs automatically on the suboptimal geometry simplified by the octree representation, without the optimal geometry having to be constructed by hand.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatzidakis, Stylianos; Greulich, Christopher
A cosmic ray Muon Flexible Framework for Spectral GENeration for Monte Carlo Applications (MUFFSgenMC) has been developed to support state-of-the-art cosmic ray muon tomographic applications. The flexible framework allows for easy and fast creation of source terms for popular Monte Carlo applications like GEANT4 and MCNP. This code framework simplifies the process of simulations used for cosmic ray muon tomography.
Quantum interference and Monte Carlo simulations of multiparticle production
NASA Astrophysics Data System (ADS)
Bialas, A.; Krzywicki, A.
1995-02-01
We show that the effects of quantum interference can be implemented in Monte Carlo generators by modelling the generalized Wigner functions. A specific prescription for an appropriate modification of the weights of events produced by standard generators is proposed.
Response of LaBr3(Ce) scintillators to 2.5 MeV fusion neutrons.
Cazzaniga, C; Nocente, M; Tardocchi, M; Croci, G; Giacomelli, L; Angelone, M; Pillon, M; Villari, S; Weller, A; Petrizzi, L; Gorini, G
2013-12-01
Measurements of the response of LaBr3(Ce) to 2.5 MeV neutrons have been carried out at the Frascati Neutron Generator and at tokamak facilities with deuterium plasmas. The observed spectrum has been interpreted by means of a Monte Carlo model. It is found that the main contributor to the measured response is neutron inelastic scattering on (79)Br, (81)Br, and (139)La. An extrapolation of the count rate response to 14 MeV neutrons from deuterium-tritium plasmas is also presented. The results are of relevance for the design of γ-ray diagnostics of fusion burning plasmas.
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics-informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applied back to physics, Boltzmann machines are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of Boltzmann machines can even give rise to different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
PEPSI — a Monte Carlo generator for polarized leptoproduction
NASA Astrophysics Data System (ADS)
Mankiewicz, L.; Schäfer, A.; Veltri, M.
1992-09-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross section at first order in α_s. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.
EGRET High Energy Capability and Multiwavelength Flare Studies and Solar Flare Proton Spectra
NASA Technical Reports Server (NTRS)
Chupp, Edward L.
1997-01-01
UNH was assigned the responsibility to use their accelerator neutron measurements to verify the TASC response function and to modify the TASC fitting program to include a high energy neutron contribution. Direct accelerator-based measurements by UNH of the energy-dependent efficiencies for detecting neutrons with energies from 36 to 720 MeV in NaI were compared with Monte Carlo TASC calculations. The calculated TASC efficiencies are somewhat lower (by about 20%) than the accelerator results in the energy range 70-300 MeV. The measured energy-loss spectrum for 207 MeV neutron interactions in NaI was compared with the Monte Carlo response for 200 MeV neutrons in the TASC, indicating good agreement. Based on this agreement, the simulation was considered to be sufficiently accurate to generate a neutron response library to be used by UNH in modifying the TASC fitting program to include a neutron component in the flare spectrum modeling. TASC energy-loss data on the 1991 June 11 flare were transferred to UNH. An appendix is also included: "Gamma-rays and neutrons as a probe of flare proton spectra: the solar flare of 11 June 1991."
Evaluation and comparison of absorbed dose for electron beams by LiF and diamond dosimeters
NASA Astrophysics Data System (ADS)
Mosia, G. J.; Chamberlain, A. C.
2007-09-01
The absorbed dose response of LiF and diamond thermoluminescent dosimeters (TLDs), calibrated in 60Co γ-rays, has been determined using the MCNP4B Monte Carlo code system in mono-energetic megavoltage electron beams from 5 to 20 MeV. The dose responses were evaluated against those published by other investigators, and the responses of the two dosimeters were compared to establish whether any relation exists between them. The dosimeters were irradiated in a water phantom with the centre of their top surfaces (0.32 × 0.32 cm²) placed at d_max, perpendicular to the radiation beam on the central axis. For the LiF TLD, dose responses ranged from 0.945±0.017 to 0.997±0.011. For the diamond TLD, the dose response ranged from 0.940±0.017 to 1.018±0.011. To correct the dose responses of both dosimeters, energy correction factors were generated from the dose response results of both TLDs. For the LiF TLD, these correction factors ranged from 1.003 up to 1.058, and for the diamond TLD the factors ranged from 0.982 up to 1.064. The results show that diamond TLDs can be used in place of the well-established LiF TLDs and that Monte Carlo code systems can be used in dose determinations for radiotherapy treatment planning.
NASA Astrophysics Data System (ADS)
Brusch, Michael; Baier, Daniel
The use and estimation of price response functions are very important for strategic marketing decisions. Typically, empirically based price response functions are used. However, such price response functions are subject to many disturbing influences, e.g., the assumed profit-maximizing price and the assumed corresponding sales quantity. In such cases, the question of how stable the resulting price response function is has not yet been answered sufficiently. In this paper, we pursue the question of how large (and what kind of) market research errors are tolerable for a stable price response function. For the comparisons, a factorial design with synthetically generated and disturbed data is used.
Electromagnetic and neutral-weak response functions of light nuclei
NASA Astrophysics Data System (ADS)
Lovato, Alessandro
2015-10-01
A major goal of nuclear theory is to understand the strong interaction in nuclei as it manifests itself in terms of two- and many-body forces among the nuclear constituents, the protons and neutrons, and the interactions of these constituents with external electroweak probes via one- and many-body currents. Using an imaginary-time projection technique, quantum Monte Carlo methods allow for solving the time-independent Schrödinger equation even for Hamiltonians including highly spin-isospin-dependent two- and three-body forces. I will present a recent Green's function Monte Carlo calculation of the quasielastic electroweak response functions in light nuclei, needed to describe electron and neutrino scattering. We found that meson-exchange two-body currents generate excess transverse strength from threshold through the quasielastic peak to the dip region and beyond. These results challenge the conventional picture of quasielastic inclusive scattering as being largely dominated by single-nucleon knockout processes. These findings are of particular interest for the interpretation of neutrino oscillation signals.
Bergaoui, K; Reguigui, N; Gary, C K; Brown, C; Cremer, J T; Vainionpaa, J H; Piestrup, M A
2014-12-01
An explosive detection system based on a Deuterium-Deuterium (D-D) neutron generator has been simulated using the Monte Carlo N-Particle Transport Code (MCNP5). Nuclear-based explosive detection methods can detect explosives by identifying their elemental components, especially nitrogen. Thermal neutron capture reactions have been used for detecting the prompt gamma emission (10.82 MeV) following radiative neutron capture by (14)N nuclei. The explosive detection system was built around a fully high-voltage-shielded, axial D-D neutron generator with a radio frequency (RF) driven ion source and a nominal yield of about 10^10 fast neutrons per second (E = 2.5 MeV). Polyethylene and paraffin were used as moderators, with borated polyethylene and lead as neutron and gamma-ray shielding, respectively. The shape and thickness of the moderators and shields were optimized to produce the highest thermal neutron flux at the position of the explosive and the minimum total dose at the outer surfaces of the explosive detection system walls. In addition, simulation of the response functions of NaI, BGO, and LaBr3-based γ-ray detectors to different explosives is described.
Developing a cosmic ray muon sampling capability for muon tomography and monitoring applications
NASA Astrophysics Data System (ADS)
Chatzidakis, S.; Chrysikopoulou, S.; Tsoukalas, L. H.
2015-12-01
In this study, a cosmic ray muon sampling capability using a phenomenological model that captures the main characteristics of the experimentally measured spectrum coupled with a set of statistical algorithms is developed. The "muon generator" produces muons with zenith angles in the range 0-90° and energies in the range 1-100 GeV and is suitable for Monte Carlo simulations with emphasis on muon tomographic and monitoring applications. The muon energy distribution is described by the Smith and Duller (1959) [35] phenomenological model. Statistical algorithms are then employed for generating random samples. The inverse transform provides a means to generate samples from the muon angular distribution, whereas the Acceptance-Rejection and Metropolis-Hastings algorithms are employed to provide the energy component. The predictions for muon energies 1-60 GeV and zenith angles 0-90° are validated with a series of actual spectrum measurements and with estimates from the software library CRY. The results confirm the validity of the phenomenological model and the applicability of the statistical algorithms to generate polyenergetic-polydirectional muons. The response of the algorithms and the impact of critical parameters on computation time and computed results were investigated. Final output from the proposed "muon generator" is a look-up table that contains the sampled muon angles and energies and can be easily integrated into Monte Carlo particle simulation codes such as Geant4 and MCNP.
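The two sampling stages described above can be illustrated compactly. In this sketch the zenith angle uses a closed-form inverse CDF for the common cos²θ·sinθ form (a stand-in for the model's actual angular part), and the energy uses acceptance-rejection against a user-supplied density; pdf_energy is a placeholder for the Smith-Duller distribution.

```python
# Inverse-transform sampling for the zenith angle and acceptance-rejection
# for the energy, as in the "muon generator" described above.
import numpy as np

rng = np.random.default_rng(42)

def sample_zenith(n):
    # For p(theta) ~ cos^2(theta)*sin(theta) on [0, pi/2], the normalized
    # CDF is F(theta) = 1 - cos^3(theta), which inverts in closed form.
    u = rng.uniform(0, 1, n)
    return np.arccos((1 - u) ** (1 / 3))

def sample_energy(pdf_energy, n, e_min=1.0, e_max=100.0, pdf_max=1.0):
    # Propose uniformly in [e_min, e_max], accept with probability
    # pdf_energy(e) / pdf_max (pdf_max must bound the density).
    out = []
    while len(out) < n:
        e = rng.uniform(e_min, e_max)
        if rng.uniform(0, pdf_max) < pdf_energy(e):
            out.append(e)
    return np.array(out)

# Usage with a toy power-law density standing in for the Smith-Duller model:
angles = sample_zenith(10_000)
energies = sample_energy(lambda e: e ** -2.7, 10_000, pdf_max=1.0)
```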
Numerical integration of detector response functions via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.
2017-09-01
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
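The speedup quoted above comes from reusing a precomputed response matrix: once assembled from Monte Carlo runs, producing a simulated output spectrum for a trial source spectrum is a single matrix-vector product. A schematic illustration with assumed shapes and a hypothetical file name:

```python
# One matrix-vector product replaces a full new Monte Carlo simulation.
import numpy as np

def simulated_output(R, phi):
    """R: (n_observed, n_source) response matrix assembled from Monte Carlo
    runs; phi: trial source spectrum."""
    return R @ phi

# Hypothetical usage: scan many trial spectra against one precomputed matrix.
# R = np.load("response_matrix.npy")            # placeholder file name
# outputs = [simulated_output(R, phi) for phi in trial_spectra]
```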
Monte Carlo simulation models of breeding-population advancement.
J.N. King; G.R. Johnson
1993-01-01
Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar; Mohammadi, Mohammad
2017-05-01
A combination of Finite-Difference Time-Domain (FDTD) and Monte Carlo (MC) methods is proposed for the simulation and analysis of ZnO microscintillators grown in a polycarbonate membrane. A planar 10 keV X-ray source irradiating the detector is simulated by the MC method, which provides the amount of absorbed X-ray energy in the assembly. The transport of the generated UV scintillation light and its propagation in the detector were studied by the FDTD method. Detector responses for different probable scintillation sites and for X-ray source energies from 10 to 25 keV are reported. Finally, a tapered geometry for the scintillators is proposed, which shows enhanced spatial resolution compared with the cylindrical geometry for imaging applications.
Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.
ERIC Educational Resources Information Center
Oulman, Charles S.; Lee, Motoko Y.
Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…
A Monte Carlo Application to Approximate the Integral from a to b of e Raised to the x Squared.
ERIC Educational Resources Information Center
Easterday, Kenneth; Smith, Tommy
1992-01-01
Proposes an alternative means of approximating the value of complex integrals, the Monte Carlo procedure. Incorporating a discrete approach and probability, an approximation is obtained from the ratio of computer-generated points falling under the curve to the number of points generated in a predetermined rectangle. (MDH)
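A minimal hit-or-miss implementation of the procedure described above, estimating the integral of e^(x²) from a to b as the fraction of uniformly generated points falling under the curve times the area of the bounding rectangle (all parameter choices are illustrative):

```python
# Hit-or-miss Monte Carlo integration of exp(x^2) on [a, b].
import numpy as np

def mc_integral_exp_x2(a, b, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, n)
    y_max = np.exp(max(a * a, b * b))     # curve maximum is at an endpoint
    y = rng.uniform(0, y_max, n)
    hits = y < np.exp(x * x)              # points under the curve
    return hits.mean() * (b - a) * y_max  # hit fraction times rectangle area

print(mc_integral_exp_x2(0.0, 1.0))       # exact value is about 1.4627
```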
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
NASA Astrophysics Data System (ADS)
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, a field-programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
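For reference, the recurrence behind an additive lagged Fibonacci generator is s(n) = (s(n-j) + s(n-k)) mod m. A minimal sequential sketch with the common lags (j, k) = (418, 1279), chosen here only for illustration; the paper's parallel FPGA variant is not reproduced.

```python
# Sequential additive lagged Fibonacci generator (ALFG) with a circular buffer.
class ALFG:
    def __init__(self, seed_state, j=418, k=1279, m=2**32):
        assert len(seed_state) == k, "state must hold the last k values"
        self.state, self.j, self.k, self.m = list(seed_state), j, k, m
        self.i = 0

    def next(self):
        # s[n] = (s[n-j] + s[n-k]) mod m; slot n % k currently holds s[n-k].
        n, j, k = self.i, self.j, self.k
        val = (self.state[(n - j) % k] + self.state[(n - k) % k]) % self.m
        self.state[n % k] = val
        self.i += 1
        return val

# Usage sketch: seed the lag table with arbitrary nonzero values.
import random
gen = ALFG([random.getrandbits(32) for _ in range(1279)])
print(gen.next())
```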
The Cherenkov Telescope Array production system for Monte Carlo simulations and analysis
NASA Astrophysics Data System (ADS)
Arrabito, L.; Bernloehr, K.; Bregeon, J.; Cumani, P.; Hassan, T.; Haupt, A.; Maier, G.; Moralejo, A.; Neyroud, N.; for the CTA Consortium
2017-10-01
The Cherenkov Telescope Array (CTA), an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale, is the next-generation instrument in the field of very high energy gamma-ray astronomy. An average data stream of about 0.9 GB/s for about 1300 hours of observation per year is expected, resulting in 4 PB of raw data per year and a total of 27 PB/year including archive and data processing. The start of CTA operation is foreseen in 2018 and it will last about 30 years. The installation of the first telescopes at the two selected locations (Paranal, Chile and La Palma, Spain) will start in 2017. In order to select the best site candidates to host CTA telescopes (in the Northern and in the Southern hemispheres), massive Monte Carlo simulations have been performed since 2012. Once the two sites were selected, we started new Monte Carlo simulations to determine the optimal array layout with respect to the obtained sensitivity. Taking into account that CTA may finally be composed of 7 different telescope types coming in 3 different sizes, many different combinations of telescope position and multiplicity as a function of telescope type have been proposed. This last Monte Carlo campaign represented a huge computational effort, since several hundred telescope positions have been simulated, while for future instrument response function simulations only the operating telescopes will be considered. In particular, during the last 18 months, about 2 PB of Monte Carlo data have been produced and processed with different analysis chains, with a corresponding overall CPU consumption of about 125 M HS06 hours. In these proceedings, we describe the employed computing model, based on the use of grid resources, as well as the production system setup, which relies on the DIRAC interware. Finally, we present the envisaged evolution of the CTA production system for the off-line data processing during CTA operations and for the instrument response function simulations.
Monte Carlo Modeling of VLWIR HgCdTe Interdigitated Pixel Response
NASA Astrophysics Data System (ADS)
D'Souza, A. I.; Stapelbroek, M. G.; Wijewarnasuriya, P. S.
2010-07-01
Increasing very long-wave infrared (VLWIR, λc ≈ 15 μm) pixel operability was approached by subdividing each pixel into four interdigitated subpixels. High response is maintained across the pixel, even if one or two interdigitated subpixels are deselected (turned off), because interdigitation ensures that the preponderance of minority carriers photogenerated in the pixel are collected by the selected subpixels. Monte Carlo modeling of the photoresponse of the interdigitated subpixel simulates minority-carrier diffusion from carrier creation to recombination. Each carrier, generated at an appropriately weighted random location, is assigned an exponentially distributed random lifetime τi, where ⟨τi⟩ is the bulk minority-carrier lifetime. The minority carrier is allowed to diffuse for a short time dτ, and the fate of the carrier is decided from its present position and the boundary conditions, i.e., whether the carrier is absorbed in a junction, recombined at a surface, reflected from a surface, or recombined in the bulk because it lived for its designated lifetime. If nothing happens, the process is repeated until one of the boundary conditions is attained; the procedure is then repeated for all launched minority carriers. For each minority carrier launched, the original location and the boundary condition at fatality are recorded. As an example of the Monte Carlo results, for a 20 μm diffusion length, the calculated quantum efficiency (QE) changed from 85% with no subpixels deselected to 78% with one subpixel deselected, 67% with two subpixels deselected, and 48% with three subpixels deselected. Demonstration of the interdigitated pixel concept and verification of the Monte Carlo modeling utilized λc(60 K) ≈ 15 μm HgCdTe pixels in a 96 × 96 array format. The measured collection efficiency for one, two, and three subelements selected, divided by the collection efficiency for all four subelements selected, matched that calculated using the Monte Carlo modeling.
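A stripped-down sketch of the carrier-tracking loop just described: each carrier draws an exponential lifetime, random-walks in short time steps, and is scored by the first boundary condition it meets. The geometry, the step model, and the in_junction test are placeholders, not the paper's mass model.

```python
# Minority-carrier random walk with exponentially distributed lifetime.
import numpy as np

rng = np.random.default_rng(7)

def in_junction(x, y):
    # Hypothetical selected-subpixel region: left half of the pixel.
    return x < 30.0

def track_carrier(x0, y0, tau_mean=1.0, dt=0.01, diff_len=20.0, pixel=60.0):
    lifetime = rng.exponential(tau_mean)
    step = diff_len * np.sqrt(2 * dt / tau_mean)   # per-axis diffusion step
    x, y, t = x0, y0, 0.0
    while t < lifetime:
        x += step * rng.standard_normal()
        y += step * rng.standard_normal()
        t += dt
        if 0 <= x <= pixel and 0 <= y <= pixel and in_junction(x, y):
            return "collected"                     # absorbed in a junction
        if not (0 <= x <= pixel and 0 <= y <= pixel):
            return "surface"                       # recombined at a surface
    return "bulk"                                  # lived out its lifetime

# The collected fraction over many launches approximates the QE.
results = [track_carrier(rng.uniform(0, 60), rng.uniform(0, 60))
           for _ in range(10_000)]
print(sum(r == "collected" for r in results) / len(results))
```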
Random number generators tested on quantum Monte Carlo simulations.
Hongo, Kenta; Maezono, Ryo; Miura, Kenichi
2010-08-01
We have tested and compared several (pseudo) random number generators (RNGs) applied to a practical application, ground state energy calculations of molecules using variational and diffusion Monte Carlo methods. A new multiple recursive generator with 8th-order recursion (MRG8) and the Mersenne twister generator (MT19937) are tested and compared with the RANLUX generator at five luxury levels (RANLUX-[0-4]). Both MRG8 and MT19937 are proven to give the same total energy as that evaluated with RANLUX-4 (highest luxury level) within the statistical error bars, with less computational cost to generate the sequence. We also tested the notorious linear congruential generator (LCG) implementation RANDU for comparison.
NASA Astrophysics Data System (ADS)
Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan
2016-04-01
We investigated tumor-targeted nanoparticles and their influence on heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (cosine of the scattering angle), the heat generated in the tumor area reaches a critical state that can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of incident photons upon targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical approach by which photon trajectories within the simulation area can be accurately tracked.
SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enright, S; Asprinio, A; Lu, L
2014-06-01
Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic™ EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse™ radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. The accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%-99% range, where the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.
Yuan, J; Moses, G A; McKenty, P W
2005-10-01
A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight-line approximation is used to follow the propagation of "Monte Carlo particles", which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing-down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy-dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters: the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations: weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
Determination of Rolling-Element Fatigue Life From Computer Generated Bearing Tests
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Two types of rolling-element bearings, representing radially loaded and thrust-loaded bearings, were used for this study. Three hundred forty (340) virtual bearing sets totaling 31,400 bearings were randomly assembled and tested by Monte Carlo (random) number generation. The Monte Carlo results were compared with endurance data from 51 bearing sets comprising 5,321 bearings. A simple algebraic relation was established for the upper and lower L10 life limits as a function of the number of bearings failed, for any bearing geometry. There is a fifty percent (50 percent) probability that the resultant bearing life will be less than that calculated. The maximum and minimum variation between the bearing resultant life and the calculated life correlate with the 90-percent confidence limits for a Weibull slope of 1.5. The calculated lives for bearings using a load-life exponent p of 4 for ball bearings and 5 for roller bearings correlated with the Monte Carlo generated bearing lives and the bearing data. STLE life factors for bearing steel and processing provide a reasonable accounting for differences between bearing life data and calculated life. Variations in Weibull slope from the Monte Carlo testing and the bearing data correlated, and there was excellent agreement between the percentage of individual components failed in the Monte Carlo simulation and that predicted.
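A minimal sketch of the virtual-testing idea described above: sample bearing lives from a two-parameter Weibull distribution (slope 1.5, as in the text) and estimate the L10 life of each randomly assembled set. The set size and the characteristic life are illustrative.

```python
# Monte Carlo assembly of virtual bearing sets with Weibull-distributed lives.
import numpy as np

rng = np.random.default_rng(3)
slope, eta = 1.5, 100.0                   # Weibull slope and characteristic life

def l10_of_set(n_bearings):
    lives = eta * rng.weibull(slope, n_bearings)
    return np.percentile(lives, 10)       # life at 90% survival (L10)

l10s = np.array([l10_of_set(30) for _ in range(340)])
print("scatter in L10 across sets:", l10s.min(), l10s.max())
```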
Estimation of neutron energy distributions from prompt gamma emissions
NASA Astrophysics Data System (ADS)
Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.
2017-11-01
A technique of estimating the incident neutron energy distribution from emitted prompt gamma intensities from a system exposed to neutrons is presented. The emitted prompt gamma intensities, or the measured photo peaks in a gamma detector, are related to the incident neutron energy distribution through a convolution of the response of the system generating the prompt gammas to mono-energetic neutrons. Presently, the system studied is a cylinder of high density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) having an outer Pb cover and exposed to neutrons. The five prompt gamma peaks emitted from hydrogen, boron, carbon and lead can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. Such an under-determined set of equations is solved using the genetic algorithm based Monte Carlo deconvolution code GAMCD. Feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and intensities of emitted prompt gammas from the Pb-covered BHDPE-HDPE system in the case of several incident neutron spectra spanning different energy ranges.
The Monte Carlo simulation of the Borexino detector
NASA Astrophysics Data System (ADS)
Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Bagdasarian, Z.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Canepa, M.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Formozov, A.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Magnozzi, M.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.
2018-01-01
We describe the Monte Carlo (MC) simulation of the Borexino detector and the agreement of its output with data. The Borexino MC "ab initio" simulates the energy loss of particles in all detector components and generates the resulting scintillation photons and their propagation within the liquid scintillator volume. The simulation accounts for absorption, reemission, and scattering of the optical photons and tracks them until they either are absorbed or reach the photocathode of one of the photomultiplier tubes. Photon detection is followed by a comprehensive simulation of the readout electronics response. The MC is tuned using data collected with radioactive calibration sources deployed inside and around the scintillator volume. The simulation reproduces the energy response of the detector, its uniformity within the fiducial scintillator volume relevant to neutrino physics, and the time distribution of detected photons to better than 1% between 100 keV and several MeV. The techniques developed to simulate the Borexino detector and their level of refinement are of possible interest to the neutrino community, especially for current and future large-volume liquid scintillator experiments such as KamLAND-Zen, SNO+, and JUNO.
Mobit, P
2002-01-01
The energy responses of LiF-TLDs irradiated in megavoltage electron and photon beams have been determined experimentally by many investigators over the past 35 years, but the results vary considerably. General cavity theory has been used to model some of the experimental findings, but the predictions of these cavity theories differ from each other and from measurements by more than 13%. Recently, two groups of investigators using Monte Carlo simulations and careful experimental techniques showed that the energy response of 1 mm or 2 mm thick LiF-TLDs irradiated by megavoltage photon and electron beams is not more than 5% less than unity for low-Z phantom materials like water or Perspex. However, when the depth of irradiation is significantly different from dmax and the TLD size is more than 5 mm, the energy response is up to 12% less than unity for incident electron beams. Monte Carlo simulations of some of the experiments reported in the literature showed that some of the contradictory experimental results are reproducible with Monte Carlo simulations. Monte Carlo simulations show that the energy response of LiF-TLDs depends on the size of detector used in electron beams, the depth of irradiation and the incident electron energy. Other differences can be attributed to absolute dose determination and the precision of the TL technique. Monte Carlo simulations have also been used to evaluate some of the published general cavity theories. The results show that some of the parameters used to evaluate Burlin's general cavity theory are wrong by a factor of 3. Despite this, the estimation of the energy response for most clinical situations using Burlin's cavity equation agrees with Monte Carlo simulations within 1%.
NASA Astrophysics Data System (ADS)
Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.
2006-06-01
In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. In this approach, the basis functions are not the detector sensitivity functions, as in the natural pixel case, but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the system matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block row needs to be stored. Data were generated using a fast Monte Carlo simulator based on ray tracing. The proposed method was compared to a list-mode MLEM algorithm, which used ray tracing for forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, a lower noise level was observed at the same resolution in this reconstruction. A numerical observer study showed the proposed method exhibited increased performance as compared to a standard list-mode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
Generating moment matching scenarios using optimization techniques
Mehrotra, Sanjay; Papp, Dávid
2013-05-16
An optimization based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
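A stripped-down version of the scenario-generation idea can be written in a few lines: sample candidate support points at random, then solve a non-negative least-squares problem for weights that reproduce a prescribed set of moments. The target moments and candidate count below are illustrative; the paper's actual method uses a semi-infinite LP with column generation rather than this simple NNLS shortcut.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Pick a small set of weighted scenarios whose moments of order 0..4 match
# those of a standard normal: (1, 0, 1, 0, 3).
target = np.array([1.0, 0.0, 1.0, 0.0, 3.0])
candidates = rng.standard_normal(2000)        # randomly sampled support points

# Row k of A holds x**k for every candidate; we want A @ w = target, w >= 0.
A = np.vander(candidates, 5, increasing=True).T
weights, residual = nnls(A, target)

support = candidates[weights > 1e-12]
print(f"{support.size} scenarios used, moment residual {residual:.2e}")
```

The non-negative solution is sparse, so only a handful of the random candidates end up carrying weight, which mirrors the paper's point that random sampling can efficiently generate the columns.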
Fast orthogonal transforms and generation of Brownian paths
Leobacher, Gunther
2012-01-01
We present a number of fast constructions of discrete Brownian paths that can be used as alternatives to principal component analysis and Brownian bridge for stratified Monte Carlo and quasi-Monte Carlo. By fast we mean that a path of length n can be generated in O(n log(n)) floating point operations. We highlight some of the connections between the different constructions and we provide some numerical examples. PMID:23471545
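For reference, the two classical constructions that these fast orthogonal transforms compete with can be sketched as follows: the O(n) forward (random walk) construction, and the Brownian bridge construction, which reorders the variates so that the first few carry most of the path variance (useful for stratification and quasi-Monte Carlo). This is a generic sketch, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def brownian_forward(n, T=1.0):
    """Standard construction: cumulative sum of i.i.d. increments, O(n)."""
    dt = T / n
    return np.concatenate(([0.0], np.cumsum(rng.standard_normal(n) * np.sqrt(dt))))

def brownian_bridge(n_levels, T=1.0):
    """Brownian bridge: fix the endpoint first, then fill midpoints level by
    level. Path length is 2**n_levels + 1."""
    n = 2 ** n_levels
    t = np.linspace(0.0, T, n + 1)
    w = np.zeros(n + 1)
    w[-1] = np.sqrt(T) * rng.standard_normal()
    step = n
    while step > 1:
        half = step // 2
        for i in range(half, n, step):
            mean = 0.5 * (w[i - half] + w[i + half])
            var = half * (T / n) / 2.0      # conditional variance = gap / 4
            w[i] = mean + np.sqrt(var) * rng.standard_normal()
        step = half
    return t, w

print(brownian_forward(8))
print(brownian_bridge(3))
```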
Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data
NASA Astrophysics Data System (ADS)
Glüsenkamp, Thorsten
2018-06-01
Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
A new response matrix for a 6LiI scintillator BSS system
NASA Astrophysics Data System (ADS)
Lacerda, M. A. S.; Méndez-Villafañe, R.; Lorente, A.; Ibañez, S.; Gallego, E.; Vega-Carrillo, H. R.
2017-10-01
A new response matrix was calculated for a Bonner Sphere Spectrometer (BSS) with a 6LiI(Eu) scintillator, using the Monte Carlo N-Particle radiation transport code MCNPX. Responses were calculated for 6 spheres and the bare detector, for energies varying from 1.059E(-9) MeV to 105.9 MeV, with 20 equal-log(E)-width bins per energy decade, totaling 221 energy groups. A comparison was made between the responses obtained in this work and others published elsewhere for the same detector model. The calculated response functions were inserted in the response input file of the MAXED code and used to unfold the total and direct neutron spectra generated by the 241Am-Be source of the Universidad Politécnica de Madrid (UPM). These spectra were compared with those obtained using the same unfolding code with the Mares and Schraube response matrix.
Monte Carlo simulation of EAS generated by 10^14 - 10^16 eV protons
NASA Technical Reports Server (NTRS)
Fenyves, E. J.; Yunn, B. C.; Stanev, T.
1985-01-01
Detailed Monte Carlo simulations of extensive air showers to be detected by the Homestake Surface Underground Telescope and other similar detectors located at sea level and mountain altitudes have been performed for 10^14 to 10^16 eV primary energies. The results of these Monte Carlo calculations will provide an opportunity to compare the experimental data with different models for the composition and spectra of primaries and for the development of air showers. The results obtained for extensive air showers generated by 10^14 to 10^16 eV primary protons are reported.
Reliability evaluation of microgrid considering incentive-based demand response
NASA Astrophysics Data System (ADS)
Huang, Ting-Cheng; Zhang, Yong-Jun
2017-07-01
Incentive-based demand response (IBDR) can guide customers to adjust their electricity consumption behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. This paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, the IBDR dispatch model considering the customer's comprehensive assessment and the customer response model are developed. Thirdly, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of a microgrid.
GE781: a Monte Carlo package for fixed target experiments
NASA Astrophysics Data System (ADS)
Davidenko, G.; Funk, M. A.; Kim, V.; Kuropatkin, N.; Kurshetsov, V.; Molchanov, V.; Rud, S.; Stutte, L.; Verebryusov, V.; Zukanovich Funchal, R.
The Monte Carlo package for the fixed target experiment B781 at Fermilab, a third generation charmed baryon experiment, is described. This package is based on GEANT 3.21, ADAMO database and DAFT input/output routines.
Itoga, Toshiro; Asano, Yoshihiro; Tanimura, Yoshihiko
2011-07-01
Superheated drop detectors are currently used for personal and environmental dosimetry, and their characteristics, such as response to neutrons and temperature dependence, are well known. A new bubble counter based on superheated drop technology has been developed by Framework Scientific. However, the response of this detector with the lead shell is not well characterized, especially above several tens of MeV. In this study, the response has been measured with quasi-monoenergetic and monoenergetic neutron sources with and without a lead shell. The experimental results were compared with the results of Monte Carlo calculations using the 'Event Generator Mode' in the PHITS code with the JENDL-HE/2007 data library to clarify the response of this detector with a lead shell over the entire energy range.
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Verification of unfold error estimates in the unfold operator code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Biggs, F.
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J S; Tsui, Benjamin M W
2008-07-01
The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to approximately 10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data were also observed. In conclusion, the authors have successfully integrated SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques.
Darafsheh, Arash; Taleei, Reza; Kassaee, Alireza; Finlay, Jarod C
2016-11-01
Proton beam dosimetry using bare plastic optical fibers has emerged as a simple approach. The source of the signal in this method has been attributed to Čerenkov radiation. The aim of this work was a phenomenological study of the nature of the visible light responsible for the signal in bare fiber optic dosimetry of proton therapy beams. Plastic fiber optic probes embedded in solid water phantoms were irradiated with proton beams of energies 100, 180, and 225 MeV produced by a proton therapy cyclotron. Luminescence spectroscopy was performed by a CCD-coupled spectrometer. The spectra were acquired at various depths in phantom to measure the percentage depth dose (PDD) for each beam energy. For comparison, the PDD curves were acquired using a standard multilayer ion chamber device. In order to further analyze the contribution of the Čerenkov radiation in the spectra, Monte Carlo simulation was performed using the FLUKA Monte Carlo code to stochastically simulate radiation transport, ionizing radiation dose deposition, and optical emission of Čerenkov radiation. The measured depth doses using the bare fiber are in agreement with measurements performed by the multilayer ion chamber device, indicating the feasibility of using bare fiber probes for proton beam dosimetry. The spectroscopic study of proton-irradiated fibers showed a continuous spectrum with a shape different from that of Čerenkov radiation. The Monte Carlo simulations confirmed that the amount of the generated Čerenkov light does not follow the radiation absorbed dose in a medium. The source of the optical signal responsible for the proton dose measurement using bare optical fibers is not Čerenkov radiation; it is fluorescence of the plastic material of the fiber.
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
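The decorator idea is plainly illustrated with a toy example: a basic building block that knows how to draw random positions from its own density, wrapped by a decorator that alters those positions without touching the block's sampling logic. The classes below are hypothetical stand-ins, not SKIRT's actual API.

```python
import numpy as np

rng = np.random.default_rng(4)

class Plummer:
    """Basic building block: spherical Plummer model with scale radius a."""
    def __init__(self, a=1.0):
        self.a = a
    def random_position(self, n):
        # Inverse-transform sampling of the Plummer cumulative mass profile.
        u = rng.random(n)
        r = self.a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)
        # Isotropic directions.
        costh = rng.uniform(-1.0, 1.0, n)
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        sinth = np.sqrt(1.0 - costh ** 2)
        return r[:, None] * np.stack([sinth * np.cos(phi),
                                      sinth * np.sin(phi), costh], axis=1)

class OffsetDecorator:
    """Decorator: shift any component without touching its sampling logic."""
    def __init__(self, component, offset):
        self.component, self.offset = component, np.asarray(offset)
    def random_position(self, n):
        return self.component.random_position(n) + self.offset

# Decorators chain freely, so complex models build up from simple blocks.
shifted = OffsetDecorator(Plummer(a=0.5), offset=[2.0, 0.0, 0.0])
print(shifted.random_position(3))
```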
Maslowski, Alexander; Wang, Adam; Sun, Mingshan; Wareing, Todd; Davis, Ian; Star-Lack, Josh
2018-05-01
To describe Acuros ® CTS, a new software tool for rapidly and accurately estimating scatter in x-ray projection images by deterministically solving the linear Boltzmann transport equation (LBTE). The LBTE describes the behavior of particles as they interact with an object across spatial, energy, and directional (propagation) domains. Acuros CTS deterministically solves the LBTE by modeling photon transport associated with an x-ray projection in three main steps: (a) Ray tracing photons from the x-ray source into the object where they experience their first scattering event and form scattering sources. (b) Propagating photons from their first scattering sources across the object in all directions to form second scattering sources, then repeating this process until all high-order scattering sources are computed using the source iteration method. (c) Ray-tracing photons from scattering sources within the object to the detector, accounting for the detector's energy and anti-scatter grid responses. To make this process computationally tractable, a combination of analytical and discrete methods is applied. The three domains are discretized using the Linear Discontinuous Finite Elements, Multigroup, and Discrete Ordinates methods, respectively, which confer the ability to maintain the accuracy of a continuous solution. Furthermore, through the implementation in CUDA, we sought to exploit the parallel computing capabilities of graphics processing units (GPUs) to achieve the speeds required for clinical utilization. Acuros CTS was validated against Geant4 Monte Carlo simulations using two digital phantoms: (a) a water phantom containing lung, air, and bone inserts (WLAB phantom) and (b) a pelvis phantom derived from a clinical CT dataset. For these studies, we modeled the TrueBeam ® (Varian Medical Systems, Palo Alto, CA) kV imaging system with a source energy of 125 kVp. The imager comprised a 600 μm-thick Cesium Iodide (CsI) scintillator and a 10:1 one-dimensional anti-scatter grid. For the WLAB studies, the full-fan geometry without a bowtie filter was used (with and without the anti-scatter grid). For the pelvis phantom studies, a half-fan geometry with bowtie was used (with the anti-scatter grid). Scattered and primary photon fluences and energies deposited in the detector were recorded. The Acuros CTS and Monte Carlo results demonstrated excellent agreement. For the WLAB studies, the average percent difference between the Monte Carlo- and Acuros-generated scattered photon fluences at the face of the detector was -0.7%. After including the detector response, the average percent differences between the Monte Carlo- and Acuros-generated scatter fractions (SF) were -0.1% without the grid and 0.6% with the grid. For the digital pelvis simulation, the Monte Carlo- and Acuros-generated SFs agreed to within 0.1% on average, despite the scatter-to-primary ratios (SPRs) being as high as 5.5. The Acuros CTS computation time for each scatter image was ~1 s using a single GPU. Acuros CTS enables a fast and accurate calculation of scatter images by deterministically solving the LBTE thus offering a computationally attractive alternative to Monte Carlo methods. Part II describes the application of Acuros CTS to scatter correction of CBCT scans on the TrueBeam system. © 2018 American Association of Physicists in Medicine.
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
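The key ingredient of such a parallelization, independent per-worker random number streams (the role played by SPRNG and DCMT in the paper), can be sketched with NumPy's SeedSequence spawning. The transport kernel below is a trivial stand-in for the real MC4 physics.

```python
import numpy as np
from multiprocessing import Pool

def track_batch(seed, n_particles=100_000):
    """Toy transport kernel: each worker gets its own independent RNG stream."""
    rng = np.random.default_rng(seed)
    # Stand-in for a real random-walk / energy-loss loop.
    depths = rng.exponential(scale=1.0, size=n_particles)
    return np.mean(depths < 2.0)   # fraction stopping within 2 mean free paths

if __name__ == "__main__":
    # SeedSequence.spawn guarantees statistically independent child streams,
    # analogous to what SPRNG/DCMT provide in the paper's MPI code.
    seeds = np.random.SeedSequence(12345).spawn(8)
    with Pool(8) as pool:
        results = pool.map(track_batch, seeds)
    print(f"estimate {np.mean(results):.4f}")
```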
Comparisons of neutrino event generators from an oscillation-experiment perspective
NASA Astrophysics Data System (ADS)
Mayer, Nathan
2015-05-01
Monte Carlo generators are crucial to the analysis of high energy physics data, ideally giving a baseline comparison between state-of-the-art theoretical models and experimental data. Presented here is a comparison of three final state distributions from the GENIE, NEUT, NUANCE, and NuWro neutrino Monte Carlo event generators. The final state distributions chosen for comparison are: the electromagnetic energy fraction in neutral current interactions, the energy of the leading π0 vs. the scattering angle for neutral current interactions, and the muon energy vs. scattering angle of νµ charged current interactions.
A Monte Carlo study of the entrance foil material in a target assembly for FDG production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merouani, A.; El Khayati, N.; EL Ghayour, A.
2015-07-01
In this work, a Monte Carlo simulation was performed for different entrance foil materials in the target assembly for [18F]FDG production, to investigate neutron generation in the entrance foil. The objective is to identify materials with mechanical properties similar to those of the Havar® foil but with less generation of secondary particles, without affecting the yield of FDG production.
Predictions of the electro-mechanical response of conductive CNT-polymer composites
NASA Astrophysics Data System (ADS)
Matos, Miguel A. S.; Tagarielli, Vito L.; Baiz-Villafranca, Pedro M.; Pinho, Silvestre T.
2018-05-01
We present finite element simulations to predict the conductivity, elastic response and strain-sensing capability of conductive composites comprising a polymeric matrix and carbon nanotubes. Realistic representative volume elements (RVE) of the microstructure are generated and both constituents are modelled as linear elastic solids, with resistivity independent of strain; the electrical contact between nanotubes is represented by a new element which accounts for quantum tunnelling effects and captures the sensitivity of conductivity to separation. Monte Carlo simulations are conducted and the sensitivity of the predictions to RVE size is explored. Predictions of modulus and conductivity are found in good agreement with published results. The strain-sensing capability of the material is explored for multiaxial strain states.
Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods
NASA Astrophysics Data System (ADS)
Lai, Bo-Lun; Sheu, Rong-Jiun
2017-09-01
Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT purposes was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA 30-MeV protons. MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model resulted in better dose estimation, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
NASA Astrophysics Data System (ADS)
Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.
2015-01-01
DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators are essential for ensuring radiological protection of the personnel involved with the operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Center (BARC), Mumbai. Verification and validation of the shielding adequacy was carried out by measuring the neutron and gamma dose-rates at various locations inside and outside the neutron generator hall during different operational conditions both for 2.5-MeV and 14.1-MeV neutrons and comparing with theoretical simulations. The calculated and experimental dose rates were found to agree with a maximum deviation of 20% at certain locations. This study has served in benchmarking the Monte Carlo simulation methods adopted for shield design of such facilities. This has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields up to 1 × 1010 n/s.
Validation of a Monte Carlo Simulation of Binary Time Series.
1981-09-18
the probability distribution corresponding to the population from which the n sample vectors are generated. Simple unbiased estimators were chosen for ... is generated from the sample of such vectors produced by several independent replications of the Monte Carlo simulation. Then the validity of the ...
Monte Carlos of the new generation: status and progress
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frixione, Stefano
2005-03-22
Standard parton shower Monte Carlos are designed to give reliable descriptions of low-pT physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This motivated the recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond the leading order in QCD. I briefly review the progress made, and discuss bottom production at the Tevatron.
The Use of Monte Carlo Techniques to Teach Probability.
ERIC Educational Resources Information Center
Newell, G. J.; MacFarlane, J. D.
1985-01-01
Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…
Santibáñez, M; Guillen, Y; Chacón, D; Figueroa, R G; Valente, M
2018-04-11
This work reports the experimental development of an integral Gd-infused dosimeter suitable for Gd dose enhancement assessment, along with Monte Carlo simulations applied to determine the dose enhancement by radioactive and X-ray sources of interest in conventional and electronic brachytherapy. In this context, the capability to produce a stable and reliable Gd-infused dosimeter was the first goal, aimed at direct and accurate measurements of the dose enhancement due to the Gd presence. Dose-response was characterized for standard and Gd-infused PAGAT polymer gel dosimeters by means of optical transmission/absorbance. The developed Gd-infused PAGAT dosimeters proved stable, presenting a dose-response similar to standard PAGAT within a linear trend up to 13 Gy, along with good post-irradiation readout stability verified at 24 and 48 h. Additionally, dose enhancement was evaluated for Gd-infused PAGAT dosimeters by means of Monte Carlo (PENELOPE) simulations considering scenarios for isotopic and X-ray generator sources. The obtained results demonstrated the feasibility of obtaining a maximum enhancement of around (14 ± 1)% for the 192Ir source and an average enhancement of (70 ± 13)% for 241Am. However, dose enhancement up to (267 ± 18)% may be achieved if suitable filtering is added to the 241Am source. On the other hand, optimized X-ray spectra may attain dose enhancements up to (253 ± 22)%, which constitutes a promising future alternative for replacing radioactive sources by implementing electronic brachytherapy achieving high dose levels. Copyright © 2018. Published by Elsevier Ltd.
Monte Carlo Approach for Reliability Estimations in Generalizability Studies.
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; ...
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lantz, E.; Tegen, S.
2009-08-01
Job generation has been a part of the national dialogue surrounding energy policy and renewable energy (RE) for many years. RE advocates tout the ability of renewable energy to support new job opportunities in rural America and the manufacturing sector. Others argue that spending on renewable energy is an inefficient allocation of resources and can result in job losses in the broader economy. The report, Study of the Effects on Employment of Public Aid to Renewable Energy Sources, from King Juan Carlos University in Spain, is one recent addition to this debate. This report asserts that, on average, every renewable energy job in Spain 'destroyed' 2.2 jobs in the broader Spanish economy. The authors also apply this ratio to the U.S. context to estimate expected job loss from renewable energy development and policy in the United States. This memo discusses fundamental and technical limitations of the analysis by King Juan Carlos University and notes critical assumptions implicit in the ultimate conclusions of their work. The memo also includes a review of traditional employment impact analyses that rely on accepted, peer-reviewed methodologies, and it highlights specific variables that can significantly influence the results of traditional employment impact analysis.
Free Vibration of Uncertain Unsymmetrically Laminated Beams
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Goyal, Vijay K.
2001-01-01
Monte Carlo Simulation and stochastic FEA are used to predict randomness in the free vibration response of thin unsymmetrically laminated beams. For the present study, it is assumed that randomness in the response is caused only by uncertainties in the ply orientations. The ply orientations may become random or uncertain during the manufacturing process. A new 16-dof beam element, based on the first-order shear deformation beam theory, is used to study the stochastic nature of the natural frequencies. Using variational principles, the element stiffness matrix and mass matrix are obtained through analytical integration. Using a random sequence, a large data set is generated containing possible random ply orientations. This data set is assumed to be symmetric. The stochastic-based finite element model for free vibrations predicts the relation between the randomness in fundamental natural frequencies and the randomness in ply orientation. The sensitivity derivatives are calculated numerically through an exact formulation. The squared fundamental natural frequencies are expressed in terms of deterministic and probabilistic quantities, allowing one to determine how sensitive they are to variations in ply angles. The predicted mean-valued fundamental natural frequency squared and the variance of the present model are in good agreement with Monte Carlo Simulation. Results also show that variations between plus or minus 5 degrees in ply angles can affect the free vibration response of unsymmetrically and symmetrically laminated beams.
Kasesaz, Y; Khalafi, H; Rahmani, F
2013-12-01
Optimization of the Beam Shaping Assembly (BSA) has been performed using the MCNP4C Monte Carlo code to shape the 2.45 MeV neutrons that are produced in the D-D neutron generator. The optimal BSA design, chosen by considering in-air figures of merit (FOM), consists of 70 cm of Fluental as a moderator, 30 cm of Pb as a reflector, 2 mm of 6Li as a thermal neutron filter and 2 mm of Pb as a gamma filter. The neutron beam can be evaluated by in-phantom parameters, from which the therapeutic gain can be derived. Direct evaluation of both sets of FOMs (in-air and in-phantom) is very time consuming. In this paper a Response Matrix (RM) method has been suggested to reduce the computing time. This method is based on considering the neutron spectrum at the beam exit and calculating the contribution of the various dose components in the phantom to build the Response Matrix. Results show good agreement between direct calculation and the RM method. Copyright © 2013 Elsevier Ltd. All rights reserved.
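The Response Matrix shortcut amounts to precomputing, once, the in-phantom dose produced by a unit fluence in each energy bin of the beam-exit spectrum; evaluating a candidate BSA then reduces to a matrix-vector product instead of a new in-phantom Monte Carlo run. A schematic sketch with made-up dimensions and a random stand-in for the precomputed matrix:

```python
import numpy as np

# Stand-in for the Monte Carlo-computed response matrix: dose at each phantom
# depth per unit fluence in each energy bin of the beam-exit spectrum.
n_energy, n_depth = 50, 30
R = np.abs(np.random.default_rng(5).normal(size=(n_depth, n_energy)))

def in_phantom_dose(exit_spectrum):
    """Dose-vs-depth for a given beam-exit spectrum (length n_energy).

    One cheap matrix-vector product replaces a full in-phantom transport run.
    """
    return R @ exit_spectrum

spectrum = np.ones(n_energy) / n_energy      # illustrative flat exit spectrum
print(in_phantom_dose(spectrum)[:5])
```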
NASA Astrophysics Data System (ADS)
Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.
2017-12-01
We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature and EEDF are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.
NASA Technical Reports Server (NTRS)
1976-01-01
The CTRANS program, designed to perform radiative transfer computations in an atmosphere with horizontal inhomogeneities (clouds), is described. Since the atmosphere-ground system was to be richly detailed, the Monte Carlo method was employed. This means that results are obtained through direct modeling of the physical process of radiative transport. The effects of atmospheric or ground albedo pattern detail are essentially built up from their impact upon the transport of individual photons. The CTRANS program actually tracks the photons backwards through the atmosphere, initiating them at a receiver and following them backwards along their path to the Sun. The pattern of incident photons generated through backwards tracking automatically reflects the importance to the receiver of each region of the sky. Further, through backwards tracking, the impact of the finite field of view of the receiver and variations in its response over the field of view can be directly simulated.
Cultural Consensus Theory: Aggregating Continuous Responses in a Finite Interval
NASA Astrophysics Data System (ADS)
Batchelder, William H.; Strashny, Alex; Romney, A. Kimball
Cultural consensus theory (CCT) consists of cognitive models for aggregating responses of "informants" to test items about some domain of their shared cultural knowledge. This paper develops a CCT model for items requiring bounded numerical responses, e.g. probability estimates, confidence judgments, or similarity judgments. The model assumes that each item generates a latent random representation in each informant, with mean equal to the consensus answer and variance depending jointly on the informant and the location of the consensus answer. The manifest responses may reflect biases of the informants. Markov Chain Monte Carlo (MCMC) methods were used to estimate the model, and simulation studies validated the approach. The model was applied to an existing cross-cultural dataset involving native Japanese and English speakers judging the similarity of emotion terms. The results sharpened earlier studies that showed that both cultures appear to have very similar cognitive representations of emotion terms.
NASA Astrophysics Data System (ADS)
Nesti, Alice; Mediero, Luis; Garrote, Luis; Caporali, Enrica
2010-05-01
An automatic probabilistic calibration method for distributed rainfall-runoff models is presented. The high number of parameters in distributed hydrologic models makes special demands on the optimization procedure used to estimate model parameters. With the proposed technique it is possible to reduce the complexity of calibration while maintaining adequate model predictions. The first step, calibration of the main model parameters, is performed manually with the aim of identifying their variation ranges. Afterwards a Monte Carlo technique is applied, which consists of repeated model simulations with randomly generated parameters. The Monte Carlo Analysis Toolbox (MCAT) includes a number of analysis methods to evaluate the results of these Monte Carlo parameter sampling experiments. The study investigates the use of a global sensitivity analysis as a screening tool to reduce the parametric dimensionality of multi-objective hydrological model calibration problems, while maximizing the information extracted from hydrological response data. The method is applied to the calibration of the RIBS flood forecasting model in the Harod river basin, located in Israel. The Harod basin has an area of 180 km2. The catchment has a Mediterranean climate and is mainly characterized by a desert landscape, with a soil that is able to absorb large quantities of rainfall and at the same time is capable of generating high peaks of discharge. Radar rainfall data with 6 minute temporal resolution are available as input to the model. The aim of the study is the validation of the model for real-time flood forecasting, in order to evaluate the benefits of improved precipitation forecasting within the FLASH European project.
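The core of the procedure, uniform Monte Carlo sampling of parameters within manually identified ranges followed by scoring each run against observations, is sketched below with a toy stand-in for the RIBS model; the parameter names, ranges, and synthetic data are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def nash_sutcliffe(sim, obs):
    """Common goodness-of-fit score for hydrographs (1 = perfect fit)."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def run_model(params, forcing):
    """Hypothetical stand-in for RIBS: maps a parameter vector to runoff."""
    k, c = params
    return c * (1.0 - np.exp(-forcing / k))

forcing = rng.uniform(0.0, 50.0, 200)
observed = run_model((12.0, 3.0), forcing) + rng.normal(0.0, 0.1, 200)

# Monte Carlo calibration: sample parameters uniformly within the manually
# identified ranges; keep every run's score for later MCAT-style analysis.
bounds = np.array([[1.0, 30.0], [0.5, 10.0]])      # assumed variation ranges
samples = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5000, 2))
scores = np.array([nash_sutcliffe(run_model(p, forcing), observed)
                   for p in samples])
print("best parameters:", samples[np.argmax(scores)])
```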
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
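One of the questions the TP addresses, how many Monte Carlo runs are needed to verify a requirement, has a standard binomial sizing rule that is easy to sketch (a common textbook rule, not necessarily the TP's exact derivation): find the smallest n such that, if the true success probability only just met the requirement, passing the test with at most k failures would be improbable at the stated confidence.

```python
import math

def runs_required(p_success=0.9973, confidence=0.90, failures_allowed=0):
    """Smallest n such that observing <= k failures in n runs demonstrates
    reliability >= p_success at the stated confidence level."""
    n = failures_allowed + 1
    while True:
        # P(<= k failures | reliability exactly p_success)
        tail = sum(math.comb(n, j) * (1 - p_success) ** j * p_success ** (n - j)
                   for j in range(failures_allowed + 1))
        if tail <= 1 - confidence:
            return n
        n += 1

# ~852 runs with zero failures for a 3-sigma requirement at 90% confidence.
print(runs_required())
```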
Monte Carlo generators for studies of the 3D structure of the nucleon
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self consistent analysis framework, accounting for evolution effects, and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte-Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations will be crucial for future studies of quark-gluon dynamics in general and 3D structure of the nucleon in particular.
Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.
Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C
2004-01-01
Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between both methods of less than 1%.
Radon detection in conical diffusion chambers: Monte Carlo calculations and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rickards, J.; Golzarri, J. I.; Espinosa, G., E-mail: espinosa@fisica.unam.mx
2015-07-23
The operation of radon detection diffusion chambers of truncated conical shape was studied using Monte Carlo calculations. The efficiency was studied for alpha particles generated randomly in the volume of the chamber, and progeny generated randomly on the interior surface, which reach track detectors placed in different positions within the chamber. Incidence angular distributions, incidence energy spectra and path length distributions are calculated. Cases studied include different positions of the detector within the chamber, varying atmospheric pressure, and introducing a cutoff incidence angle and energy.
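A bare-bones version of such a calculation is shown below: generate decay points uniformly inside a truncated cone, emit alphas isotropically, and count those that reach a detector disc on the base within the alpha range. All dimensions are invented for illustration, and the chord-length and energy bookkeeping of the actual study is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative geometry (cm): truncated cone narrowing upward, detector disc
# centered on the base plane (z = 0), alpha range in air ~4.1 cm.
r_top, r_base, height, det_radius, alpha_range = 3.0, 5.0, 7.0, 0.9, 4.1

def sample_in_cone(n):
    """Uniform points inside the truncated cone via rejection from a box."""
    pts = []
    while len(pts) < n:
        x, y = rng.uniform(-r_base, r_base, 2)
        z = rng.uniform(0.0, height)
        r_at_z = r_base + (r_top - r_base) * z / height
        if x * x + y * y <= r_at_z ** 2:
            pts.append((x, y, z))
    return np.array(pts)

pts = sample_in_cone(20000)
costh = rng.uniform(-1.0, 1.0, len(pts))     # isotropic emission
phi = rng.uniform(0.0, 2 * np.pi, len(pts))
down = costh < 0.0                           # only downward tracks can hit z = 0
t = -pts[down, 2] / costh[down]              # path length to the base plane
sinth = np.sqrt(1.0 - costh[down] ** 2)
xh = pts[down, 0] + t * sinth * np.cos(phi[down])
yh = pts[down, 1] + t * sinth * np.sin(phi[down])
hits = (t <= alpha_range) & (xh ** 2 + yh ** 2 <= det_radius ** 2)
print(f"geometric efficiency ~ {hits.sum() / len(pts):.4f}")
```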
Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tippayakul, C.; Ivanov, K.; Misu, S.
2006-07-01
This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k_inf, fission rate distributions and isotopic contents.
Geant4 hadronic physics for space radiation environment.
Ivantchenko, Anton V; Ivanchenko, Vladimir N; Molina, Jose-Manuel Quesada; Incerti, Sebastien L
2012-01-01
To test and to develop Geant4 (Geometry And Tracking version 4) Monte Carlo hadronic models with focus on applications in a space radiation environment. The Monte Carlo simulations have been performed using the Geant4 toolkit. Binary (BIC), its extension for incident light ions (BIC-ion) and Bertini (BERT) cascades were used as main Monte Carlo generators. For comparisons purposes, some other models were tested too. The hadronic testing suite has been used as a primary tool for model development and validation against experimental data. The Geant4 pre-compound (PRECO) and de-excitation (DEE) models were revised and improved. Proton, neutron, pion, and ion nuclear interactions were simulated with the recent version of Geant4 9.4 and were compared with experimental data from thin and thick target experiments. The Geant4 toolkit offers a large set of models allowing effective simulation of interactions of particles with matter. We have tested different Monte Carlo generators with our hadronic testing suite and accordingly we can propose an optimal configuration of Geant4 models for the simulation of the space radiation environment.
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
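The kind of simulation described can be condensed to a few lines: perturb the design factors and the measured responses with their assumed uncertainties, refit the model many times, and read the coefficient scatter off the replicate fits. The factorial design, true coefficients, and noise levels below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical 2-factor DOE replicated 3 times: y = b0 + b1*x1 + b2*x2 + noise
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]] * 3, float)
beta_true = np.array([10.0, 2.0, -1.0])
sigma_x, sigma_y = 0.05, 0.20      # assumed input and response uncertainties

coefs = []
for _ in range(10_000):
    X_err = X.copy()
    X_err[:, 1:] += rng.normal(0.0, sigma_x, X_err[:, 1:].shape)  # input noise
    y = X_err @ beta_true + rng.normal(0.0, sigma_y, len(X_err))  # response noise
    # Refit against the *nominal* design, as an experimenter would.
    coefs.append(np.linalg.lstsq(X, y, rcond=None)[0])

print("coefficient standard deviations:", np.std(coefs, axis=0))
```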
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...
2017-01-04
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
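As a rough illustration of the two-step structure of SLMC (not the authors' implementation, and on a toy 1D Ising chain with a next-nearest-neighbour term rather than the model studied in the paper), the sketch below learns an effective nearest-neighbour coupling from trial data by least squares and then uses cheap effective-model segments as global proposals, corrected by a Metropolis step on the true energy:

    import numpy as np

    rng = np.random.default_rng(1)
    L, beta = 32, 0.5
    J1, J2 = 1.0, 0.2          # "true" toy model: nearest and next-nearest couplings

    def e_true(s):
        return -J1 * np.sum(s * np.roll(s, 1)) - J2 * np.sum(s * np.roll(s, 2))

    def feat(s):               # nearest-neighbour correlation, the effective feature
        return np.sum(s * np.roll(s, 1))

    def metropolis(s, energy, nsweeps):
        for _ in range(nsweeps * L):
            i = rng.integers(L)
            s2 = s.copy(); s2[i] *= -1
            if rng.random() < np.exp(-beta * (energy(s2) - energy(s))):
                s = s2
        return s

    # 1) trial simulation with the true model -> training data for the effective model
    s = rng.choice([-1, 1], L)
    X, y = [], []
    for _ in range(200):
        s = metropolis(s, e_true, 1)
        X.append([feat(s)]); y.append(e_true(s))
    J_eff = -np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0][0]
    e_eff = lambda s: -J_eff * feat(s)

    # 2) production: segments evolved under the cheap effective model, accepted with
    #    the SLMC correction exp(-beta * [(dE_true) - (dE_eff)])
    accept = 0
    for _ in range(200):
        s_new = metropolis(s.copy(), e_eff, 5)
        dE = (e_true(s_new) - e_true(s)) - (e_eff(s_new) - e_eff(s))
        if rng.random() < np.exp(-beta * dE):
            s, accept = s_new, accept + 1
    print("SLMC acceptance rate:", accept / 200, " learned J_eff:", J_eff)

In the paper the effective model is chosen so that it admits fast (e.g., cluster) updates; the Metropolis correction at the end is what keeps the composite chain unbiased.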
Random Numbers and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
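The gain from preferentially sampling the important configurations can be seen in a few lines. This sketch (the integrand and the sampling density are illustrative choices, not taken from the chapter) compares naive uniform sampling with importance sampling from the exponential weight, for an integral whose exact value is 1/2:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000
    # Target: I = integral_0^inf cos(x) * exp(-x) dx = 1/2

    # Naive sampling: uniform points on [0, 20] waste effort where exp(-x) is tiny
    x = rng.uniform(0.0, 20.0, N)
    naive = 20.0 * np.cos(x) * np.exp(-x)

    # Importance sampling: draw from p(x) = exp(-x), which concentrates points where
    # the integrand actually contributes; the remaining weight is f(x)/p(x) = cos(x)
    x = rng.exponential(1.0, N)
    importance = np.cos(x)

    for name, est in [("naive", naive), ("importance", importance)]:
        print(name, est.mean(), "+/-", est.std(ddof=1) / np.sqrt(N))

Both estimators are unbiased, but the importance-sampled one has a far smaller standard error for the same number of samples, which is the point the chapter makes.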
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2017-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
ERIC Educational Resources Information Center
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
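A minimal example of the kind of MCMC estimation the module introduces, reduced for brevity to a single Rasch item with known person abilities (all settings here are illustrative and are not taken from the module): a random-walk Metropolis sampler recovers the posterior of the item difficulty from simulated 0/1 responses.

    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated data: one Rasch item, known person abilities (a deliberate simplification)
    theta = rng.normal(0.0, 1.0, 500)        # person abilities
    b_true = 0.7                             # item difficulty to be recovered
    p = 1.0 / (1.0 + np.exp(-(theta - b_true)))
    y = (rng.random(500) < p).astype(int)    # observed correct/incorrect responses

    def log_post(b):
        eta = theta - b
        # Bernoulli log-likelihood plus a N(0,1) prior on the difficulty
        return np.sum(y * eta - np.log1p(np.exp(eta))) - 0.5 * b * b

    # Random-walk Metropolis sampling of the posterior
    b, chain = 0.0, []
    lp = log_post(b)
    for _ in range(20_000):
        b_new = b + rng.normal(0.0, 0.2)
        lp_new = log_post(b_new)
        if np.log(rng.random()) < lp_new - lp:
            b, lp = b_new, lp_new
        chain.append(b)
    post = np.array(chain[5_000:])           # discard burn-in
    print("posterior mean", post.mean(), "sd", post.std())

A full item response application samples abilities and item parameters jointly and must address the prior specification and convergence diagnostics the module discusses.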
Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.
Stierstorfer, Karl
2018-01-01
The aim is to find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low flux limit. Formulas for the spatial cross-talk, the noise power spectrum and the DQE of a photon-counting detector working at a given threshold are derived. The parameters are probabilities of event types such as single counts in the central pixel, double counts in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model by extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material, and a simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation that randomizes the location of impact and finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. The frequency-dependent DQE of a photon-counting detector in the low flux limit can be described with an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
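The event-type probabilities that feed the DQE formulas can be estimated with a short simulation of the kind the paper describes. The sketch below is a simplified stand-in: the pixel pitch, cloud width, and threshold are arbitrary illustrative values, every photon deposits its full charge at the impact point, and the real model instead fits the cloud width to measured spectra and takes the energy deposition from MOCASSIM.

    import numpy as np
    from math import erf

    rng = np.random.default_rng(5)
    pitch, sigma, thresh = 1.0, 0.25, 0.2   # pitch and cloud width in pixel units;
    N = 50_000                              # threshold as a fraction of total charge

    def frac(lo, hi, x0):
        # fraction of a Gaussian charge cloud centred at x0 that falls in [lo, hi]
        s = sigma * np.sqrt(2.0)
        return 0.5 * (erf((hi - x0) / s) - erf((lo - x0) / s))

    counts = {"central only": 0, "central+neighbour": 0, "neighbour only": 0, "lost": 0}
    for _ in range(N):
        x0, y0 = rng.uniform(-0.5 * pitch, 0.5 * pitch, 2)  # impact in the central pixel
        hits = []
        for ix in (-1, 0, 1):
            for iy in (-1, 0, 1):
                q = frac((ix - 0.5) * pitch, (ix + 0.5) * pitch, x0) \
                    * frac((iy - 0.5) * pitch, (iy + 0.5) * pitch, y0)
                if q > thresh:
                    hits.append((ix, iy))
        if (0, 0) in hits:
            counts["central only" if len(hits) == 1 else "central+neighbour"] += 1
        elif hits:
            counts["neighbour only"] += 1
        else:
            counts["lost"] += 1

    for k, v in counts.items():
        print(k, v / N)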
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsui, S., E-mail: smatsui@gpi.ac.jp; Mori, Y.; Nonaka, T.
2016-05-15
For evaluation of on-site dosimetry and process design in industrial use of ultra-low energy electron beam (ULEB) processes, we evaluate the energy deposition using a thin radiochromic film and a Monte Carlo simulation. The response of the film dosimeter was calibrated using a high energy electron beam with an acceleration voltage of 2 MV and alanine dosimeters with an uncertainty of 11% at coverage factor 2. Using this response function, the results of absorbed dose measurements for ULEB were evaluated from 10 kGy to 100 kGy as a relative dose. The deviation between the responses of deposited energy on the films and the Monte Carlo simulations was within 15%. Within this limitation, relative dose estimation using thin film dosimeters, with a response function obtained by high energy electron irradiation, together with simulation results is effective for the management of ULEB irradiation processes.
Matsui, S; Mori, Y; Nonaka, T; Hattori, T; Kasamatsu, Y; Haraguchi, D; Watanabe, Y; Uchiyama, K; Ishikawa, M
2016-05-01
For evaluation of on-site dosimetry and process design in industrial use of ultra-low energy electron beam (ULEB) processes, we evaluate the energy deposition using a thin radiochromic film and a Monte Carlo simulation. The response of the film dosimeter was calibrated using a high energy electron beam with an acceleration voltage of 2 MV and alanine dosimeters with an uncertainty of 11% at coverage factor 2. Using this response function, the results of absorbed dose measurements for ULEB were evaluated from 10 kGy to 100 kGy as a relative dose. The deviation between the responses of deposited energy on the films and the Monte Carlo simulations was within 15%. Within this limitation, relative dose estimation using thin film dosimeters, with a response function obtained by high energy electron irradiation, together with simulation results is effective for the management of ULEB irradiation processes.
Monte Carlo simulations in X-ray imaging
NASA Astrophysics Data System (ADS)
Giersch, Jürgen; Durst, Jürgen
2008-06-01
Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the field of nuclear medicine to define virtual setups for studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples performed with the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
NASA Astrophysics Data System (ADS)
Klouch, Nawel; Riane, Houaria; Hamdache, Fatima; Addi, Djamel
2013-05-01
We are interested in modeling the interaction between light and biological tissue using the Monte Carlo method, an approach used to solve modeling problems in different physical domains. Through the Monte Carlo approach, we try to interpret the spectral response (absorption, reflectance, transmittance) of normal human tissue for its three dominant tints in the visible range (350-700 nm). We then focus on the spectral response of human tissue with varicosities in order to determine the optimal operating conditions of the semiconductor laser for aesthetic purposes.
A united event grand canonical Monte Carlo study of partially doped polyaniline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byshkin, M. S., E-mail: mbyshkin@unisa.it, E-mail: gmilano@unisa.it; Correa, A.; Buonocore, F.
2013-12-28
A Grand Canonical Monte Carlo scheme, based on united events combining protonation/deprotonation and insertion/deletion of HCl molecules, is proposed for the generation of polyaniline structures at intermediate doping levels between 0% (PANI EB) and 100% (PANI ES). A procedure based on this scheme and subsequent structure relaxations using molecular dynamics is described and validated. Using the proposed scheme and the corresponding procedure, atomistic models of amorphous PANI-HCl structures were generated and studied at different doping levels. Density, structure factors, and solubility parameters were calculated. Their values agree well with available experimental data. The interactions of HCl with PANI have been studied and the distribution of their energies has been analyzed. The procedure has also been extended to the generation of PANI models including adsorbed water, and the effect of inclusion of water molecules on PANI properties has also been modeled and discussed. The protocol described here is general, and the proposed United Event Grand Canonical Monte Carlo scheme can be easily extended to similar polymeric materials used in gas sensing and to other systems involving adsorption and chemical reaction steps.
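For a non-interacting system, the insertion/deletion half of such a scheme reduces to the textbook grand canonical acceptance rules. The sketch below is an ideal-gas toy, not the united-event PANI scheme (which also couples protonation changes to the insertions and relaxes structures with MD); it shows the mechanics and checks the sampled average particle number against the exact result zV:

    import numpy as np

    rng = np.random.default_rng(2)
    z, V = 0.8, 50.0          # activity and volume; ideal gas => <N> should equal z*V
    N, samples = 0, []

    for step in range(200_000):
        if rng.random() < 0.5:                      # attempt an insertion
            # ideal gas: acc = min(1, z*V/(N+1)), since the energy change is zero
            if rng.random() < z * V / (N + 1):
                N += 1
        elif N > 0:                                 # attempt a deletion
            if rng.random() < N / (z * V):
                N -= 1
        if step > 50_000:                           # discard equilibration
            samples.append(N)

    print("<N> =", np.mean(samples), " expected:", z * V)

With interactions, each acceptance probability acquires a Boltzmann factor exp(-beta*dE), and in the united-event version the trial move changes the protonation state and the HCl count together.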
Theory and Performance of AIMS for Active Interrogation
NASA Astrophysics Data System (ADS)
Walters, William J.; Royston, Katherine E. K.; Haghighat, Alireza
2014-06-01
A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) determination of the neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of the gamma source distribution from (n, γ) interactions; iii) determination of the gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water. In the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. Finally, in the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma flux at a detector window. A code, AIMS (Active Interrogation for Monitoring Special-Nuclear-materials), has been written that outputs the gamma current for a source-detector assembly scanning across the cargo using the pre-calculated values; it takes significantly less time than a reference MCNP5 calculation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2011-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
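The core CADIS relations are compact enough to state in code. The sketch below uses an illustrative 1-D mesh with a made-up adjoint shape (not output from Denovo) to compute the consistent source biasing and weight-window centres from a forward source q and an adjoint flux:

    import numpy as np

    # Illustrative 1-D mesh: a source region on the left, a detector on the right
    x = np.linspace(0.0, 10.0, 101)
    q = np.where(x < 2.0, 1.0, 0.0)          # forward source distribution
    adjoint = np.exp(-0.5 * (10.0 - x))      # stand-in adjoint flux (importance map)

    dx = x[1] - x[0]
    R = np.sum(q * adjoint) * dx             # predicted detector response
    q_biased = q * adjoint / R               # CADIS source biasing: sample where it matters
    weights = R / adjoint                    # consistent weight-window centres

    print("response estimate R =", R)
    print("statistical weight spans", weights.min(), "to", weights.max())

The "consistent" in CADIS is visible here: particles born from q_biased start with exactly the statistical weight their weight window expects, so no immediate splitting or roulette is triggered at birth.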
NASA Astrophysics Data System (ADS)
Einstein, Gnanatheepam; Udayakumar, Kanniyappan; Aruna, Prakasarao; Ganesan, Singaravelu
2017-03-01
Fluorescence of protein has been widely used in diagnostic oncology for characterizing cellular metabolism. However, the intensity of fluorescence emission is affected by the absorbers and scatterers in tissue, which may lead to error in estimating the exact protein content in tissue. Extraction of intrinsic fluorescence from measured fluorescence has been achieved by different methods. Among them, Monte Carlo based methods yield the highest accuracy for extracting intrinsic fluorescence. In this work, we have attempted to generate a lookup table for Monte Carlo simulation of fluorescence emission by protein. Furthermore, we fitted the generated lookup table using an empirical relation. The empirical relation between measured and intrinsic fluorescence is validated using tissue phantom experiments. The proposed relation can be used for estimating the intrinsic fluorescence of protein for real-time diagnostic applications, thereby improving the clinical interpretation of fluorescence spectroscopic data.
MUSiC - A general search for deviations from monte carlo predictions in CMS
NASA Astrophysics Data System (ADS)
Biallass, Philipp A.; CMS Collaboration
2009-06-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS
NASA Astrophysics Data System (ADS)
Hof, Carsten
2009-05-01
We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature K_c = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
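For readers unfamiliar with the Wolff algorithm used above, a minimal 2-D version is sketched below (a 16x16 lattice at the known 2-D critical coupling; the production study used far larger 3-D lattices, carefully chosen random number generators, and histogram reweighting):

    import numpy as np

    rng = np.random.default_rng(11)
    L, beta = 16, 0.4406868        # near the exact critical coupling of the 2-D model
    spins = rng.choice([-1, 1], size=(L, L))
    p_add = 1.0 - np.exp(-2.0 * beta)

    def wolff_step(s):
        seed = tuple(rng.integers(L, size=2))
        cluster, stack, s0 = {seed}, [seed], s[seed]
        while stack:
            i, j = stack.pop()
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                ni, nj = ni % L, nj % L          # periodic boundaries
                if (ni, nj) not in cluster and s[ni, nj] == s0 \
                        and rng.random() < p_add:
                    cluster.add((ni, nj))
                    stack.append((ni, nj))
        for i, j in cluster:                     # flip the whole cluster; rejection-free
            s[i, j] = -s0
        return len(cluster)

    mags = []
    for sweep in range(2000):
        wolff_step(spins)
        if sweep > 500:
            mags.append(abs(spins.mean()))
    print("<|m|> near K_c:", np.mean(mags))

Because whole correlated clusters are flipped at once, critical slowing down is drastically reduced compared with single-spin Metropolis updates, which is what makes lattices as large as 1024^3 tractable.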
A modified Monte Carlo model for the ionospheric heating rates
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1972-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances apart corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure, their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs, the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
Rapid Monte Carlo Simulation of Gravitational Wave Galaxies
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2015-01-01
With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and fewer computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.
NASA Astrophysics Data System (ADS)
Alves Júnior, A. A.; Sokoloff, M. D.
2017-10-01
MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase space Monte Carlo events. To achieve superior performance, MCBooster is capable of performing most of its calculations in parallel using CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.
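The basic operation underlying any phase-space generator is easy to show in miniature. The sketch below uses plain numpy rather than MCBooster's CUDA/OpenMP back ends, and the function name is invented; it generates isotropic two-body decays in the parent rest frame, with the daughter momentum fixed by the Kallen (triangle) function:

    import numpy as np

    rng = np.random.default_rng(4)

    def two_body_decay(M, m1, m2, n):
        """Generate n isotropic two-body decays of a parent of mass M at rest.
        Returns four-momenta (E, px, py, pz) of daughter 1; daughter 2 is back-to-back."""
        # Daughter momentum magnitude from the Kallen function lambda(M^2, m1^2, m2^2)
        lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
        p = np.sqrt(lam) / (2.0 * M)
        cos_t = rng.uniform(-1.0, 1.0, n)           # isotropic direction
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        sin_t = np.sqrt(1.0 - cos_t**2)
        px, py, pz = p * sin_t * np.cos(phi), p * sin_t * np.sin(phi), p * cos_t
        E1 = np.full(n, np.sqrt(p * p + m1 * m1))
        return np.stack([E1, px, py, pz], axis=1)

    # e.g. D0 -> K- pi+ in the D0 rest frame (masses in GeV)
    k = two_body_decay(1.86484, 0.493677, 0.139570, 100_000)
    print("mean kaon momentum:", np.linalg.norm(k[:, 1:], axis=1).mean())

Multi-body final states are built by recursive two-body splittings with appropriately weighted intermediate masses; MCBooster's contribution is doing this, and the subsequent amplitude calculations, in parallel on the device.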
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; ...
2016-09-29
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
NASA Astrophysics Data System (ADS)
Leetmaa, Mikael; Skorodumova, Natalia V.
2015-11-01
Here we present a revised version, v1.1, of the KMCLib general framework for kinetic Monte-Carlo (KMC) simulations. The generation of random numbers in KMCLib now relies on the C++11 standard library implementation, and support has been added for the user to choose from a set of C++11-implemented random number generators: the Mersenne twister, the 24- and 48-bit RANLUX and a 'minimal standard' PRNG are supported. We have also included the possibility to use true random numbers via the C++11 std::random_device generator. This release also includes technical updates to support the use of an extended range of operating systems and compilers.
NASA Astrophysics Data System (ADS)
Lalush, D. S.; Tsui, B. M. W.
1998-06-01
We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
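The OS-EM update evaluated in the study has a compact form: each sub-iteration applies the multiplicative EM correction using only one subset of projections. A toy version on a random system matrix (no attenuation, scatter, or 3D detector response modeling, unlike the study) might look like:

    import numpy as np

    rng = np.random.default_rng(9)

    # Toy system: 60 projection bins viewing a 40-pixel object (random but fixed A)
    A = rng.random((60, 40))
    x_true = rng.random(40)
    y = rng.poisson(A @ x_true * 50.0) / 50.0      # noisy projection data

    subsets = [np.arange(i, 60, 4) for i in range(4)]  # 4 ordered subsets of rows
    x = np.ones(40)                                    # uniform initial estimate
    for it in range(10):
        for idx in subsets:                            # one OS-EM sub-iteration per subset
            As = A[idx]
            ratio = y[idx] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(idx)), 1e-12)

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

RBI differs in how each block's correction is rescaled before being applied, which is the source of the noise-versus-speed tradeoff the paper quantifies.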
Monte Carlo simulation of MOSFET dosimeter for electron backscatter using the GEANT4 code.
Chow, James C L; Leung, Michael K K
2008-06-01
The aim of this study is to investigate the influence of the body of the metal-oxide-semiconductor field effect transistor (MOSFET) dosimeter in measuring the electron backscatter from lead. The electron backscatter factor (EBF), which is defined as the ratio of dose at the tissue-lead interface to the dose at the same point without the presence of backscatter, was calculated by Monte Carlo simulation using the GEANT4 code. Electron beams with energies of 4, 6, 9, and 12 MeV were used in the simulation. It was found that in the presence of the MOSFET body, the EBFs were underestimated by about 2%-0.9% for electron beam energies of 4-12 MeV, respectively. The trend of decreasing EBF with increasing electron energy can be explained by the fact that the small MOSFET dosimeter, made mainly of epoxy and silicon, attenuates not only the electron fluence of the beam from upstream, but also the electron backscatter generated by the lead underneath the dosimeter. However, this variation of the EBF underestimation is of the same order as the statistical uncertainties of the Monte Carlo simulations, which ranged from 1.3% to 0.8% for electron energies of 4-12 MeV, due to the small dosimetric volume. Such a small EBF deviation is therefore insignificant when the uncertainty of the Monte Carlo simulation is taken into account. Corresponding measurements were carried out, and deviations from the Monte Carlo results were within +/- 2%. Spectra of energy deposited by the backscattered electrons in dosimetric volumes with and without the lead and MOSFET were determined by Monte Carlo simulations. It was found that in both cases, whether the MOSFET body is present or absent in the simulation, deviations of the electron energy spectra with and without the lead decrease with an increase of the electron beam energy. Moreover, the softer spectrum of the backscattered electrons when lead is present can result in a reduction of the MOSFET response due to stronger recombination in the SiO2 gate. It is concluded that the MOSFET dosimeter performed well for measuring the electron backscatter from lead using electron beams. The uncertainty of the EBF determined by comparing the results of Monte Carlo simulations and measurements is well within the accuracy of the MOSFET dosimeter (< +/- 4.2%) provided by the manufacturer.
Calculating the Responses of Self-Powered Radiation Detectors.
NASA Astrophysics Data System (ADS)
Thornton, D. A.
Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self -Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. A simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It is a development of the work of several previous authors and has been incorporated into a computer program (called GENSPD), the predictions of which have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where there is poor agreement explanations have been sought and presented. Two major limitations of analytic models have been identified; neglect of current generation in insulators and over-simplified electron transport treatments. Both of these are developed in the current work. A second model based on the Explicit Representation of Radiation Sources and Transport (ERRST) is presented and evaluated for several SPDs in a PWR at beginning of life. The model incorporates simulation of the production and subsequent transport of neutrons, gamma rays and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion and point-kernel methods. The electron flux associated with the reactor gamma field as well as the internal charge deposition effects of the transport of photons and electrons have been calculated with coupled Monte Carlo calculations of photon and electron transport. The predicted response of a SPD is evaluated as the sum of contributions from individual response mechanisms.
NASA Astrophysics Data System (ADS)
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted to address parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model's capability for simulating and predicting water resources.
Saxton, Michael J
2007-01-01
Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
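The simplest obstructed-walk simulation of the kind reviewed here takes only a few lines: place point obstacles on a lattice, reject moves into them, and track the mean-square displacement. A minimal sketch with illustrative parameter values (the review covers far more efficient event-driven variants):

    import numpy as np

    rng = np.random.default_rng(8)
    L, c, nwalk, nstep = 256, 0.3, 500, 1000   # lattice size, obstacle fraction, walkers
    obstacles = rng.random((L, L)) < c

    # Start each walker on a free site
    pos = []
    while len(pos) < nwalk:
        p = rng.integers(L, size=2)
        if not obstacles[p[0], p[1]]:
            pos.append(p)
    pos = np.array(pos)
    start = pos.copy()

    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for t in range(nstep):
        trial = pos + moves[rng.integers(4, size=nwalk)]
        wrapped = trial % L                        # periodic lookup into the obstacle map
        free = ~obstacles[wrapped[:, 0], wrapped[:, 1]]
        pos[free] = trial[free]                    # blocked moves are simply rejected

    msd = np.mean(np.sum((pos - start) ** 2, axis=1))
    print("MSD after", nstep, "steps:", msd, "(vs", nstep, "for free diffusion)")

Positions are kept unwrapped so the displacement is meaningful; the obstacle map is consulted modulo the lattice size. Near the percolation threshold of obstacles, the MSD grows anomalously (slower than linearly in time), which is the signature behavior the review discusses.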
MUSiC—An Automated Scan for Deviations between Data and Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Meyer, Arnd
2010-02-01
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
Dong, Han; Sharma, Diksha; Badano, Aldo
2014-12-01
Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, to facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of the point response, the pulse-height spectrum, and optical transport statistics generated by hybridmantis. The users can download the output images and statistics in a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
NASA Astrophysics Data System (ADS)
Honda, Norihiro; Hazama, Hisanao; Awazu, Kunio
2017-02-01
Interstitial photodynamic therapy (iPDT) with 5-aminolevulinic acid (5-ALA) is a safe and feasible treatment modality for malignant glioblastoma. In order to cover the tumour volume, the exact positions of the light diffusers within the lesion need to be decided precisely. The aim of this study is the development of an evaluation method for the treatment volume using 3D Monte Carlo simulation for iPDT with 5-ALA. Monte Carlo simulations of the fluence rate were performed using the optical properties of brain tissue infiltrated by tumor cells and of normal tissue. The 3D Monte Carlo simulation was used to calculate the positions of the light diffusers within the lesion and the light transport. The fluence rate was maximal near the diffuser and decreased exponentially with distance. The simulation can calculate the amount of singlet oxygen generated by PDT. In order to increase the accuracy of the simulation results, the simulation parameters include the quantum yield of singlet oxygen generation, the accumulated concentration of photosensitizer within tissue, the fluence rate, and the molar extinction coefficient at the wavelength of the excitation light. The simulation is useful for evaluating the treatment region of iPDT with 5-ALA.
Wang, Deli; Xu, Wei; Zhao, Xiangrong
2016-03-01
This paper deals with the stationary responses of a Rayleigh viscoelastic system with zero-barrier impacts under external random excitation. First, the original stochastic viscoelastic system is converted to an equivalent stochastic system without viscoelastic terms by approximately adding equivalent stiffness and damping. By means of a non-smooth transformation of the state variables, this system is then replaced by a new system without an impact term. The stationary probability density functions of the system are then obtained analytically through the stochastic averaging method. By considering the effects of the biquadratic nonlinear damping coefficient and the noise intensity on the system responses, the effectiveness of the theoretical method is tested by comparing the analytical results with those generated from Monte Carlo simulations. Additionally, it deserves attention that some system parameters can induce the occurrence of stochastic P-bifurcation.
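The validation step, comparing an analytical stationary density with Monte Carlo results, can be illustrated on a linear oscillator, for which the stationary variance is known in closed form. The paper's actual system adds viscoelastic terms, biquadratic damping, and impacts handled by the non-smooth transform, so the following is only a sketch of the methodology, with illustrative parameter values:

    import numpy as np

    rng = np.random.default_rng(6)
    zeta, om, D = 0.2, 1.0, 0.1        # damping ratio, natural frequency, noise intensity
    dt, nstep = 2e-3, 500_000

    noise = rng.standard_normal(nstep) * np.sqrt(2.0 * D * dt)
    x, v = 0.0, 0.0
    xs = np.empty(nstep)
    for i in range(nstep):
        # Euler-Maruyama for  x'' + 2*zeta*om*x' + om**2 * x = sqrt(2D) * xi(t)
        x, v = x + v * dt, v + (-2.0 * zeta * om * v - om * om * x) * dt + noise[i]
        xs[i] = x

    print("Monte Carlo stationary var(x):", xs[100_000:].var())   # burn-in discarded
    print("analytic D/(2*zeta*om**3)   :", D / (2.0 * zeta * om ** 3))

Agreement between the simulated histogram (or its variance, as here) and the analytical stationary density is exactly the kind of check used in the paper, where the comparison also reveals the parameter-induced P-bifurcation.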
Systematic uncertainties in long-baseline neutrino-oscillation experiments
NASA Astrophysics Data System (ADS)
Ankowski, Artur M.; Mariani, Camillo
2017-05-01
Future neutrino-oscillation experiments are expected to bring definite answers to the questions of the neutrino-mass hierarchy and violation of charge-parity symmetry in the lepton sector. To realize this ambitious program, it is necessary to ensure a significant reduction of uncertainties, particularly those related to neutrino-energy reconstruction. In this paper, we discuss different sources of systematic uncertainties, paying special attention to those arising from nuclear effects and detector response. By analyzing nuclear effects we show the importance of developing accurate theoretical models, capable of providing a quantitative description of neutrino cross sections, together with the relevance of their implementation in Monte Carlo generators and extensive testing against lepton-scattering data. We also point out the fundamental role of efforts aiming to determine detector responses in test-beam exposures.
HepSim: A repository with predictions for high-energy physics experiments
Chekanov, S. V.
2015-02-03
A file repository for calculations of cross sections and kinematic distributions using Monte Carlo generators for high-energy collisions is discussed. The repository is used to facilitate effective preservation and archiving of data from theoretical calculations and for comparisons with experimental data. The HepSim data library is publicly accessible and includes a number of Monte Carlo event samples with Standard Model predictions for current and future experiments. The HepSim project includes a software package to automate the process of downloading and viewing online Monte Carlo event samples. Data streaming over a network for end-user analysis is discussed.
NASA Astrophysics Data System (ADS)
Tylko, Grzegorz; Dubchak, Sergyi; Banach, Zuzanna; Turnau, Katarzyna
2010-04-01
Monte Carlo simulations of gelatin matrices with known elemental concentrations confirmed the suitability of protein standards to quantify elements of cellulose material in x-ray microanalysis. However, gelatin standards and cellulose plant cell walls differ in structure, which influences x-ray generation and emission in both specimens. The goal of the project was to establish the influence of gelatin structure on x-ray generation and its usefulness for calculating elemental concentrations in plant cell walls of different width. Roots of Medicago truncatula as well as gelatin standards with known elemental composition were prepared according to freeze-drying protocols. The Thermanox polymer was chosen to establish background formation for flat and compact organic materials. All analyses were performed with the scanning electron microscope operated at 10 keV and a probe current of 350 pA. The Monte Carlo code Casino was applied to calculate the intensities of the generated and the emitted x-rays from biological matrices of different width. No topography effects of the gelatin structure were visible when the raster mode of electron impact was applied to the specimen. Monte Carlo simulations of gelatin of different width revealed that a significant decrease of the generated x-ray intensity appears at a specimen width of around 3.5 μm. However, an increase of the emitted low-energy x-ray intensities (Na, Mg) was noted at the 3.5 μm size, with constant emission of higher-energy x-rays (Cl, K) down to 2.5 μm width. This determines the minimal size of plant specimen useful for comparison to a bulk gelatin standard when quantitative analysis is performed for biologically important elements.
Problems with the random number generator RANF implemented on the CDC cyber 205
NASA Astrophysics Data System (ADS)
Kalle, Claus; Wansleben, Stephan
1984-10-01
We show that using RANF may lead to wrong results when lattice models are simulated by Monte Carlo methods. We present a shift-register sequence random number generator which generates two random numbers per cycle on a two-pipe CDC Cyber 205.
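Shift-register generators of the general family discussed here update their state by combining shifted copies of it with exclusive-or operations. The sketch below shows a later, widely known member of this family (Marsaglia's 32-bit xorshift, not the Cyber 205 implementation from the paper), purely to illustrate the mechanism:

    def xorshift32(seed):
        """Minimal shift-register style generator (illustrative xorshift variant,
        not the vectorized generator described in the paper)."""
        state = seed & 0xFFFFFFFF
        while True:
            state ^= (state << 13) & 0xFFFFFFFF
            state ^= state >> 17
            state ^= (state << 5) & 0xFFFFFFFF
            yield state / 2**32          # uniform float in [0, 1)

    rng = xorshift32(2463534242)
    print([round(next(rng), 6) for _ in range(5)])

The paper's warning stands independently of the generator family: correlations in a multiplicative congruential generator like RANF can visibly bias lattice Monte Carlo results, so generators must be validated against the application, not just generic statistical tests.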
Constant-pH Hybrid Nonequilibrium Molecular Dynamics–Monte Carlo Simulation Method
2016-01-01
A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD-MC scheme is separated into two steps, reducing the need to generate a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent-pKa neMD-MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because the computational cost increases only linearly with the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
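The first, inexpensive step of the two-step scheme follows directly from the Henderson-Hasselbalch relation: a protonation state is drawn from the distribution implied by the intrinsic pKa and the imposed pH, and a costly neMD switch is attempted only when the drawn state differs from the current one. An illustrative fragment (the intrinsic pKa value and the function name are placeholders, not taken from the paper):

    import random

    def sample_protonation(pka_intrinsic, ph, n=1):
        """Draw protonation states from the intrinsic-pKa distribution.
        In the full scheme, a change relative to the current state triggers
        an neMD switching trajectory and a final Metropolis MC decision."""
        frac_prot = 1.0 / (1.0 + 10.0 ** (ph - pka_intrinsic))  # Henderson-Hasselbalch
        return [random.random() < frac_prot for _ in range(n)]

    # Histidine-like site (intrinsic pKa ~ 6.5, illustrative) at physiological pH
    states = sample_protonation(6.5, 7.4, n=10_000)
    print("fraction protonated:", sum(states) / len(states))   # ~0.11 expected

Because most draws simply return the current state, the expensive nonequilibrium switching trajectories are generated only for the minority of productive attempts, which is the source of the efficiency gain.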
Das, R K; Li, Z; Perera, H; Williamson, J F
1996-06-01
Practical dosimeters in brachytherapy, such as thermoluminescent dosimeters (TLDs) and diodes, are usually calibrated against low-energy megavoltage beams. To measure the absolute dose rate near a brachytherapy source, it is necessary to establish the energy response of the detector relative to that at the calibration energy. The purpose of this paper is to assess the accuracy of Monte Carlo photon transport (MCPT) simulation in modelling the absolute detector response as a function of detector geometry and photon energy. We exposed two different sizes of TLD-100 (LiF chips) and p-type silicon diode detectors to calibrated 60Co, HDR (192Ir) and superficial x-ray beams. For the Scanditronix electron-field diode, the relative detector response, defined as the measured detector reading per measured unit of air kerma, varied from 38.46 V cGy-1 (40 kVp beam) to 6.22 V cGy-1 (60Co beam). Similarly, for the large and small chips the same quantity varied from 2.08-3.02 nC cGy-1 and 0.171-0.244 nC cGy-1, respectively. Monte Carlo simulation was used to calculate the absorbed dose to the active volume of the detector per unit air kerma. If the Monte Carlo simulation is accurate, then the absolute detector response, defined as the measured detector reading per unit dose absorbed by the active detector volume as calculated by Monte Carlo simulation, should be a constant. For the diode, the absolute response is 5.86 +/- 0.15 V cGy-1. For TLDs of size 3 x 3 x 1 mm^3 the absolute response is 2.47 +/- 0.07 nC cGy-1, and for TLDs of 1 x 1 x 1 mm^3 it is 0.201 +/- 0.008 nC cGy-1. From the above results we conclude that the absolute response function of the detectors (TLDs and diodes) is directly proportional to the dose absorbed by the active volume of the detector and is independent of beam quality.
NASA Astrophysics Data System (ADS)
Henderson, Alexander Hastings
Lasers have grown more powerful in recent years, opening up new frontiers in physics. From early intensities of less than 10^10 W/cm^2, lasers can now achieve intensities over 10^21 W/cm^2. Ultraintense lasers have become powerful new tools to produce relativistic electrons, positron-electron pairs, and gamma-rays. The pair production efficiency is equal to or greater than that of linear accelerators, the most common method of antimatter generation in the past. The gamma-rays and electrons produced can be highly collimated, making these interactions of interest for beam generation. Monte-Carlo particle transport simulation has long been used in physics for simulating various particle and radiation processes, and is well suited to simulating both electromagnetic cascades resulting from laser-solid interactions and the response of electron/positron spectrometers and gamma-ray detectors. We have used GEANT4 Monte-Carlo particle transport simulation to design and calibrate charged-particle spectrometers using permanent magnets as well as a Forward Compton Electron Spectrometer to measure gamma-rays of higher energies than have previously been achieved. We have had some success simulating and measuring high positron and gamma-ray yields from laser-solid interactions using gold targets at the Texas Petawatt Laser (TPW). While similar spectrometers have been developed in the past, we are to our knowledge the first to successfully use permanent magnet spectrometers to detect positrons originating from laser-solid interactions in this energy range. We believe we are also the first to successfully detect multi-MeV gamma rays using a permanent magnet Forward Compton Electron Spectrometer. Monte-Carlo particle transport simulation has been used by other groups to model positron production from laser-solid interaction, but at the time that we began we were, as far as we know, the first to have a significant amount of empirical data to work with. We were thus at liberty to estimate the initial conditions, compare simulation results to data, and adjust as needed to obtain a better estimate of the actual initial conditions. We have also developed a new method for measuring the yield and angular distribution of gamma-rays using a two-dimensional dosimeter array. In this work, we examine the experimental and simulation results as well as the physical processes behind them. In addition, the gamma-rays produced by our experiments could be useful for photo-nuclear reactors and homeland security purposes. In our experiments, we measured narrow energy-band positrons and electrons which have potential medical uses.
Lee, Kyubin; Kolb, Aaron W.; Sverchkov, Yuriy; Cuellar, Jacqueline A.; Craven, Mark
2015-01-01
Herpes simplex virus 1 (HSV-1) causes recurrent mucocutaneous ulcers and is the leading cause of infectious blindness and sporadic encephalitis in the United States. HSV-1 has been shown to be highly recombinogenic; however, to date, there has been no genome-wide analysis of recombination. To address this, we generated 40 HSV-1 recombinants derived from two parental strains, OD4 and CJ994. The 40 OD4-CJ994 HSV-1 recombinants were sequenced using the Illumina sequencing system, and recombination breakpoints were determined for each of the recombinants using the Bootscan program. Breakpoints occurring in the terminal inverted repeats were excluded from analysis to prevent double counting, resulting in a total of 272 breakpoints in the data set. By placing windows around the 272 breakpoints followed by Monte Carlo analysis comparing actual data to simulated data, we identified a recombination bias toward both high GC content and intergenic regions. A Monte Carlo analysis also suggested that recombination did not appear to be responsible for the generation of the spontaneous nucleotide mutations detected following sequencing. Additionally, kernel density estimation analysis across the genome found that the large inverted repeats comprise a recombination hot spot. IMPORTANCE Herpes simplex virus 1 (HSV-1) is the leading cause of sporadic encephalitis and blinding keratitis in developed countries. HSV-1 has been shown to be highly recombinogenic, and recombination itself appears to be a significant component of genome replication. To date, there has been no genome-wide analysis of recombination. Here we present the findings of the first genome-wide study of recombination, performed by generating and sequencing 40 HSV-1 recombinants derived from the OD4 and CJ994 parental strains, followed by bioinformatics analysis. Recombination breakpoints were determined, yielding 272 breakpoints in the full data set. Kernel density analysis determined that the large inverted repeats constitute a recombination hot spot. Additionally, Monte Carlo analyses found biases toward high GC content and intergenic and repetitive regions. PMID:25926637
Self-Learning Monte Carlo Method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to a phase transition or with strong frustration, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.
Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Roberti, Laura
1998-01-01
The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate have in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods, have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treating polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to those of the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.
Virulo is a probabilistic model for predicting virus attenuation. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve a chosen degree o...
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; An, Hyunuk; Kim, Sanghyun
2015-04-01
Soil moisture, a critical factor in hydrologic systems, plays a key role in synthesizing interactions among soil, climate, hydrological response, solute transport and ecosystem dynamics. The spatial and temporal distribution of soil moisture at the hillslope scale is essential for understanding hillslope runoff generation processes. In this study, we implement Monte Carlo simulations at the hillslope scale using a three-dimensional surface-subsurface integrated model (3D model). Numerical simulations are compared with soil moisture measurements made with TDR (Mini_TRASE) at 22 locations and 2 or 3 depths over a whole year at a hillslope (area: 2100 square meters) located in the Bongsunsa Watershed, South Korea. In the stochastic simulations via Monte Carlo, uncertainties in the soil parameters and input forcing are considered, and model ensembles showing good performance are selected separately for several seasonal periods. The presentation will focus on the characterization of seasonal variations of model parameters based on simulations with field measurements. In addition, structural limitations of the contemporary modeling method will be discussed.
Modeling the biophysical effects in a carbon beam delivery line by using Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Cho, Ilsung; Yoo, SeungHoon; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun
2016-09-01
The relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion-beam therapy. In this study, the biological effectiveness of a carbon-ion beam delivery system was investigated using Monte Carlo simulations. A carbon-ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation toolkit was used to simulate carbon-ion beam transport into media. Incident carbon-ion beams with energies in the range between 220 MeV/u and 290 MeV/u were chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE at 10% survival in human salivary-gland (HSG) cells. The RBE-weighted dose was estimated as a function of the penetration depth in the water phantom along the incident beam's direction. A biologically photon-equivalent spread-out Bragg peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the depth in the water phantom.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3-10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Mitchell, J. T.; Perepelitsa, D. V.; Tannenbaum, M. J.; ...
2016-05-23
Here, several methods of generating three constituent quarks in a nucleon are evaluated that explicitly maintain the nucleon's center of mass and desired radial distribution and can be used within Monte Carlo Glauber frameworks. The geometric models provided by each method are used to generate distributions over the number of constituent quark participants (N_qp) in p+p, d+Au, and Au+Au collisions. The results are compared with each other and to a previous result of N_qp calculations, without this explicit constraint, used in measurements of √s_NN = 200 GeV p+p, d+Au, and Au+Au collisions at the BNL Relativistic Heavy Ion Collider.
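To make the center-of-mass constraint concrete, here is a naive sketch: three quark positions are drawn from an assumed exponential radial profile and then re-centered on the nucleon's center of mass. Plain re-centering distorts the radial distribution, which is exactly the complication the evaluated methods are designed to handle; this sketch illustrates only the constraint itself.

```python
# Naive constituent-quark sampling with the center of mass forced to the
# origin; the exponential radial profile and its scale are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sample_quarks(n_nucleons, a=0.3):
    """Return (n_nucleons, 3, 3) quark positions, CM at origin, scale a [fm]."""
    quarks = []
    for _ in range(n_nucleons):
        r = rng.exponential(a, size=(3, 1))            # radial distances
        u = rng.normal(size=(3, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)  # random unit directions
        q = r * u                                      # three quark positions
        q -= q.mean(axis=0)                            # enforce CM constraint
        quarks.append(q)
    return np.array(quarks)

print(sample_quarks(2).mean(axis=1))                   # ~zero per nucleon
```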
Monte Carlo study on pulse response of underwater optical channel
NASA Astrophysics Data System (ADS)
Li, Jing; Ma, Yong; Zhou, Qunqun; Zhou, Bo; Wang, Hongyuan
2012-06-01
The pulse response of the underwater wireless optical channel is important for the analysis of channel capacity and error probability. Traditional vector radiative transfer theory (VRT) cannot account for the effect of the receiving aperture. On the other hand, general water tank experiments cannot acquire an accurate pulse response due to the limited time resolution of the photo-electronic detector. We present a Monte Carlo simulation model to extract the time-domain pulse response undersea. In comparison with the VRT model, a more accurate pulse response for practical ocean communications can be achieved through statistical analysis of the received photons. The proposed model is more suitable for the study of the underwater optical channel.
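A stripped-down sketch of such a time-domain photon Monte Carlo, assuming isotropic scattering, a single scattering coefficient, and a circular receiver aperture at range L; all coefficients are illustrative rather than measured ocean values.

```python
# Time-resolved photon Monte Carlo toy model: exponential free paths,
# isotropic scattering, arrival times recorded at a receiver disc.
import numpy as np

C_WATER = 3e8 / 1.33                      # light speed in water [m/s]

def pulse_response(n_photons=100_000, mu_s=0.2, albedo=0.9,
                   L=10.0, aperture=0.25, rng=np.random.default_rng(1)):
    """Return arrival times [s] of photons reaching the receiver."""
    times = []
    for _ in range(n_photons):
        pos = np.zeros(3)
        d = np.array([0.0, 0.0, 1.0])     # launched along +z
        path = 0.0
        while True:
            step = rng.exponential(1.0 / mu_s)
            # does this step cross the receiver plane z = L?
            if d[2] > 0 and pos[2] + step * d[2] >= L:
                s = (L - pos[2]) / d[2]
                hit = pos + s * d
                if hit[0] ** 2 + hit[1] ** 2 <= aperture ** 2:
                    times.append((path + s) / C_WATER)
                break                     # outside aperture: lost (simplified)
            pos += step * d
            path += step
            if rng.random() > albedo:     # absorbed
                break
            v = rng.normal(size=3)        # isotropic new direction
            d = v / np.linalg.norm(v)
    return np.array(times)
```

A histogram of the returned times is the channel's pulse response; the receiving-aperture effect that VRT misses enters through the disc test above.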
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is commonly combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models, a result of their complexity, it is often infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
William Salas; Steve Hagen
2013-01-01
This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...
NASA Astrophysics Data System (ADS)
Bury, Marcin; Van Haevermaet, Hans; Van Hameren, Andreas; Van Mechelen, Pierre; Kutak, Krzysztof; Serino, Mirko
2018-05-01
We present calculations of single inclusive jet transverse momentum and energy spectra at forward rapidity (5.2 < y < 6.6) in proton-lead collisions with √s_NN = 5.02 TeV. The predictions are obtained with the KaTie Monte Carlo event generator, which allows one to calculate interactions within the High Energy Factorisation framework. The tree-level matrix element results are subsequently interfaced with the CASCADE Monte Carlo event generator to account for hadronisation. The effects of the saturation of the gluon density, leading to suppression of the cross section, are investigated.
Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration
NASA Technical Reports Server (NTRS)
Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali
2007-01-01
We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive for using a random sequence is to solve real-world problems, it is more desirable to compare the quality of the sequences based on their performance for these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand function, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio as far as the accuracy of the integration is concerned.
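The consecutive-blocks idea is easy to reproduce: take successive 10-digit blocks of pi's decimal expansion as uniform variates and feed them to a plain Monte Carlo estimator. The sketch below assumes the mpmath package for the digits and integrates exp(x) on [0,1] as a stand-in test problem.

```python
# Monte Carlo integration driven by consecutive blocks of pi's digits.
import math
from mpmath import mp

def pi_uniforms(n, block=10):
    """Yield n uniforms in [0,1) from consecutive blocks of pi's digits."""
    mp.dps = n * block + 20                  # enough working precision
    digits = mp.nstr(mp.pi, n * block + 10).replace(".", "")[1:]
    for i in range(n):
        yield int(digits[i * block:(i + 1) * block]) / 10 ** block

def mc_integral(f, n=2000):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(u) for u in pi_uniforms(n)) / n

print(mc_integral(math.exp))                 # ~1.718; exact value is e - 1
```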
NASA Astrophysics Data System (ADS)
Alexander, Andrew William
Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP) provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. Energy and intensity modulated electron radiotherapy (MERT) is a promising developing treatment modality, which has the fundamental capability to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform-independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produce MERT treatment plans, based on dose-volume constraints, that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and optimization algorithms is described. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed-beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed-beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed-beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.
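As a toy illustration of kernel-based inverse planning, the sketch below chooses nonnegative beamlet weights so that the dose formed from pre-generated kernels matches a prescription in a least-squares sense. The actual MMCTP optimization uses dose-volume constraints; plain nonnegative least squares and the random "kernels" here only illustrate the structure of the problem.

```python
# Kernel-weight optimization toy: dose = K @ w with w >= 0, where each
# column of K would be a pre-generated Monte Carlo dose kernel.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_voxels, n_beamlets = 200, 30
K = rng.random((n_voxels, n_beamlets))       # stand-in for MC dose kernels
target = np.ones(n_voxels)                   # uniform unit prescription

w, residual = nnls(K, target)                # nonnegative beamlet weights
dose = K @ w
print(f"rms error: {np.sqrt(np.mean((dose - target) ** 2)):.3f}")
```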
Using the Quantile Mapping to improve a weather generator
NASA Astrophysics Data System (ADS)
Chen, Y.; Themessl, M.; Gobiet, A.
2012-04-01
We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo sampling, auto-regression, and empirical orthogonal functions (EOF). One of the important steps in the WG is the use of QM, through which all the variables, no matter what their original distributions, are transformed into normally distributed variables. The WG can therefore work on normally distributed variables, which greatly facilitates the treatment of random numbers in the WG. Monte Carlo sampling and auto-regression are used to generate the realizations; EOFs are employed to preserve spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30-year daily (hourly) data and 100-year monthly (daily) data as input, 100-year daily (hourly) data can be produced reasonably well. Some evaluation experiments with WGQM have been carried out over Austria, and the evaluation results will be presented.
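The QM step can be sketched as a rank-based transform to standard-normal scores and an inverse mapping back through the empirical quantiles; the plotting-position formula and the gamma-distributed test data below are illustrative choices, not the WGQM implementation.

```python
# Quantile mapping sketch: any sample -> standard normal scores and back.
import numpy as np
from scipy.stats import norm

def to_normal(x):
    """Rank-based transform of x to standard normal scores."""
    ranks = np.argsort(np.argsort(x)) + 1.0
    p = ranks / (len(x) + 1.0)               # plotting positions in (0, 1)
    return norm.ppf(p)

def from_normal(z, x_ref):
    """Map normal scores back onto the empirical distribution of x_ref."""
    return np.quantile(x_ref, norm.cdf(z))

precip = np.random.default_rng(3).gamma(2.0, 1.5, size=1000)  # skewed data
z = to_normal(precip)                # ~N(0,1), suitable for AR machinery
back = from_normal(z, precip)        # recovers the original margins
```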
Unbiased, scalable sampling of protein loop conformations from probabilistic priors.
Zhang, Yajia; Hauser, Kris
2013-01-01
Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion-binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.
Unbiased, scalable sampling of protein loop conformations from probabilistic priors
2013-01-01
Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion-binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175
Using Stan for Item Response Theory Models
ERIC Educational Resources Information Center
Ames, Allison J.; Au, Chi Hang
2018-01-01
Stan is a flexible probabilistic programming language providing full Bayesian inference through Hamiltonian Monte Carlo algorithms. The benefits of Hamiltonian Monte Carlo include improved efficiency and faster inference compared to other MCMC software implementations. Users can interface with Stan through a variety of computing…
Response Matrix Monte Carlo for electron transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.
1990-11-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, like condensed history, except it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu
2014-08-21
A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function is required. This numerical table can be generated a priori from the distribution function. The method has been tested on a united-atom model of alkanes, including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables are still well captured, as evidenced by the nearly perfect acceptance rates achieved. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane and more than 99% for propane. Since only one trial is required for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time consuming process, is no longer the time-dominating component of the simulation.
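The table-based sampling idea can be sketched in a few lines: tabulate the bending-angle Boltzmann density once, then draw trials by inverse-CDF lookup so that essentially every trial is acceptable. The harmonic bending constants below are typical united-atom force-field magnitudes, used here only for illustration.

```python
# Tabulated inverse-CDF sampling of a bending angle, built a priori.
import numpy as np

K_BEND = 62500.0                 # k_theta/k_B [K/rad^2], typical magnitude
THETA0 = np.deg2rad(114.0)       # equilibrium bending angle
T = 300.0                        # temperature [K]

theta = np.linspace(1e-4, np.pi - 1e-4, 2000)
pdf = np.sin(theta) * np.exp(-0.5 * K_BEND * (theta - THETA0) ** 2 / T)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]                   # the numerical table, generated once

def draw_bending_angles(n, rng=np.random.default_rng(5)):
    """Inverse-CDF draw from the tabulated bending-angle density."""
    return np.interp(rng.random(n), cdf, theta)

print(np.rad2deg(draw_bending_angles(5)))   # trials clustered near 114 deg
```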
Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweezy, Jeremy Ed
2016-01-21
The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE-formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron and gamma transport with multi-temperature treatment, static eigenvalue (k_eff and α) algorithms, a time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters.
Raman Monte Carlo simulation for light propagation for tissue with embedded objects
NASA Astrophysics Data System (ADS)
Periyasamy, Vijitha; Jaafar, Humaira Bte; Pramanik, Manojit
2018-02-01
Monte Carlo (MC) simulation is one of the prominent simulation techniques and is rapidly becoming the model of choice for studying light-tissue interaction. Monte Carlo simulation for light transport in multi-layered tissue (MCML) was adapted and modelled with different geometries by integrating embedded objects of various shapes (i.e., sphere, cylinder, cuboid and ellipsoid) into the multi-layered structure. These geometries are useful in providing realistic tissue structures, such as models of lymph nodes, tumors, blood vessels, the head and other simulation media. MC simulations were performed on various geometric media. The MCML simulation with embedded objects (MCML-EO) was extended to propagate photons in the defined medium with Raman scattering, and the location of Raman photon generation is recorded. Simulations were performed on a modelled breast tissue with tumors (spherical and ellipsoidal) and blood vessels (cylindrical). Results are presented as both A-line and B-line scans for the embedded objects to determine the spatial locations where Raman photons were generated. Studies were done for different Raman probabilities.
NASA Astrophysics Data System (ADS)
Robinson, Mitchell; Butcher, Ryan; Coté, Gerard L.
2017-02-01
Monte Carlo modeling of photon propagation has been used in the examination of particular areas of the body to further enhance the understanding of light propagation through tissue. This work seeks to improve upon the established simulation methods through more accurate representations of the simulated tissues in the wrist as well as the characteristics of the light source. The Monte Carlo simulation program was developed using Matlab. Generation of the different tissue domains, such as muscle, vasculature, and bone, was performed in Solidworks, where each domain was saved as a separate .stl file that was read into the program. The light source was altered to account for both the viewing angle of the simulated LED and the nominal diameter of the source. It is believed that the use of these more accurate models generates results that more closely match those seen in vivo, and can be used to better guide the design of optical wrist-worn measurement devices.
Monte Carlo simulation study of positron generation in ultra-intense laser-solid interactions
NASA Astrophysics Data System (ADS)
Yan, Yonghong; Wu, Yuchi; Zhao, Zongqing; Teng, Jian; Yu, Jinqing; Liu, Dongxiao; Dong, Kegong; Wei, Lai; Fan, Wei; Cao, Leifeng; Yao, Zeen; Gu, Yuqiu
2012-02-01
The Monte Carlo transport code Geant4 has been used to study positron production during the transport of laser-produced hot electrons in solid targets. The dependence of the positron yield on target parameters and the hot-electron temperature has been investigated in thick targets (mm-scale), where only the Bethe-Heitler process is considered. The results show that Au is the best target material, and an optimal target thickness exists for generating abundant positrons at a given hot-electron temperature. The positron angular distributions and energy spectra for different hot-electron temperatures were studied without considering the sheath field on the back of the target. The effect of the target rear sheath field on positron acceleration was then studied by numerical simulation, including an electrostatic field in the Monte Carlo model. The results show that the positron energy can be enhanced and quasi-monoenergetic positrons are observed owing to the effect of the sheath field.
Heterogeneous Hardware Parallelism Review of the IN2P3 2016 Computing School
NASA Astrophysics Data System (ADS)
Lafage, Vincent
2017-11-01
Parallel and hybrid Monte Carlo computation. The Monte Carlo method is the main workhorse for the computation of particle physics observables. This paper provides an overview of various HPC technologies that can be used today: multicore (OpenMP, HPX) and manycore (OpenCL). The rewrite of a twenty-year-old Fortran 77 Monte Carlo code illustrates the various programming paradigms in use beyond the language implementation. The problem of parallel random number generation is addressed. We also give a short report on the one-week school dedicated to these recent approaches, which took place at École Polytechnique in May 2016.
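On the parallel random-number problem, one standard remedy, shown here with NumPy as a convenient stand-in rather than the school's own material, is to spawn statistically independent child streams from one seed instead of sharing or offsetting a single generator across workers.

```python
# Independent per-worker random streams via seed spawning.
import numpy as np

root = np.random.SeedSequence(2016)
child_seeds = root.spawn(8)                          # one per thread/rank
rngs = [np.random.default_rng(s) for s in child_seeds]

# each worker draws from its own statistically independent stream
samples = [rng.standard_normal(4) for rng in rngs]
print(samples[0], samples[1])
```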
The Impact of Monte Carlo Dose Calculations on Intensity-Modulated Radiation Therapy
NASA Astrophysics Data System (ADS)
Siebers, J. V.; Keall, P. J.; Mohan, R.
The effect of dose calculation accuracy for IMRT was studied by comparing different dose calculation algorithms. A head and neck IMRT plan was optimized using a superposition dose calculation algorithm. Dose was re-computed for the optimized plan using both Monte Carlo and pencil beam dose calculation algorithms to generate patient and phantom dose distributions. Tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP) were computed to estimate the plan outcome. For the treatment plan studied, Monte Carlo best reproduced the phantom dose measurements; the Monte Carlo TCP was slightly lower than the superposition and pencil beam results, and the NTCP values differed little.
The Rational Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
Refined elasticity sampling for Monte Carlo-based identification of stabilizing network patterns.
Childs, Dorothee; Grimbs, Sergio; Selbig, Joachim
2015-06-15
Structural kinetic modelling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a representation of the system's Jacobian matrix that depends solely on the network structure, steady state measurements, and the elasticities at the steady state. For a measured steady state, stability criteria can be derived by generating a large number of SKMs with randomly sampled elasticities and evaluating the resulting Jacobian matrices. The elasticity space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Here, we extend this approach by examining the kinetic feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, we show that the majority of sampled SKMs would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion is formulated that excludes such infeasible models. After evaluating the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle and the intrinsic mechanisms responsible for their stability or instability. The findings of the statistical elasticity analysis confirm that several elasticities are jointly coordinated to control stability and that the main source of potential instabilities is mutations in the enzyme alpha-ketoglutarate dehydrogenase.
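The Monte Carlo core of the SKM approach can be caricatured in a few lines: sample elasticities, assemble a Jacobian surrogate, and classify the steady state by the sign of the leading eigenvalue. The 3x3 "network" below is a toy stand-in, not a real SKM parameterisation, and no feasibility screening is applied.

```python
# Toy SKM-style stability scan over randomly sampled elasticities.
import numpy as np

rng = np.random.default_rng(11)
S = np.array([[ 1, -1,  0],      # toy stoichiometric matrix
              [ 0,  1, -1],
              [-1,  0,  1]], dtype=float)

def stable_fraction(n_models=10_000):
    stable = 0
    for _ in range(n_models):
        E = rng.uniform(0.0, 1.0, size=(3, 3))   # sampled elasticities
        J = S @ E                                # Jacobian surrogate
        if np.max(np.linalg.eigvals(J).real) < 0:
            stable += 1                          # perturbations decay
    return stable / n_models

print(f"fraction of stable sampled models: {stable_fraction():.3f}")
```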
Development and application of a hybrid transport methodology for active interrogation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Royston, K.; Walters, W.; Haghighat, A.
A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of the gamma source distribution from (n, γ) interactions; iii) determination of the gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water cargo. To complete the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. In the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma current at a detector window. The AIMS (Active Interrogation for Monitoring Special-Nuclear-Materials) software has been written to output the gamma current for a source-detector assembly scanning across a cargo container using the pre-calculated values, taking significantly less time than a reference MCNP5 calculation. (authors)
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate solutions without increasing the necessary computational resources.
Monte Carlo replica-exchange based ensemble docking of protein conformations.
Zhang, Zhe; Ehmann, Uwe; Zacharias, Martin
2017-05-01
A replica-exchange Monte Carlo (REMC) ensemble docking approach has been developed that allows efficient exploration of protein-protein docking geometries. In addition to Monte Carlo steps in the translation and orientation of the binding partners, possible conformational changes upon binding are included based on Monte Carlo selection of protein conformations stored as ordered, pregenerated conformational ensembles. The conformational ensembles of each binding partner protein were generated by three different approaches starting from the unbound partner protein structure, with a range spanning a root mean square deviation of 1-2.5 Å with respect to the unbound structure. Because MC sampling selects appropriate partner conformations on the fly, the approach is not limited by the number of conformations in the ensemble, in contrast to ensemble cross docking of each conformer pair. Although only a fraction of the generated conformers was in closer agreement with the bound structure, the REMC ensemble docking approach achieved improved docking results compared to REMC docking with only the unbound partner structures or using docking energy minimization methods. The approach has significant potential for further improvement in combination with more realistic structural ensembles and better docking scoring functions.
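The replica-exchange ingredient in isolation looks like the sketch below: neighbouring temperatures attempt to swap configurations with the Metropolis criterion exp[(1/T_i - 1/T_j)(E_i - E_j)], with k_B = 1 assumed; the energies and temperature ladder are placeholders, not docking scores.

```python
# Neighbour swap attempts for a replica-exchange Monte Carlo ladder.
import numpy as np

rng = np.random.default_rng(13)

def attempt_swaps(energies, temps):
    """One sweep of neighbour swap attempts; returns the replica ordering."""
    order = list(range(len(temps)))
    for i in range(len(temps) - 1):
        a, b = order[i], order[i + 1]
        delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (energies[a] - energies[b])
        if delta >= 0 or rng.random() < np.exp(delta):
            order[i], order[i + 1] = b, a        # exchange the replicas
    return order

temps = [1.0, 1.5, 2.2, 3.3]                     # placeholder ladder
energies = [-12.0, -9.5, -8.1, -5.0]             # placeholder scores
print(attempt_swaps(energies, temps))
```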
Optimization of beam shaping assembly based on D-T neutron generator and dose evaluation for BNCT
NASA Astrophysics Data System (ADS)
Naeem, Hamza; Chen, Chaobin; Zheng, Huaqing; Song, Jing
2017-04-01
The feasibility of developing an epithermal neutron beam for a boron neutron capture therapy (BNCT) facility based on a high intensity D-T fusion neutron generator (HINEG), using the Monte Carlo code SuperMC (Super Monte Carlo simulation program for nuclear and radiation processes), is investigated in this study. The Monte Carlo code SuperMC is used to determine and optimize the final configuration of the beam shaping assembly (BSA). The optimal BSA design is a cylindrical geometry consisting of a natural uranium sphere (14 cm) as a neutron multiplier, AlF3 and TiF3 as moderators (20 cm each), Cd (1 mm) as a thermal neutron filter, Bi (5 cm) as a gamma shield, and Pb as a reflector and collimator to guide neutrons towards the exit window. The epithermal neutron beam flux of the proposed model is 5.73 × 10^9 n/cm^2·s, and the other dosimetric parameters for BNCT reported in IAEA-TECDOC-1223 have been verified. The phantom dose analysis shows that the designed BSA is accurate, efficient and suitable for BNCT applications. Thus, the Monte Carlo code SuperMC is concluded to be capable of simulating the BSA and the dose calculation for BNCT, and a high epithermal flux can be achieved using the proposed BSA.
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. The transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for the estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are also reported.
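For the moment estimates themselves, a minimal sketch: the centroid and second central moment of a concentration cloud, and a finite-difference estimate of the longitudinal macrodispersivity A_L = (1/2) d(sigma^2)/d(mean displacement). One spatial dimension and a synthetic Gaussian cloud are simplifying assumptions, not the paper's setup.

```python
# Cloud moments and a finite-difference macrodispersivity estimate.
import numpy as np

def cloud_moments(x, c):
    """Centroid and second central moment of concentrations c at points x."""
    m0 = c.sum()
    mean = (x * c).sum() / m0
    var = ((x - mean) ** 2 * c).sum() / m0
    return mean, var

def macrodispersivity(mean1, var1, mean2, var2):
    """A_L = (1/2) * d(var)/d(mean displacement), finite-difference form."""
    return 0.5 * (var2 - var1) / (mean2 - mean1)

# two snapshots of a synthetic 1D cloud spreading as it advects
x = np.linspace(0.0, 100.0, 501)
c1 = np.exp(-((x - 20.0) ** 2) / (2 * 4.0))   # variance 4 at time 1
c2 = np.exp(-((x - 60.0) ** 2) / (2 * 9.0))   # variance 9 at time 2
m1, v1 = cloud_moments(x, c1)
m2, v2 = cloud_moments(x, c2)
print(macrodispersivity(m1, v1, m2, v2))      # ~0.0625 = (9-4)/2/40
```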
Waller, Niels G
2016-01-01
For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
USDA-ARS?s Scientific Manuscript database
Computer Monte Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...
Using Monte Carlo Techniques to Demonstrate the Meaning and Implications of Multicollinearity
ERIC Educational Resources Information Center
Vaughan, Timothy S.; Berry, Kelly E.
2005-01-01
This article presents an in-class Monte Carlo demonstration, designed to illustrate for students the implications of multicollinearity in a multiple regression study. In the demonstration, students already familiar with multiple regression concepts are presented with a scenario in which the "true" relationship between the response and…
Rocco, Noemi; Lovato, Alessandro; Benhar, Omar
2016-12-23
Here, the electromagnetic responses of carbon obtained from the Green's function Monte Carlo and spectral function approaches using the same dynamical input are compared in the kinematical region corresponding to momentum transfer in the range 300-570 MeV. The results of our analysis, aimed at pinning down the limits of applicability of the approximations involved in the two schemes, indicate that the factorization ansatz underlying the spectral function formalism provides remarkably accurate results down to momentum transfer as low as 300 MeV. On the other hand, it appears that at 570 MeV relativistic corrections to the electromagnetic current not included in the Monte Carlo calculations may play a significant role in the transverse channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocco, Noemi; Lovato, Alessandro; Benhar, Omar
Here, the electromagnetic responses of carbon obtained from the Green's function Monte Carlo and spectral function approaches using the same dynamical input are compared in the kinematical region corresponding to momentum transfer in the range 300-570 MeV. The results of our analysis, aimed at pinning down the limits of applicability of the approximations involved in the two schemes, indicate that the factorization ansatz underlying the spectral function formalism provides remarkably accurate results down to momentum transfer as low as 300 MeV. On the other hand, it appears that at 570 MeV relativistic corrections to the electromagnetic current not included in the Monte Carlo calculations may play a significant role in the transverse channel.
Acha, Robert; Brey, Richard; Capello, Kevin
2013-02-01
A torso phantom that serves as a standard for the intercomparison and intercalibration of detector systems used to measure low-energy photons from radionuclides, such as americium deposited in the lungs, was developed by the Lawrence Livermore National Laboratory (LLNL). DICOM images of the second-generation Human Monitoring Laboratory-Lawrence Livermore National Laboratory (HML-LLNL) torso phantom were segmented and converted into three-dimensional (3D) voxel phantoms to simulate, using a Monte Carlo technique, the response of the high purity germanium (HPGe) detector systems found in the new HML lung counter. The photon energies of interest in this study were 17.5, 26.4, 45.4, 59.5, 122, 244, and 344 keV. The detection efficiencies at these photon energies were predicted for different chest wall thicknesses (1.49 to 6.35 cm) and compared to measured values obtained with lungs containing (241)Am (34.8 kBq) and (152)Eu (10.4 kBq). It was observed that no statistically significant differences exist at the 95% confidence level between the mean values of the simulated and measured detection efficiencies. Comparisons between the simulated and measured detection efficiencies reveal a variation of 20% at 17.5 keV and 1% at 59.5 keV. It was found that small changes in the formulation of the tissue substitute material caused no significant change in the outcome of the Monte Carlo simulations.
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry.
Bostani, Maryam; Mueller, Jonathon W; McMillan, Kyle; Cody, Dianna D; Cagnon, Chris H; DeMarco, John J; McNitt-Gray, Michael F
2015-02-01
The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. The calculated mean percent difference between TLD measurements and Monte Carlo simulations was -4.9% with standard deviation of 8.7% and a range of -22.7% to 5.7%. The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Top Quark Mass Calibration for Monte Carlo Event Generators
NASA Astrophysics Data System (ADS)
Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H.; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W.
2016-12-01
The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, m_t^MC. Because of hadronization and parton-shower dynamics, relating m_t^MC to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e+e- 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to pythia 8.205, m_t^MC differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_t^MC ≃ m_{t,1 GeV}^{MSR}.
A new low-energy bremsstrahlung generator for GEANT4.
Peralta, L; Rodrigues, P; Trindade, A; Pia, M G
2005-01-01
The 2BN bremsstrahlung cross section is a well-adapted distribution for describing radiative processes at low electron kinetic energies (E_k < 500 keV). In this work a method to implement this distribution in a Monte Carlo generator is developed.
Evaluation of runaway-electron effects on plasma-facing components for NET
NASA Astrophysics Data System (ADS)
Bolt, H.; Calén, H.
1991-03-01
Runaway electrons which are generated during disruptions can cause serious damage to plasma facing components in a next generation device like NET. A study was performed to quantify the response of NET plasma facing components to runaway-electron impact. Monte Carlo computations were performed to determine the energy deposition in the component materials. Since the subsurface metal structures can be strongly heated under runaway-electron impact, damage threshold values for the thermal excursions were derived from the computed results. These damage thresholds are strongly dependent on the materials selection and the component design. For a carbon-molybdenum divertor with 10 and 20 mm carbon armour thickness and 1 degree electron incidence, the damage thresholds are 100 MJ/m^2 and 220 MJ/m^2, respectively. The thresholds for a carbon-copper divertor under the same conditions are about 50% lower. On the first wall, damage is anticipated for energy depositions above 180 MJ/m^2.
Inflation of Unreefed and Reefed Extraction Parachutes
NASA Technical Reports Server (NTRS)
Ray, Eric S.; Varela, Jose G.
2015-01-01
Data from the Orion and several other test programs have been used to reconstruct inflation parameters for 28 ft D_o extraction parachutes as well as the parent aircraft pitch response during extraction. The inflation force generated by extraction parachutes is recorded directly during tow tests but is usually inferred from the payload accelerometer during Low Velocity Airdrop Delivery (LVAD) flight test extractions. Inflation parameters are dependent on the type of parent aircraft, number of canopies, and standard vs. high altitude extraction conditions. For standard altitudes, single canopy inflations are modeled as infinite mass, but the non-symmetric inflations in a cluster are modeled as finite mass. High altitude extractions have necessitated reefing the extraction parachutes, which are best modeled as infinite mass for those conditions. Distributions of aircraft pitch profiles and inflation parameters have been generated for use in Monte Carlo simulations of payload extractions.
The responses of three kinds of passive dosimeters to secondary cosmic rays in the lower atmosphere.
Yang, Zhen; Chen, Bo; Zhuo, Weihai; Fan, Dunhuang; Zhao, Chao; Zhang, Yu
2015-12-01
For accurate measurements of the secondary cosmic rays using passive dosimeters, the relative responses of the thermoluminescence dosimeter (TLD), optically stimulated luminescence (OSL) dosimeter, and radiophotoluminescent glass dosimeter (RPLGD) were studied. The cosmic-ray shower generator was used to simulate the secondary cosmic rays at sea level. Monte Carlo simulations were performed to calculate the air kerma and the absorbed doses in each kind of dosimeter. The results showed that, compared with their responses to gamma rays of (137)Cs, the relative responses of the TLD, OSL, and RPLGD were 0.786, 0.707, and 0.735 to the hard component of cosmic rays, respectively, and 0.904, 0.838, and 0.857 to the soft component, respectively. To verify the simulation results, an in situ measurement with the three kinds of dosimeters was performed at the same place. The results indicated that the secondary cosmic rays monitored with the three kinds of dosimeters were consistent with each other provided their relative responses were taken into account.
The responses of three kinds of passive dosimeters to secondary cosmic rays in the lower atmosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhen; Chen, Bo, E-mail: bochenfys@fudan.edu.cn; Zhuo, Weihai
For accurate measurements of the secondary cosmic rays using passive dosimeters, the relative responses of the thermoluminescence dosimeter (TLD), optically stimulated luminescence (OSL) dosimeter, and radiophotoluminescent glass dosimeter (RPLGD) were studied. The cosmic-ray shower generator was used to simulate the secondary cosmic rays at sea level. Monte Carlo simulations were performed to calculate the air kerma and the absorbed doses in each kind of dosimeter. The results showed that, compared with their responses to gamma rays of (137)Cs, the relative responses of the TLD, OSL, and RPLGD were 0.786, 0.707, and 0.735 to the hard component of cosmic rays, respectively, and 0.904, 0.838, and 0.857 to the soft component, respectively. To verify the simulation results, an in situ measurement with the three kinds of dosimeters was performed at the same place. The results indicated that the secondary cosmic rays monitored with the three kinds of dosimeters were consistent with each other provided their relative responses were taken into account.
The responses of three kinds of passive dosimeters to secondary cosmic rays in the lower atmosphere
NASA Astrophysics Data System (ADS)
Yang, Zhen; Chen, Bo; Zhuo, Weihai; Fan, Dunhuang; Zhao, Chao; Zhang, Yu
2015-12-01
For accurate measurements of the secondary cosmic rays using passive dosimeters, the relative responses of the thermoluminescence dosimeter (TLD), optically stimulated luminescence (OSL) dosimeter, and radiophotoluminescent glass dosimeter (RPLGD) were studied. The cosmic-ray shower generator was used to simulate the secondary cosmic rays at sea level. Monte Carlo simulations were performed to calculate the air kerma and the absorbed doses in each kind of dosimeter. The results showed that, compared with their responses to gamma rays of (137)Cs, the relative responses of the TLD, OSL, and RPLGD were 0.786, 0.707, and 0.735 to the hard component of cosmic rays, respectively, and 0.904, 0.838, and 0.857 to the soft component, respectively. To verify the simulation results, an in situ measurement with the three kinds of dosimeters was performed at the same place. The results indicated that the secondary cosmic rays monitored with the three kinds of dosimeters were consistent with each other provided their relative responses were taken into account.
Khajepour, Abolhasan; Rahmani, Faezeh
2017-01-01
In this study, a (90)Sr radioisotope thermoelectric generator (RTG) with milliwatt-level power was designed to operate in a determined temperature range (300-312 K). For this purpose, a combination of analytical and Monte Carlo methods with the ANSYS and COMSOL software as well as the MCNP code was used. The designed RTG contains (90)Sr as a radioisotope heat source (RHS) and 127 coupled thermoelectric modules (TEMs) based on bismuth telluride. Kapton (2.45 mm in thickness) and Cryotherm sheets (0.78 mm in thickness) were selected as the thermal insulators of the RHS, and a stainless steel container was used as the generator chamber. The initial design of the RHS geometry was performed according to the amount of radioactive material (strontium titanate) as well as heat transfer calculations and mechanical strength considerations. According to the Monte Carlo simulation performed with the MCNP code, approximately 0.35 kCi of (90)Sr is sufficient to generate the heat power in the RHS. To determine the optimal design of the RTG, the distribution of temperature as well as the dissipated heat and input power to the module were calculated in different parts of the generator using the ANSYS software. The output voltage according to the temperature distribution on the TEM was calculated using COMSOL. Optimization of the dimensions of the RHS and heat insulator was performed to match the average temperature of the hot plate of the TEM to the determined hot temperature value. The designed RTG generates 8 mW of power with an efficiency of 1%. This combined-method approach can be used for the precise design of various types of RTGs.
The timing resolution of scintillation-detector systems: Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Choong, Woon-Seng
2009-11-01
Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and decreasing transit time spread. However, only substantial improvement in the timing resolution is obtained with improved transit time spread if the first photoelectron timing is less than the transit time spread. While the calculated timing performance does not seem to be affected by the pixel size of the crystal, it improves for an etched crystal compared to a polished crystal. In addition, the calculated timing resolution degrades with increasing crystal length. These observations can be explained by studying the initial photoelectron rate. Experimental measurements provide reasonably good agreement with the calculated timing resolution. The Monte Carlo analysis developed in this work will allow us to optimize the scintillation detectors for timing and to understand the physical factors limiting their performance.
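A reduced version of this timing model is easy to reproduce: draw photoelectron times from a bi-exponential rate function (the sum of two independent exponential variates has exactly the rise-minus-decay pulse shape), smear by the PMT transit-time spread, and take the first photoelectron per event. The parameter values below are representative of an LSO-like detector, not those of the paper.

```python
# First-photoelectron timing Monte Carlo with a bi-exponential rate function.
import numpy as np

rng = np.random.default_rng(17)

def first_pe_times(n_events=5000, n_pe=2000,
                   tau_rise=0.5e-9, tau_decay=40e-9, tts_sigma=0.12e-9):
    t0 = []
    for _ in range(n_events):
        n = rng.poisson(n_pe)                       # photoelectron yield
        # sum of two exponentials has density ~ exp(-t/tau_d) - exp(-t/tau_r)
        t = rng.exponential(tau_decay, n) + rng.exponential(tau_rise, n)
        t += rng.normal(0.0, tts_sigma, n)          # PMT transit-time spread
        t0.append(t.min())                          # first-photoelectron time
    return np.array(t0)

times = first_pe_times()
fwhm = 2.355 * times.std()                          # Gaussian approximation
print(f"single-detector timing FWHM ~ {fwhm * 1e12:.0f} ps")
```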
The timing resolution of scintillation-detector systems: Monte Carlo analysis.
Choong, Woon-Seng
2009-11-07
Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and decreasing transit time spread. However, only substantial improvement in the timing resolution is obtained with improved transit time spread if the first photoelectron timing is less than the transit time spread. While the calculated timing performance does not seem to be affected by the pixel size of the crystal, it improves for an etched crystal compared to a polished crystal. In addition, the calculated timing resolution degrades with increasing crystal length. These observations can be explained by studying the initial photoelectron rate. Experimental measurements provide reasonably good agreement with the calculated timing resolution. The Monte Carlo analysis developed in this work will allow us to optimize the scintillation detectors for timing and to understand the physical factors limiting their performance.
Comparison of UWCC MOX fuel measurements to MCNP-REN calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhold, M.; Baker, M.; Jie, R.
1998-12-31
The development of neutron coincidence counting has greatly improved the accuracy and versatility of neutron-based techniques to assay fissile materials. Today, the shift register analyzer connected to either a passive or active neutron detector is widely used by both domestic and international safeguards organizations. The continued development of these techniques and detectors makes extensive use of predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model, as it is currently used, fails to accurately predict detector response in highly multiplying media such as mixed-oxide (MOX) light water reactor fuel assemblies. For this reason, efforts have been made to modify the currently used Monte Carlo codes and to develop new analytical methods so that this model is not required to predict detector response. The authors describe their efforts to modify a widely used Monte Carlo code for this purpose and also compare calculational results with experimental measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Robert Cameron; Steiner, Don
2004-06-15
The generation of runaway electrons during a thermal plasma disruption is a concern for the safe and economical operation of a tokamak power system. Runaway electrons have high energy, 10 to 300 MeV, and may potentially cause extensive damage to plasma-facing components (PFCs) through large temperature increases, melting of metallic components, surface erosion, and possible burnout of coolant tubes. The EPQ code system was developed to simulate the thermal response of PFCs to a runaway electron impact. The EPQ code system consists of several parts: UNIX scripts that control the operation of an electron-photon Monte Carlo code to calculate the interaction of the runaway electrons with the plasma-facing materials; a finite difference code to calculate the thermal response, melting, and surface erosion of the materials; a code to process, scale, transform, and convert the electron Monte Carlo data to volumetric heating rates for use in the thermal code; and several minor and auxiliary codes for the manipulation and postprocessing of the data. The electron-photon Monte Carlo code used was Electron-Gamma-Shower (EGS), developed and maintained by the National Research Council of Canada. The Quick-Therm-Two-Dimensional-Nonlinear (QTTN) thermal code solves the two-dimensional cylindrical modified heat conduction equation using the Quickest third-order accurate and stable explicit finite difference method and is capable of tracking melting or surface erosion. The EPQ code system is validated using a series of analytical solutions and simulations of experiments. The verification of the QTTN thermal code against analytical solutions shows that the code with the Quickest method is better than 99.9% accurate. The benchmarking of the EPQ code system and QTTN versus experiments showed that QTTN's erosion tracking method is accurate within 30% and that EPQ is able to predict the occurrence of melting within the proper time constraints. QTTN and EPQ are thus verified and validated as able to calculate the temperature distribution, phase change, and surface erosion successfully.
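A minimal stand-in for the thermal step, assuming a 1D slab and the simple FTCS explicit scheme rather than the third-order Quickest method QTTN actually uses; the material constants are roughly those of molybdenum and the fixed boundaries are a simplification.

```python
# One explicit finite-difference step of 1D heat conduction with a
# volumetric heating term (as produced from the electron Monte Carlo data).
import numpy as np

def step_temperature(T, q_vol, dt, dx, k=138.0, rho=10_220.0, cp=250.0):
    """Advance T [K] on a uniform grid by dt; q_vol [W/m^3] is the heating."""
    alpha = k / (rho * cp)                    # thermal diffusivity [m^2/s]
    assert alpha * dt / dx ** 2 <= 0.5, "explicit stability limit violated"
    T_new = T.copy()                          # boundaries held fixed
    T_new[1:-1] += dt * (alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
                         + q_vol[1:-1] / (rho * cp))
    return T_new
```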
Magnetic properties of dendrimer structures with different coordination numbers: A Monte Carlo study
NASA Astrophysics Data System (ADS)
Masrour, R.; Jabar, A.
2016-11-01
We investigate the magnetic properties of Cayley trees of large molecules with a dendrimer structure using Monte Carlo simulations. The thermal magnetization and magnetic susceptibility of the dendrimer structure are given for different coordination numbers, Z=3, 4, 5, and different generations, g=3 and 2. The variation of the magnetization with the exchange interactions and crystal field is given for this system. The magnetic hysteresis cycles have been established.
NASA Technical Reports Server (NTRS)
Woo, Myeung-Jouh; Greber, Isaac
1995-01-01
Molecular dynamics simulation is used to study the piston-driven shock wave at Mach 1.5, 3, and 10. A shock tube, shaped as a circular cylinder, is filled with hard-sphere molecules having a Maxwellian thermal velocity distribution and zero mean velocity. The piston moves and a shock wave is generated. All collisions are specular, including those between the molecules and the computational boundaries, so that the shock development is entirely causal, with no imposed statistics. The structure of the generated shock is examined in detail, and the wave speed; profiles of density, velocity, and temperature; and shock thickness are determined. The results are compared with published results of other methods, especially the direct simulation Monte Carlo (DSMC) method. Property profiles are similar to those generated by the DSMC method. The shock wave thicknesses are smaller than the DSMC results, but larger than those of the other methods. Simulation of a shock wave, which is one-dimensional, is a severe test of the molecular dynamics method, which is always three-dimensional. A major challenge of the thesis is to examine the capability of the molecular dynamics method by choosing this difficult task.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new parallel version of TRAM implemented on a graphics processing unit (GPU), and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
NASA Astrophysics Data System (ADS)
Salimi, E.; Rahighi, J.; Sardari, D.; Mahdavi, S. R.; Lamehi Rachti, M.
2014-12-01
Gas bremsstrahlung is generated in high-energy electron storage rings through the interaction of the electron beam with residual gas molecules in the vacuum chamber. In this paper, Monte Carlo calculations have been performed to evaluate the radiation hazard due to gas bremsstrahlung in the Iranian Light Source Facility (ILSF) insertion devices. Shutter/stopper dimensions are determined, and the dose rate from photoneutrons produced via the giant resonance photonuclear reaction inside the shutter/stopper is also obtained. Other characteristics of gas bremsstrahlung, such as photon fluence, energy spectrum, angular distribution, and equivalent dose in a tissue-equivalent phantom, have also been investigated with the FLUKA Monte Carlo code.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of neutron monitor hardware as a random number generator for normal and uniform distributions. Data tables from acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, one could settle for an efficient seed generator feeding a faster algorithmic random number generator, or create a buffer.
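A minimal sketch of this pipeline on synthetic data (a real acquisition channel would replace the fabricated count series): spline detrending, standardization to a normal variate, and the normal CDF used to obtain uniform numbers. All parameter values here are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

rng = np.random.default_rng(0)
t = np.arange(1440.0)                                # one day of 1-min counts
raw = 100 + 5 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 2, t.size)

trend = UnivariateSpline(t, raw, s=4 * t.size)(t)    # smooth spline fit
resid = raw - trend                                  # stochastic component
z = (resid - resid.mean()) / resid.std()             # standard normal variate
u = norm.cdf(z)                                      # uniform numbers via the CDF

print("mean, var of z:", z.mean().round(3), z.var().round(3))
print("uniform sample range:", u.min().round(3), u.max().round(3))
```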
Monte Carlo Simulation of a Segmented Detector for Low-Energy Electron Antineutrinos
NASA Astrophysics Data System (ADS)
Qomi, H. Akhtari; Safari, M. J.; Davani, F. Abbasi
2017-11-01
Detection of low-energy electron antineutrinos is important for several purposes, such as ex-vessel reactor monitoring, neutrino oscillation studies, etc. The inverse beta decay (IBD) interaction is responsible for the detection mechanism in (organic) plastic scintillation detectors. Here, a detailed study is presented dealing with the radiation and optical transport simulation of a typical segmented antineutrino detector with the Monte Carlo method using the MCNPX and FLUKA codes. This study shows different aspects of the detector, benefiting from the inherent capabilities of the Monte Carlo simulation codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward-Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
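The CADIS prescription itself is compact enough to illustrate with toy numbers. In this hedged sketch, the adjoint flux values and source distribution are invented; the point is only the algebra relating the adjoint (importance) flux, the biased source, and the weight-window centers.

```python
import numpy as np

phi_adj = np.array([1e-6, 1e-4, 1e-2, 1.0])   # adjoint flux per region (toy values)
q = np.array([0.7, 0.2, 0.08, 0.02])          # analog source distribution

R = np.sum(q * phi_adj)                       # estimated detector response
q_biased = q * phi_adj / R                    # CADIS-biased source pdf
w_center = R / phi_adj                        # weight-window centers: w * phi_adj = const

# A particle born in region i starts with weight q/q_biased = R/phi_adj, which
# lies at its window center by construction: "consistent" source and transport biasing.
for i in range(4):
    print(f"region {i}: q_biased={q_biased[i]:.3e}, w_center={w_center[i]:.3e}")
```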
Dielectric response of periodic systems from quantum Monte Carlo calculations.
Umari, P; Williamson, A J; Galli, Giulia; Marzari, Nicola
2005-11-11
We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation of the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations in which the fixed point of the polarization is estimated from the average over an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.
Liu, Jiali; Yang, Qunyu; Bai, Yunxiang; Cao, Zhen
2014-01-01
A fluorescence telescope tower array has been designed to measure cosmic rays in the energy range of 10¹⁷-10¹⁸ eV. A full Monte Carlo simulation, including air shower production, light generation and propagation, detector response, electronics, and trigger system, has been developed for that purpose. Using such a simulation tool, the detector configuration, which includes one main tower array and two side-trigger arrays, 24 telescopes in total, has been optimized. The aperture and the event rate have been estimated. Furthermore, the performance of the X_max technique in measuring composition has also been studied.
NASA Technical Reports Server (NTRS)
Truscello, V.
1972-01-01
A major concern in the integration of a radioisotope thermoelectric generator (RTG) with a spacecraft designed to explore the outer planets is the effect of the emitted radiation on the normal operation of scientific instruments. The necessary techniques and tools were developed to allow accurate calculation of the neutron and gamma spectra emanating from the RTG. The specific sources of radiation were identified and quantified. Monte Carlo techniques were then employed to perform the nuclear transport calculations. The results of these studies are presented. An extensive experimental program was initiated to measure the response of a number of scientific components to the nuclear radiation.
NASA Astrophysics Data System (ADS)
Adriani, O.; Albergo, S.; Auditore, L.; Basti, A.; Berti, E.; Bigongiari, G.; Bonechi, L.; Bonechi, S.; Bongi, M.; Bonvicini, V.; Bottai, S.; Brogi, P.; Carotenuto, G.; Castellini, G.; Cattaneo, P. W.; Daddi, N.; D'Alessandro, R.; Detti, S.; Finetti, N.; Italiano, A.; Lenzi, P.; Maestro, P.; Marrocchesi, P. S.; Mori, N.; Orzan, G.; Olmi, M.; Pacini, L.; Papini, P.; Pellegriti, M. G.; Rappoldi, A.; Ricciarini, S.; Sciuto, A.; Spillantini, P.; Starodubtsev, O.; Stolzi, F.; Suh, J. E.; Sulaj, A.; Tiberio, A.; Tricomi, A.; Trifiro', A.; Trimarchi, M.; Vannuccini, E.; Zampa, G.; Zampa, N.
2017-11-01
The direct detection of high-energy cosmic rays up to the PeV region is one of the major challenges for the next generation of space-borne cosmic-ray detectors. The physics performance will be primarily determined by their geometrical acceptance and energy resolution. CaloCube is a homogeneous calorimeter whose geometry allows an almost isotropic response, so as to detect particles arriving from every direction in space, thus maximizing the acceptance. A comparative study of different scintillating materials and mechanical structures has been performed by means of Monte Carlo simulation. The scintillation-Cherenkov dual read-out technique has also been considered and its benefits evaluated.
Byrne, Patrick; Mostafaei, Farshad; Liu, Yingzi; Blake, Scott P; Koltick, David; Nie, Linda H
2016-05-01
The feasibility and methodology of using a compact DD generator-based neutron activation analysis system to measure aluminum in hand bone has been investigated. Monte Carlo simulations were used to simulate the moderator, reflector, and shielding assembly and to estimate the radiation dose. A high purity germanium (HPGe) detector was used to detect the Al gamma ray signals. The minimum detectable limit (MDL) was found to be 11.13 μg g(-1) dry bone (ppm). An additional HPGe detector would improve the MDL by a factor of 1.4, to 7.9 ppm. The equivalent dose delivered to the irradiated hand was calculated by Monte Carlo to be 11.9 mSv. In vivo bone aluminum measurement with the DD generator was found to be feasible among the general population with an acceptable dose to the subject.
Automated variance reduction for MCNP using deterministic methods.
Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B
2005-01-01
In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
Can neutrino mass be deduced from beta particle spectrum?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semkow, T.M.
1993-12-31
With the fate of the 17-keV neutrino being uncertain, it is important to examine the effects of detector resolution and response on the detection limits of a massive neutrino. The authors use Fermi theory and generate by Monte Carlo up to 5×10⁹ β⁻ decay events from ³⁵S. The β⁻ spectra are then resolved by χ² minimization. We show that, given high statistics and accurate knowledge of the response function, it should be possible to detect neutrino mass with a proportional detector, particularly with the gas-scintillation proportional detector, in addition to semiconductor detectors. This paper presents a design of a double-chamber Xe gas-scintillation proportional detector in which the backscattering effects are suppressed. However, even slight uncertainties in the response functions, as well as ~10⁻³ relative energy nonlinearities in the β⁻ spectrum, may create an artificial effect of neutrino mass.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Andrew; Lawrence, Earl
The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
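The rescaling idea, reweighting a single baseline run for new absorption instead of re-running the transport, can be sketched as follows. This is an assumption-laden illustration, not the authors' CUDA/MATLAB package: it presumes the baseline run stored each detected photon's total path length, and a fabricated path-length distribution stands in for real simulation output.

```python
import numpy as np

rng = np.random.default_rng(2)
mu_a0 = 0.01                                  # baseline absorption [1/mm]
# Stand-in for stored per-photon path lengths from one baseline simulation [mm]:
path_len = rng.gamma(shape=5.0, scale=20.0, size=100_000)
w0 = np.exp(-mu_a0 * path_len)                # baseline photon weights

def diffuse_reflectance(mu_a):
    """Reweight the stored ensemble for a new absorption coefficient
    (relative units; no new photons are simulated)."""
    return np.mean(w0 * np.exp(-(mu_a - mu_a0) * path_len))

for mu_a in (0.005, 0.01, 0.05):
    print(f"mu_a={mu_a}: Rd ~ {diffuse_reflectance(mu_a):.4f}")
```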
Pothoczki, Szilvia; Temleitner, László; Pusztai, László
2014-02-07
Synchrotron X-ray diffraction measurements have been conducted on liquid phosphorus trichloride, tribromide, and triiodide. Molecular Dynamics simulations for these molecular liquids were performed with a dual purpose: (1) to establish whether existing intermolecular potential functions can provide a picture that is consistent with diffraction data and (2) to generate reliable starting configurations for subsequent Reverse Monte Carlo modelling. Structural models (i.e., sets of coordinates of thousands of atoms) that were fully consistent with experimental diffraction information, within errors, have been prepared by means of the Reverse Monte Carlo method. Comparison with reference systems, generated by hard sphere-like Monte Carlo simulations, was also carried out to demonstrate the extent to which simple space filling effects determine the structure of the liquids (thus also estimating the information content of the measured data). Total scattering structure factors, partial radial distribution functions and orientational correlations as a function of distance between the molecular centres have been calculated from the models. In general, more or less antiparallel arrangements of the primary molecular axes are found to be the most favourable orientation of two neighbouring molecules. In liquid PBr3 electrostatic interactions seem to play a more important role in determining intermolecular correlations than in the other two liquids; molecular arrangements in both PCl3 and PI3 are largely driven by steric effects.
Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F
2002-02-01
This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
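The partitioning pattern described above (equal photon split across processors, independent random streams, reduction of per-rank tallies) looks roughly like the following sketch. It uses mpi4py and numpy's SeedSequence spawning as stand-ins for the MPI library calls and the SPRNG streams used by the authors; the "transport" is a one-line toy.

```python
# Run with e.g.: mpiexec -n 4 python this_script.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_total = 1_000_000
n_local = n_total // size + (rank < n_total % size)      # equal partitioning
# One independent, uncorrelated stream per rank (SPRNG-like role):
rng = np.random.default_rng(np.random.SeedSequence(12345).spawn(size)[rank])

# Toy "transport": score whether a photon crosses a slab of optical depth tau.
tau = 2.0
escaped = int(np.sum(rng.exponential(1.0, n_local) > tau))

total = comm.reduce(escaped, op=MPI.SUM, root=0)         # gather tallies
if rank == 0:
    print(f"escape fraction ~ {total / n_total:.4f} (expected {np.exp(-tau):.4f})")
```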
Importance Sampling of Word Patterns in DNA and Protein Sequences
Chan, Hock Peng; Chen, Louis H.Y.
2010-01-01
Monte Carlo methods can provide accurate p-value estimates of word counting test statistics and are easy to implement. They are especially attractive when an asymptotic theory is absent or when either the search sequence or the word pattern is too short for the application of asymptotic formulae. Naive direct Monte Carlo is undesirable for the estimation of small probabilities because the associated rare events of interest are seldom generated. We propose instead efficient importance sampling algorithms that use controlled insertion of the desired word patterns on randomly generated sequences. The implementation is illustrated on word patterns of biological interest: palindromes and inverted repeats, patterns arising from position-specific weight matrices (PSWMs), and co-occurrences of pairs of motifs.
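The controlled-insertion idea admits a short self-contained illustration for i.i.d. uniform DNA letters (the paper treats more general settings): the proposal plants the word at a uniformly chosen position, and the importance weight M·p(word)/n(s), where n(s) counts the positions where the word occurs, keeps the estimator unbiased. The word, sequence length, and sample size below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
ALPHABET = np.array(list("ACGT"))
WORD, L, K, N = "ACGACG", 100, 2, 50_000   # pattern, sequence length, count cutoff, samples

def match_count(seq, word):
    w = len(word)
    return sum(seq[j:j + w] == word for j in range(len(seq) - w + 1))

M = L - len(WORD) + 1                      # number of insertion positions
p_word = 0.25 ** len(WORD)                 # word probability under the background model

est = 0.0
for _ in range(N):
    seq = "".join(rng.choice(ALPHABET, L))
    j = int(rng.integers(M))
    seq = seq[:j] + WORD + seq[j + len(WORD):]   # controlled insertion
    n = match_count(seq, WORD)                   # >= 1 by construction
    if n >= K:                                   # rare event: at least K occurrences
        est += M * p_word / n                    # importance weight p/q
print(f"P(count >= {K}) ~ {est / N:.3e}")
```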
Kinetic Monte Carlo Method for Rule-based Modeling of Biochemical Networks
Yang, Jin; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.
2009-01-01
We present a kinetic Monte Carlo method for simulating chemical transformations specified by reaction rules, which can be viewed as generators of chemical reactions, or equivalently, definitions of reaction classes. A rule identifies the molecular components involved in a transformation, how these components change, conditions that affect whether a transformation occurs, and a rate law. The computational cost of the method, unlike conventional simulation approaches, is independent of the number of possible reactions, which need not be specified in advance or explicitly generated in a simulation. To demonstrate the method, we apply it to study the kinetics of multivalent ligand-receptor interactions. We expect the method will be useful for studying cellular signaling systems and other physical systems involving aggregation phenomena.
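For flavor, here is a minimal Gillespie-style step for a toy two-rule system (reversible ligand-receptor binding). A genuine rule-based, network-free simulator tracks individual molecules and binding sites rather than species counts; this sketch with invented rate constants shows only the rate-weighted rule selection and exponential waiting-time logic.

```python
import numpy as np

rng = np.random.default_rng(4)
kon, koff = 0.001, 0.1          # per-pair and per-complex rate constants (invented)
free_L, free_R, bound = 500, 300, 0

t, t_end = 0.0, 50.0
while t < t_end:
    rates = np.array([kon * free_L * free_R,    # rule 1: L + R -> LR
                      koff * bound])            # rule 2: LR -> L + R
    total = rates.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)           # Gillespie waiting time
    if rng.random() < rates[0] / total:         # choose a rule by its rate
        free_L, free_R, bound = free_L - 1, free_R - 1, bound + 1
    else:
        free_L, free_R, bound = free_L + 1, free_R + 1, bound - 1
print(f"t = {t:.1f}: bound complexes = {bound}")
```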
Top Quark Mass Calibration for Monte Carlo Event Generators.
Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W
2016-12-02
The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator m_{t}^{MC}. Because of hadronization and parton-shower dynamics, relating m_{t}^{MC} to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e^{+}e^{-} 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to pythia 8.205, m_{t}^{MC} differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_{t}^{MC}≃m_{t,1 GeV}^{MSR}.
Study of the CP-violating effects with the gg → H → τ⁺τ⁻ process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belyaev, N. L., E-mail: nbelyaev@cern.ch; Konoplich, R. V.
A study of the gg → H → τ⁺τ⁻ process was performed at the Monte Carlo level within the framework of searching for CP-violating effects. The sensitivity of the chosen observables to the CP-parity of the Higgs boson was demonstrated for hadronic 1-prong τ decays (τ± → π±, ρ±). Monte Carlo samples for the gg → H → τ⁺τ⁻ process were generated including the parton hadronisation to final-state particles. This generation was performed for the Standard Model Higgs boson, the pseudoscalar Higgs boson, the Z → τ⁺τ⁻ background, and mixed CP-states of the Higgs boson.
Tooth enamel dosimetric response to 2.8 MeV neutrons
NASA Astrophysics Data System (ADS)
Fattibene, P.; Angelone, M.; Pillon, M.; De Coste, V.
2003-03-01
Tooth enamel dosimetry, based on electron paramagnetic resonance (EPR) spectroscopy, is recognized as a powerful method for individual retrospective dose assessment. The method is mainly used for individual dose reconstruction in epidemiological studies aimed at radiation risk analysis. The study of the sensitivity of tooth enamel as a function of radiation quality is one of the main goals of research in this field. In the present work, the tooth enamel dose response in a monoenergetic neutron flux of 2.8 MeV, generated by the D-D reaction, was studied for in-air and in-phantom irradiations of enamel samples and of whole teeth. EPR measurements were complemented by Monte Carlo calculations and by gamma dose discrimination obtained with thermoluminescent and Geiger-Müller tube measurements. The sensitivity to 2.8 MeV neutrons relative to ⁶⁰Co was 0.33±0.08.
Measuring multielectron beam imaging fidelity with a signal-to-noise ratio analysis
NASA Astrophysics Data System (ADS)
Mukhtar, Maseeh; Bunday, Benjamin D.; Quoi, Kathy; Malloy, Matt; Thiel, Brad
2016-07-01
Java Monte Carlo Simulator for Secondary Electrons (JMONSEL) simulations are used to generate expected imaging responses of chosen test cases of patterns and defects with the ability to vary parameters for beam energy, spot size, pixel size, and/or defect material and form factor. The patterns are representative of the design rules for an aggressively scaled FinFET-type design. With these simulated images and resulting shot noise, a signal-to-noise framework is developed, which relates to defect detection probabilities. Additionally, with this infrastructure, the effect of detection chain noise and frequency-dependent system response can be made, allowing for targeting of best recipe parameters for multielectron beam inspection validation experiments. Ultimately, these results should lead to insights into how such parameters will impact tool design, including necessary doses for defect detection and estimations of scanning speeds for achieving high throughput for high-volume manufacturing.
Compact first and second order polarization mode dispersion emulator
NASA Astrophysics Data System (ADS)
Zhang, Yang; Li, Shiguang; Yang, Changxi
2005-08-01
We propose a 1st and 2nd order polarization mode dispersion emulator (PMDE) with one variable differential group delay (DGD) element using birefringent crystals and four polarization controllers (PCs). Monte Carlo simulations demonstrate that the 1st and 2nd order polarization mode dispersion (PMD) generated by the PMDE is consistent with statistical theory. Compared with former PMDEs, this design is tunable, lower-cost, and more integrated for fabrication; it shows a response time of 150 μs, a response frequency of 3.8 kHz, a working wavelength of 1550 nm, a total power consumption of less than 3 W, and working ranges of 0-84 ps and 0-3600 ps² for 1st and 2nd order PMD emulation, respectively. It is also programmable and can be controlled by either a single-chip microcontroller or a computer. It can be applied to study the outage probability of optical communication systems due to the PMD effect and the effectiveness of PMD compensation.
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrisson, G.; Marleau, G.
2012-07-01
The Canadian SCWR has the potential to achieve the goals that the generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations are the most appropriate self-shielding options to use in this case of study. In addition, the 89 groups WIMS-AECL library for slight enriched uranium and the 172 groups WLUP library for a mixture of plutonium and thorium give the most consistent results with those of SERPENT. (authors)
NASA Astrophysics Data System (ADS)
Petit, Odile; Jouanne, Cédric; Litaize, Olivier; Serot, Olivier; Chebboubi, Abdelhazize; Pénéliau, Yannick
2017-09-01
The TRIPOLI-4® Monte Carlo transport code and the FIFRELIN fission model have been coupled by means of external files so that neutron transport can take into account fission distributions (multiplicities and spectra) that are not averaged, as is the case when using evaluated nuclear data libraries. Spectral effects on responses in shielding configurations with fission sampling are then expected. In the present paper, the principle of this coupling is detailed, and a comparison between TRIPOLI-4® fission distributions at the emission of fission neutrons is presented when using JEFF-3.1.1 evaluated data or FIFRELIN data generated either through an n/γ-uncoupled mode or through an n/γ-coupled mode. Finally, an application to a modified version of the ASPIS benchmark is performed and the impact of using FIFRELIN data on neutron transport is analyzed. Differences noticed in average reaction rates on the surfaces closest to the fission source are mainly due to the average prompt fission spectrum. Moreover, when working with the same average spectrum, a complementary analysis based on non-averaged reaction rates still shows significant differences, pointing out the real impact of using a fission model in neutron transport simulations.
NASA Astrophysics Data System (ADS)
Muraro, S.; Battistoni, G.; Belcari, N.; Bisogni, M. G.; Camarlinghi, N.; Cristoforetti, L.; Del Guerra, A.; Ferrari, A.; Fracchiolla, F.; Morrocchi, M.; Righetto, R.; Sala, P.; Schwarz, M.; Sportelli, G.; Topi, A.; Rosso, V.
2017-12-01
Ion beam irradiations can deliver conformal dose distributions minimizing damage to healthy tissues thanks to their characteristic dose profiles. Nevertheless, the location of the Bragg peak can be affected by different sources of range uncertainties: a critical issue is the treatment verification. During the treatment delivery, nuclear interactions between the ions and the irradiated tissues generate β+ emitters: the detection of this activity signal can be used to perform the treatment monitoring if an expected activity distribution is available for comparison. Monte Carlo (MC) codes are widely used in the particle therapy community to evaluate the radiation transport and interaction with matter. In this work, FLUKA MC code was used to simulate the experimental conditions of irradiations performed at the Proton Therapy Center in Trento (IT). Several mono-energetic pencil beams were delivered on phantoms mimicking human tissues. The activity signals were acquired with a PET system (DoPET) based on two planar heads, and designed to be installed along the beam line to acquire data also during the irradiation. Different acquisitions are analyzed and compared with the MC predictions, with a special focus on validating the PET detectors response for activity range verification.
Zhang, Yan; Jia, WenBao; Gardner, Robin; Shan, Qing; Hei, Daqian
2018-02-01
In the present work, a prompt gamma neutron activation analysis (PGNAA) setup, which consists of a 300 mCi ²⁴¹Am-Be (americium-beryllium) neutron source and a 4 × 4-in. bismuth germanium oxide (BGO) detector, was developed for heavy metal detection in aqueous solutions. A series of standard samples of analytical purity were prepared by dissolving heavy metals in deionized water. Quantitative spectrum analysis of the standard samples was performed with the Monte Carlo library least-squares (MCLLS) approach. The detector response functions of the 4 × 4-in. BGO detector were generated using the CEARDRF code, and the element libraries were simulated with the CEARCPG code developed by Dr. Gardner. The simulation results were in very good agreement with the experimental results: the correlation coefficients were very close to 1 when the fitted spectrum was compared with the experimental spectrum. By applying the MCLLS approach, the relative deviation of the measurement accuracy was less than 2.27% for Ni, Mn, and Cu and up to 69.33% for Pb.
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
ERIC Educational Resources Information Center
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
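As a hedged illustration of the general idea (not the confirmatory factor models of the paper), the sketch below runs random-walk Metropolis, one of the simplest MCMC routines, on the item difficulties of a Rasch model with abilities treated as known; all data are simulated for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
n_person, n_item = 500, 10
theta = rng.normal(0, 1, n_person)              # known abilities (simplification)
b_true = rng.normal(0, 1, n_item)               # item difficulties to recover
p = 1 / (1 + np.exp(-(theta[:, None] - b_true[None, :])))
y = (rng.random((n_person, n_item)) < p).astype(float)

def log_post(b):
    eta = theta[:, None] - b[None, :]
    ll = np.sum(y * eta - np.log1p(np.exp(eta)))    # Bernoulli-logit likelihood
    return ll - 0.5 * np.sum(b ** 2)                # standard normal prior on b

b, samples = np.zeros(n_item), []
lp = log_post(b)
for it in range(5000):
    prop = b + rng.normal(0, 0.05, n_item)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:         # Metropolis accept/reject
        b, lp = prop, lp_prop
    if it >= 1000:                                  # discard burn-in
        samples.append(b.copy())
post_mean = np.mean(samples, axis=0)
print("max |posterior mean - true difficulty|:", np.abs(post_mean - b_true).max().round(2))
```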
Othman, M A R; Cutajar, D L; Hardcastle, N; Guatelli, S; Rosenfeld, A B
2010-09-01
Monte Carlo simulations of the energy response of a conventionally packaged single metal-oxide-semiconductor field-effect transistor (MOSFET) detector were performed with the goal of improving MOSFET energy dependence for personal accident or military dosimetry. The MOSFET detector packaging was optimised: two different 'drop-in' design packages for a single MOSFET detector were modelled and optimised using the GEANT4 Monte Carlo toolkit. Simulations of the absorbed photon dose for the MOSFET dosemeter placed in free air, corresponding to the absorbed doses at depths of 0.07 mm (D(w)(0.07)) and 10 mm (D(w)(10)) in a water-equivalent phantom of size 30 × 30 × 30 cm³, were performed for photon energies of 0.015-2 MeV. Energy dependence was reduced to within ±60 % for photon energies of 0.06-2 MeV for both D(w)(0.07) and D(w)(10). Variations in the response for photon energies of 15-60 keV were 200 and 330 % for D(w)(0.07) and D(w)(10), respectively. The obtained energy dependence was reduced compared with that of conventionally packaged MOSFET detectors, which usually exhibit a 500-700 % over-response when used in free-air geometry.
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Mangado, Nerea; Pons-Prats, Jordi; Coma, Martí; Mistrík, Pavel; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Á
2018-01-01
Cochlear implantation (CI) is a complex surgical procedure that restores hearing in patients with severe deafness. The successful outcome of the implanted device relies on a group of factors, some of them unpredictable or difficult to control. Uncertainties on the electrode array position and the electrical properties of the bone make it difficult to accurately compute the current propagation delivered by the implant and the resulting neural activation. In this context, we use uncertainty quantification methods to explore how these uncertainties propagate through all the stages of CI computational simulations. To this end, we employ an automatic framework, encompassing everything from the finite element generation of CI models to the assessment of the neural response induced by the implant stimulation. To estimate the confidence intervals of the simulated neural response, we propose two approaches. First, we encode the variability of the cochlear morphology among the population through a statistical shape model. This allows us to generate a population of virtual patients using Monte Carlo sampling and to assign to each of them a set of parameter values according to a statistical distribution. The framework is implemented and parallelized in a High Throughput Computing environment that enables us to maximize the available computing resources. Second, we perform a patient-specific study to evaluate the computed neural response to seek the optimal post-implantation stimulus levels. Considering a single cochlear morphology, the uncertainty in tissue electrical resistivity and surgical insertion parameters is propagated using the Probabilistic Collocation method, which reduces the number of samples to evaluate. Results show that bone resistivity has the highest influence on CI outcomes. In conjunction with the variability of the cochlear length, the worst outcomes are obtained for small cochleae with high resistivity values. However, the effect of the surgical insertion length on the CI outcomes could not be clearly observed, since its impact may be concealed by the other considered parameters. Whereas the Monte Carlo approach implies a high computational cost, Probabilistic Collocation presents a suitable trade-off between precision and computational time. Results suggest that the proposed framework has a great potential to help in both surgical planning decisions and in the audiological setting process.
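The Monte Carlo arm of such a framework reduces to a simple loop once the simulation chain is wrapped in a function. In this sketch the shape-model sampling, resistivity distribution, and "neural response" are placeholder toys standing in for the finite-element chain; only the propagate-and-summarize pattern is the point.

```python
import numpy as np

rng = np.random.default_rng(6)

def neural_response(shape_coeffs, resistivity):
    """Placeholder for the full CI simulation chain (FEM + nerve model)."""
    cochlear_len = 33.0 + 2.0 * shape_coeffs[0]          # mm, toy shape effect
    current = 1.0 / resistivity                          # toy electrical model
    return current * np.exp(-cochlear_len / 30.0)

n_patients = 5000
coeffs = rng.normal(0, 1, (n_patients, 3))               # shape-model mode weights
rho = rng.lognormal(np.log(70.0), 0.3, n_patients)       # bone resistivity (assumed dist.)

resp = np.array([neural_response(c, r) for c, r in zip(coeffs, rho)])
lo, hi = np.percentile(resp, [2.5, 97.5])                # Monte Carlo confidence interval
print(f"simulated neural response: 95% CI = [{lo:.4f}, {hi:.4f}]")
```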
Bohm, Tim D; Griffin, Sheridan L; DeLuca, Paul M; DeWerd, Larry A
2005-04-01
The determination of the air kerma strength of a brachytherapy seed is necessary for effective treatment planning. Well ionization chambers are used on site at therapy clinics to determine the air kerma strength of seeds. In this work, the response of the Standard Imaging HDR 1000 Plus well chamber to ambient pressure is examined using Monte Carlo calculations. The experimental work examining the response of this chamber as well as other chambers is presented in a companion paper. The Monte Carlo results show that for low-energy photon sources, the application of the standard temperature-pressure (PTP) correction factor produces an over-response at the reduced air densities/pressures corresponding to high elevations. With photon sources of 20 to 40 keV, the normalized PTP-corrected chamber response is as much as 10% to 20% over unity for air densities/pressures corresponding to an elevation of 3048 m (10000 ft) above sea level. At air densities corresponding to an elevation of 1524 m (5000 ft), the normalized PTP-corrected chamber response is 5% to 10% over unity for these photon sources. With higher-energy photon sources (>100 keV), the normalized PTP-corrected chamber response is near unity. For low-energy beta sources of 0.25 to 0.50 MeV, the normalized PTP-corrected chamber response is as much as 4% to 12% over unity for air densities/pressures corresponding to an elevation of 3048 m (10000 ft) above sea level. Higher-energy beta sources (>0.75 MeV) have a normalized PTP-corrected chamber response near unity. Comparing calculated and measured chamber responses for common ¹⁰³Pd- and ¹²⁵I-based brachytherapy seeds shows agreement to within 2.7% and 1.9%, respectively. Comparing MCNP calculated chamber responses with EGSnrc calculated chamber responses shows agreement to within 3.1% at photon energies of 20 to 40 keV. We conclude that Monte Carlo transport calculations accurately model the response of this well chamber. Further, applying the standard PTP correction factor for this well chamber is insufficient in accounting for the change in chamber response with air pressure for low-energy (<100 keV) photon and low-energy (<0.75 MeV) beta sources.
Moon, Hyun Ho; Lee, Jong Joo; Choi, Sang Yule; Cha, Jae Sang; Kang, Jang Mook; Kim, Jong Tae; Shin, Myong Chul
2011-01-01
Recently there have been many studies of power systems with a focus on "New and Renewable Energy" as part of the "New Growth Engine Industry" promoted by the Korean government. "New and Renewable Energy" (especially wind energy, solar energy, and fuel cells that will replace conventional fossil fuels) is part of the Power-IT sector, which is the basis of the SmartGrid. A SmartGrid is a form of highly efficient intelligent electricity network that allows interactivity (two-way communication) between suppliers and consumers by utilizing information technology in electricity production, transmission, distribution, and consumption. The New and Renewable Energy Program has been driven, through intensive studies by public and private institutions, by the goal of developing and spreading new and renewable energy, which, unlike conventional systems, is operated through connections with various kinds of distributed power generation systems. Considerable research on smart grids has been pursued in the United States and Europe. In the United States, a variety of research activities on the smart power grid have been conducted within EPRI's IntelliGrid research program. The European Union (EU), which represents Europe's Smart Grid policy, has focused on an expansion of distributed (decentralized) generation and power trade between countries with improved environmental protection. Thus, there is current emphasis on a need for studies that assess the economic efficiency of such distributed generation systems. In this paper, based on the cost of distributed power generation capacity, calculations of the best obtainable profits were made by a Monte Carlo simulation. Monte Carlo simulation, which relies on repeated random sampling to compute its results, takes into account the cost of electricity production, daily loads, and sales revenue, and generates a result faster than closed-form mathematical computation. In addition, we suggest an optimal design that considers the distribution losses associated with power distribution systems, with a focus on sensing and distributed power generation.
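The profit calculation by repeated random sampling can be sketched in a few lines; every distribution below (daily load, sale price, generation cost) is an illustrative assumption rather than data from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000                                          # Monte Carlo samples
load_mwh = rng.normal(24.0, 4.0, n).clip(min=0)      # daily energy delivered [MWh]
price = rng.lognormal(np.log(90.0), 0.2, n)          # sale price [$/MWh]
unit_cost = rng.normal(60.0, 8.0, n)                 # generation cost [$/MWh]

profit = load_mwh * (price - unit_cost)              # daily profit [$]
lo, hi = np.percentile(profit, [5, 95])
print(f"mean daily profit ${profit.mean():,.0f}, 90% interval [${lo:,.0f}, ${hi:,.0f}]")
```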
Aad, G.; Abbott, B.; Abdallah, J.; ...
2013-03-02
The uncertainty on the calorimeter energy response to jets of particles is derived for the ATLAS experiment at the Large Hadron Collider (LHC). First, the calorimeter response to single isolated charged hadrons is measured and compared to the Monte Carlo simulation using proton-proton collisions at centre-of-mass energies of √s = 900 GeV and 7 TeV collected during 2009 and 2010. Then, using the decay of K_s and Λ particles, the calorimeter response to specific types of particles (positively and negatively charged pions, protons, and anti-protons) is measured and compared to the Monte Carlo predictions. Finally, the jet energy scale uncertainty is determined by propagating the response uncertainty for single charged and neutral particles to jets. The response uncertainty is 2-5% for central isolated hadrons and 1-3% for the final calorimeter jet energy scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
Characterizing Quality Factor of Niobium Resonators Using a Markov Chain Monte Carlo Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu Thakur, Ritoban; Tang, Qing Yang; McGeehan, Ryan
The next generation of radiation detectors in high-precision cosmology, astronomy, and particle-astrophysics experiments will rely heavily on superconducting microwave resonators and kinetic inductance devices. Understanding the physics of energy loss in these devices, in particular at low temperatures and powers, is vital. We present a comprehensive analysis framework, using Markov chain Monte Carlo methods, to characterize loss due to two-level systems in concert with quasiparticle dynamics in thin-film Nb resonators in the GHz range.
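A hedged sketch of such a characterization: synthetic quality-factor-versus-power data are generated from the standard two-level-system (TLS) loss expression, and a hand-rolled Metropolis sampler recovers the parameters. The model form is the usual TLS power dependence; all numbers are invented rather than measured Nb values.

```python
import numpy as np

rng = np.random.default_rng(8)
P = np.logspace(-3, 2, 25)                     # readout power (arb. units)

def Q_inv(P, d0, Pc, Qother):
    """TLS loss saturating with power, plus a power-independent loss term."""
    return d0 / np.sqrt(1 + P / Pc) + 1.0 / Qother

true = (2e-6, 0.1, 5e6)
data = Q_inv(P, *true) * (1 + 0.03 * rng.normal(size=P.size))
sigma = 0.03 * data

def log_like(th):
    d0, Pc, Qother = np.exp(th)                # sample log-parameters (positivity)
    return -0.5 * np.sum(((data - Q_inv(P, d0, Pc, Qother)) / sigma) ** 2)

th = np.log(np.array([1e-6, 1.0, 1e6]))       # crude starting guess
lp, chain = log_like(th), []
for it in range(20000):
    prop = th + rng.normal(0, 0.1, 3)          # random-walk Metropolis step
    lp_prop = log_like(prop)
    if np.log(rng.random()) < lp_prop - lp:
        th, lp = prop, lp_prop
    if it >= 5000:
        chain.append(np.exp(th))
print("posterior medians [d0, Pc, Qother]:", np.median(chain, axis=0))
```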
Toward centrality determination at NICA/MPD
NASA Astrophysics Data System (ADS)
Galoyan, A. S.; Uzhinsky, V. V.
2017-03-01
Geometrical properties of nucleus-nucleus interactions at various centralities are calculated for the NICA energy range. A modified version of the Glauber Monte Carlo simulation code has been used for the calculations. It is shown that the geometrical properties of nucleus-nucleus interactions at energies of 5-10 GeV (NICA/MPD) and at an energy of 200 GeV (RHIC) are quite close to each other. A possible determination of centrality in the NICA/MPD experiment using calculations from various Monte Carlo event generators is considered.
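A bare-bones Monte Carlo Glauber event, in the spirit of the code mentioned above, can be written compactly: nucleon positions sampled from a Woods-Saxon profile, a random impact parameter, and a black-disk collision test giving N_part and N_coll. The Au parameters are standard textbook values; the ~30 mb inelastic cross section is an assumption appropriate to the NICA energy scale.

```python
import numpy as np

rng = np.random.default_rng(9)
A, R, a = 197, 6.38, 0.535            # Au: mass number, radius, diffuseness [fm]
sigma_nn = 3.0                        # NN inelastic cross section [fm^2] (~30 mb)
d2 = sigma_nn / np.pi                 # squared black-disk interaction distance

def sample_nucleus():
    pos = []
    while len(pos) < A:               # rejection-sample Woods-Saxon radii
        r = 3 * R * rng.random()
        if rng.random() < (r / (3 * R)) ** 2 / (1 + np.exp((r - R) / a)):
            cost, phi = 2 * rng.random() - 1, 2 * np.pi * rng.random()
            sint = np.sqrt(1 - cost ** 2)
            pos.append(r * np.array([sint * np.cos(phi), sint * np.sin(phi), cost]))
    return np.array(pos)

b = 14 * np.sqrt(rng.random())        # impact parameter, dN/db proportional to b
nA = sample_nucleus() + np.array([b / 2, 0, 0])
nB = sample_nucleus() - np.array([b / 2, 0, 0])

dx = nA[:, None, 0] - nB[None, :, 0]
dy = nA[:, None, 1] - nB[None, :, 1]
hit = dx ** 2 + dy ** 2 < d2          # transverse-plane collision test
n_coll = int(hit.sum())
n_part = int(hit.any(axis=1).sum() + hit.any(axis=0).sum())
print(f"b = {b:.2f} fm: N_part = {n_part}, N_coll = {n_coll}")
```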
2014-09-01
Final report, September 2014; dates covered: January-July 2014. Title: Monte Carlo Evaluation of … generated is typical of energy harvesting levels of power. Radioisotope power sources differ from typical renewable energy/power levels in that they …
Trace-fossil and storm-deposit relationships of San Carlos formation, west Texas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metz, C.L.; Bednarski, S.P.
1986-05-01
Two distinct assemblages of trace fossils are preserved in the storm deposits of the delta-front facies of the Upper Cretaceous San Carlos Formation, west Texas. The assemblages represent two widely differing responses to storm deposition, and the sediment-trace-fossil relationships indicate that other environmental parameters, probably water depth and oxygen levels, influenced trace-fossil distribution within the San Carlos delta front. Evidence of the storm-deposited nature of the sandstones includes a scoured basal contact, planar to hummocky cross-stratification, and an upper contact that is either ripple-marked or gradational with the overlying shales.
Fixed forced detection for fast SPECT Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Cajgfinger, T.; Rit, S.; Létang, J. M.; Halty, A.; Sarrut, D.
2018-03-01
Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.
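The forced-detection weighting can be shown in a toy 2-D geometry. This sketch is not GATE code: each emission deposits, in every detector pixel, the product of a (here crudely normalized) emission probability toward that pixel and the attenuation along the path, rather than waiting for an analog photon to arrive by chance. Geometry and attenuation values are invented.

```python
import numpy as np

rng = np.random.default_rng(10)
n_pix = 64
pix_x = np.linspace(-16.0, 16.0, n_pix)       # pixel centers [cm], detector at y = 20
det_y, mu = 20.0, 0.15                        # attenuation coefficient [1/cm]

image = np.zeros(n_pix)
for _ in range(10_000):                       # emissions from a Gaussian line source
    src_x = rng.normal(0.0, 2.0)
    dx, dy = pix_x - src_x, det_y - 0.0
    dist = np.hypot(dx, dy)
    p_emit = 1.0 / dist ** 2                  # toy stand-in for the solid-angle factor
    p_emit /= p_emit.sum()
    image += p_emit * np.exp(-mu * dist)      # forced detection with attenuation
print("peak pixel weight:", image.max().round(2),
      "at x =", pix_x[image.argmax()].round(2))
```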
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudhyadhom, A; McGuinness, C; Descovich, M
Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small-field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom against a Ray-Tracing calculation on a single-beam, collimator-by-collimator basis. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with a film insert. Separately, plans were delivered in an in-house created lung phantom with a PinPoint chamber insert within a lung-simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small collimators (10, 12.5, and 15 mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2 × 2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom show high correspondence of over 95% gamma at the 2%/2mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5 mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10 mm collimator resulted in a dose difference of ~8% for both Monte-Carlo and Ray-Tracing, indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung-simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10 mm fields and smaller.
Stochastic generation of hourly rainstorm events in Johor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli
2015-02-03
Engineers and researchers in water-related studies are often faced with the problem of insufficient or short rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited available data. Therefore, this paper presents a Monte Carlo-based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best fit of storm characteristics. Using maximum likelihood estimation (MLE) and the Anderson-Darling goodness-of-fit test, the lognormal distribution was found to fit the rainfall best; the Monte Carlo simulation was therefore based on the lognormal distribution. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of the observed rainstorm events over 10 years and simulated rainstorm events over 30 years of rainfall records with those from the entire 40 years of observed rainfall data, based on hourly rainfall data at station J1 in Johor over the period 1972-2011. The absolute percentage errors of the duration-depth, duration-inter-event time, and depth-inter-event time relationships were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity-duration-frequency relationships in Johor.
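The fit-then-simulate step is easy to sketch with scipy; the synthetic "observed" storm depths below stand in for the Johor record, and the comparison of moments mirrors the verification described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
observed = rng.lognormal(mean=2.0, sigma=0.8, size=400)     # storm depths [mm] (stand-in)

shape, loc, scale = stats.lognorm.fit(observed, floc=0)     # MLE of lognormal parameters
simulated = stats.lognorm.rvs(shape, loc=loc, scale=scale,
                              size=10_000, random_state=rng)

# Compare the first four product-moments of observed and simulated events.
for name, x in (("observed", observed), ("simulated", simulated)):
    print(f"{name:9s} mean={x.mean():6.2f} var={x.var():7.2f} "
          f"skew={stats.skew(x):5.2f} kurt={stats.kurtosis(x):5.2f}")
```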
ERIC Educational Resources Information Center
Iared, Valéria Ghisloti; de Oliveira, Haydée Torres
2012-01-01
To investigate if the units of the São Carlos Ecological Pole (São Carlos, São Paulo, Brazil) are educating spaces that may contribute to the understanding of the complexity of environmental issues and stimulate a sense of belonging and social responsibility, we interviewed primary school teachers who had accompanied visits to these places and…
Haiti Earthquake: Crisis and Response
2010-02-19
Special Representative Hedi Annabi and his deputy, Luiz Carlos da Costa, were among the dead. U.N. Secretary General Ban Ki-moon sent Assistant… development strategy, including security; judicial reform; macroeconomic management; procurement processes and fiscal transparency; increased voter… but 101 are confirmed dead, with 6 unaccounted for. The head of MINUSTAH, Special Representative Hedi Annabi, and his deputy, Luiz Carlos da Costa…
NASA Astrophysics Data System (ADS)
Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.
2012-07-01
The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data. Catalogue identifier: AELO_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1299 No. of bytes in distributed program, including test data, etc.: 11 348 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: All Operating system: Any RAM: 1 MB Classification: 11.2, 11.4 Nature of problem: Simulation of radiative events in polarized ep-scattering. Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation. Running time: The simulation of 10⁸ radiative events for itest:=1 takes up to 52 seconds on a Pentium(R) Dual-Core 2.00 GHz processor.
Neutrality and evolvability of designed protein sequences
NASA Astrophysics Data System (ADS)
Bhattacherjee, Arnab; Biswas, Parbati
2010-07-01
The effect of foldability on a protein's evolvability is analyzed by a two-prong approach consisting of a self-consistent mean-field theory and Monte Carlo simulations. Theory and simulation models representing protein sequences with binary patterning of amino acid residues compatible with a particular foldability criterion are used. This generalized foldability criterion is derived using the high-temperature cumulant expansion approximating the free energy of folding. The effect of cumulative point mutations on these designed proteins is studied under neutral conditions. The robustness, i.e., a protein's ability to tolerate random point mutations, is determined with a selective pressure of stability (ΔΔG) for the theory-designed sequences, which are found to be more robust than the Monte Carlo and mean-field-biased Monte Carlo generated sequences. The results show that this foldability criterion selects viable protein sequences more effectively than the Monte Carlo method, which has a marked effect on how the selective pressure shapes the evolutionary sequence space. These observations may impact de novo sequence design and its applications in protein engineering.
Neutron matter with Quantum Monte Carlo: chiral 3N forces and static response
Buraczynski, M.; Gandolfi, S.; Gezerlis, A.; ...
2016-03-14
Neutron matter is related to the physics of neutron stars and that of neutron-rich nuclei. Moreover, Quantum Monte Carlo (QMC) methods offer a unique way of solving the many-body problem non-perturbatively, providing feedback on features of nuclear interactions and addressing scenarios that are inaccessible to other approaches. Our contribution goes over two recent accomplishments in the theory of neutron matter: a) the fusing of QMC with chiral effective field theory interactions, focusing on local chiral 3N forces, and b) the first attempt to find an ab initio solution to the problem of static response.
Anosov C-systems and random number generators
NASA Astrophysics Data System (ADS)
Savvidy, G. K.
2016-08-01
We further develop our previous proposal to use hyperbolic Anosov C-systems to generate pseudorandom numbers and to use them for efficient Monte Carlo calculations in high energy particle physics. All trajectories of hyperbolic dynamical systems are exponentially unstable, and C-systems therefore have mixing of all orders, a countable Lebesgue spectrum, and a positive Kolmogorov entropy. These exceptional ergodic properties follow from the C-condition introduced by Anosov. This condition defines a rich class of dynamical systems forming an open set in the space of all dynamical systems. An important property of C-systems is that they have a countable set of everywhere dense periodic trajectories and their density increases exponentially with entropy. Of special interest are the C-systems defined on higher-dimensional tori. Such C-systems are excellent candidates for generating pseudorandom numbers that can be used in Monte Carlo calculations. An efficient algorithm was recently constructed that allows generating long C-system trajectories very rapidly. These trajectories have good statistical properties and can be used for calculations in quantum chromodynamics and in high energy particle physics.
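As a toy illustration of the idea (not the production algorithm referred to above), one can iterate a hyperbolic integer matrix on a discretized two-dimensional torus and read one coordinate off as a uniform variate; production C-system generators operate on much higher-dimensional tori, and the modulus and seeds below are arbitrary:

    # Iterate the Anosov "cat map" matrix [[1, 1], [1, 2]] (det = 1,
    # eigenvalues off the unit circle, hence a C-system) on the discrete
    # torus Z_p x Z_p and emit one coordinate as a variate in [0, 1).
    P = 2**61 - 1  # modulus; a Mersenne prime, chosen here for illustration

    def c_system_stream(x, y, n):
        out = []
        for _ in range(n):
            x, y = (x + y) % P, (x + 2 * y) % P
            out.append(y / P)
        return out

    u = c_system_stream(x=12345, y=67890, n=5)
    print(u)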
Top Quark Mass Calibration for Monte Carlo Event Generators
Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H.; ...
2016-11-29
The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, m_t^MC. Because of hadronization and parton-shower dynamics, relating m_t^MC to a field theory mass is difficult. Here, we present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e⁺e⁻ 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to PYTHIA 8.205, m_t^MC differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_t^MC ≃ m_{t,1 GeV}^MSR.
Simpkin, D J
1989-02-01
A Monte Carlo calculation has been performed to determine the transmission of broad constant-potential x-ray beams through Pb, concrete, gypsum wallboard, steel and plate glass. The EGS4 code system was used with a simple broad-beam geometric model to generate exposure transmission curves for published 70, 100, 120 and 140-kVcp x-ray spectra. These curves are compared to measured three-phase-generated x-ray transmission data in the literature and found to be in reasonable agreement. For ease of calculation, the data are fitted to an equation previously shown to describe such curves quite well. These calculated transmission data are then used to create three-phase shielding tables for Pb and concrete, as well as other materials not available in Report No. 49 of the NCRP.
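The fitting equation is presumably the three-parameter broad-beam transmission model of Archer et al., which is standard for such curves; a minimal sketch of the fit, with invented transmission values standing in for the EGS4 results, might look like this:

    import numpy as np
    from scipy.optimize import curve_fit

    def archer(x, a, b, g):
        """Archer et al. three-parameter broad-beam transmission model:
        B(x) = [(1 + b/a) * exp(a*g*x) - b/a]**(-1/g)."""
        return ((1.0 + b / a) * np.exp(a * g * x) - b / a) ** (-1.0 / g)

    # Hypothetical (thickness in mm, transmission) pairs standing in for the
    # calculated 100-kVcp Pb data; real values come from the simulation.
    x_mm = np.array([0.0, 0.1, 0.2, 0.5, 1.0, 2.0])
    T    = np.array([1.0, 0.45, 0.24, 0.06, 0.012, 0.001])

    (a, b, g), _ = curve_fit(archer, x_mm, T, p0=(2.0, 10.0, 1.0))
    print(f"alpha={a:.3f}/mm beta={b:.3f}/mm gamma={g:.3f}")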
Calculating Potential Energy Curves with Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2014-06-01
Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization, making them an attractive consideration in anticipation of next-generation computing architectures which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error bars) are associated with each calculated energy. This study focuses on the cost, feasibility and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US [1]. The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: [1] M. W. Schmidt et al., J. Comput. Chem. 14, 1347 (1993). [2] R. J. Needs et al., J. Phys.: Condens. Matter 22, 023201 (2010).
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend
2011-10-11
In the waste recycling Monte Carlo (WRMC) algorithm, multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well suited to parallel implementation on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
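For readers new to waste recycling, a single-trial toy version of the estimator (not the paper's multi-trial CUDA kernels) shows the idea: rejected trial states still contribute to the tally, weighted by the Metropolis acceptance probability.

    import numpy as np

    rng = np.random.default_rng(0)

    def energy(x):          # toy 1-D harmonic potential standing in for the
        return 0.5 * x * x  # expensive framework energy evaluation

    beta, step, n = 1.0, 1.0, 100_000
    x, acc_sum, wr_sum = 0.0, 0.0, 0.0
    for _ in range(n):
        x_new = x + rng.uniform(-step, step)
        p_acc = min(1.0, np.exp(-beta * (energy(x_new) - energy(x))))
        # Waste-recycling estimator: both the trial and the current state
        # contribute, weighted by the acceptance probability.
        wr_sum += p_acc * (x_new ** 2) + (1.0 - p_acc) * (x ** 2)
        if rng.random() < p_acc:
            x = x_new
        acc_sum += x ** 2  # conventional estimator, for comparison
    print("conventional <x^2>:", acc_sum / n, " waste-recycled <x^2>:", wr_sum / n)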
Trade Space Analysis: Rotational Analyst Research Project
2015-09-01
...response surface method (RSM) / response surface equations (RSEs) as surrogate models. It uses the RSEs with Monte Carlo simulation to quantitatively...
NASA Astrophysics Data System (ADS)
Golonka, P.; Pierzchała, T.; Waş, Z.
2004-02-01
Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Our test consists of two steps. Different Monte Carlo programs are run; events with decays of a chosen particle are searched, decay trees are analyzed and appropriate information is stored. Then, at the analysis step, a list of all found decay modes is defined and branching ratios are calculated for both runs. Histograms of all scalar Lorentz-invariant masses constructed from the decay products are plotted and compared for each decay mode found in both runs. For each plot a measure of the difference of the distributions is calculated and its maximal value over all histograms for each decay channel is printed in a summary table. As an example of MC-TESTER application, we include a test with the τ lepton decay Monte Carlo generators, TAUOLA and PYTHIA. The HEPEVT (or LUJETS) common block is used as the exclusive source of information on the generated events. Program summary: Title of the program: MC-TESTER, version 1.1 Catalogue identifier: ADSM Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSM Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: PC, two Intel Xeon 2.0 GHz processors, 512 MB RAM Operating system: Linux Red Hat 6.1, 7.2, and also 8.0 Programming language used: C++, FORTRAN77: gcc 2.96 or 2.95.2 (also 3.2) compiler suite with g++ and g77 Size of the package: 7.3 MB directory including example programs (2 MB compressed distribution archive), without ROOT libraries (additional 43 MB). No. of bytes in distributed program, including test data, etc.: 2 024 425 Distribution format: tar gzip file Additional disk space required: Depends on the analyzed particle: 40 MB in the case of τ lepton decays (30 decay channels, 594 histograms, 82-page booklet). Keywords: particle physics, decay simulation, Monte Carlo methods, invariant mass distributions, programs comparison Nature of the physical problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable, for the development of new programs, to check correctness of the installations or for discussion of uncertainties. Method of solution: A typical HEP Monte Carlo program stores the generated events in event records such as HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of the found decay modes is successively incremented and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can be later compared. A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted and a parameter quantifying the shape difference is calculated. Its maximum over every decay channel is printed in the summary table. Restrictions on the complexity of the problem: For a list of limitations see Section 6. Typical running time: Varies substantially with the analyzed decay particle.
On a PC/Linux with 2.0 GHz processors, MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; booklet generation takes an additional 10 seconds. Generation-step runs may be executed simultaneously on multi-processor machines. Accessibility: web page: http://cern.ch/Piotr.Golonka/MC/MC-TESTER e-mails: Piotr.Golonka@CERN.CH, T.Pierzchala@friend.phys.us.edu.pl, Zbigniew.Was@CERN.CH.
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived within which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable amount of derivatives to be computed, and errors of the linearization are avoided; the Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known, and Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
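A minimal sketch of the error-propagation application described above, with an assumed measurement mean and covariance and an arbitrary nonlinear transformation; no Jacobians are formed and no linearization error is incurred:

    import numpy as np

    rng = np.random.default_rng(42)

    # Measurements: mean vector and covariance matrix (assumed values).
    mu    = np.array([1.0, 2.0])
    Sigma = np.array([[0.04, 0.01],
                      [0.01, 0.09]])

    def f(x):
        """Nonlinear transformation of the measurement vector."""
        return np.array([x[0] * x[1], np.sin(x[0]) + x[1] ** 2])

    # Generate random variates and propagate them through f; the expectation
    # and covariance of f follow directly from the sample.
    samples = rng.multivariate_normal(mu, Sigma, size=200_000)
    y = np.apply_along_axis(f, 1, samples)
    print("E[y]   =", y.mean(axis=0))
    print("Cov[y] =", np.cov(y, rowvar=False))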
NASA Astrophysics Data System (ADS)
Prabhu Verleker, Akshay; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M.
2015-03-01
The purpose of this study is to develop an alternate empirical approach to estimate near-infrared (NIR) photon propagation and quantify optically induced drug release in brain metastasis, without relying on computationally expensive Monte Carlo techniques (the gold standard). Targeted drug delivery with optically induced drug release is a noninvasive means to treat cancers and metastasis. This study is part of a larger project to treat brain metastasis by delivering lapatinib-drug-nanocomplexes and activating NIR-induced drug release. The empirical model was developed using a weighted approach to estimate photon scattering in tissues and calibrated using a GPU-based 3D Monte Carlo. The empirical model was developed and tested against Monte Carlo in optical brain phantoms for pencil beams (width 1 mm) and broad beams (width 10 mm). The empirical algorithm was tested against the Monte Carlo for different albedos, along with the diffusion equation, in simulated brain phantoms resembling white matter (μs′ = 8.25 mm⁻¹, μa = 0.005 mm⁻¹) and gray matter (μs′ = 2.45 mm⁻¹, μa = 0.035 mm⁻¹) at a wavelength of 800 nm. The goodness of fit between the two models was determined using the coefficient of determination (R-squared analysis). Preliminary results show the empirical algorithm matches Monte Carlo simulated fluence over a wide range of albedo (0.7 to 0.99), while the diffusion equation fails for lower albedo. The photon fluence generated by the empirical code matched the Monte Carlo in homogeneous phantoms (R² = 0.99). While the GPU-based Monte Carlo achieved 300X acceleration compared to earlier CPU-based models, the empirical code is 700X faster than the Monte Carlo for a typical super-Gaussian laser beam.
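As a rough sketch of the kind of reference calculation involved (a weighted photon-packet random walk, not the authors' GPU code), using the gray-matter coefficients quoted above and isotropic scattering for simplicity:

    import numpy as np

    rng = np.random.default_rng(5)

    mus, mua = 2.45, 0.035           # gray-matter-like mm^-1 values from the text
    mut = mus + mua
    albedo = mus / mut
    n_packet = 20_000
    absorbed = np.zeros(60)          # absorbed weight per 1-mm depth bin

    for _ in range(n_packet):
        z, uz, w = 0.0, 1.0, 1.0     # launch at the surface, heading inward
        while w > 1e-2:              # coarse weight cutoff keeps the demo fast
            z += uz * (-np.log(rng.random()) / mut)   # free path to interaction
            if z < 0.0:
                break                # packet escaped through the surface
            k = int(z)
            if k < absorbed.size:
                absorbed[k] += w * (1.0 - albedo)     # deposit absorbed weight
            w *= albedo              # survival weight update
            uz = 2.0 * rng.random() - 1.0             # isotropic redirection

    fluence = absorbed / (n_packet * mua)  # absorbed weight / mua ~ fluence
    print(fluence[:10])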
NASA Astrophysics Data System (ADS)
Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.
2014-01-01
Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggests potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations. Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling factors based on recently updated neutron and proton excitation functions (probability of nuclide production in a given nuclear reaction as a function of energy) for commonly measured in situ cosmogenic nuclides. Such scaling factors reflect the influence of the energy distribution of the flux folded with the relevant excitation functions. Resulting scaling factors indicate ³He shows the strongest positive deviation from the flux-based scaling, while ¹⁴C exhibits a negative deviation. These results are consistent with a recent Monte Carlo-based study using a different cosmic-ray physics code package but the same excitation functions.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
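In that spirit, a student-level analog transport routine (one-dimensional slab, monoenergetic particles, isotropic scattering, made-up cross sections) can be written in a few lines; this is a sketch, not MCNP:

    import numpy as np

    rng = np.random.default_rng(7)

    sigma_t, sigma_s, thickness = 1.0, 0.6, 5.0   # assumed cross sections (1/cm), cm
    n_hist = 100_000
    transmitted = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                  # birth at the left face, heading right
        while True:
            x += mu * (-np.log(rng.random()) / sigma_t)  # free-flight sampling
            if x < 0.0:                   # leaked back out the left face
                break
            if x > thickness:             # transmitted through the slab
                transmitted += 1
                break
            if rng.random() > sigma_s / sigma_t:         # absorbed
                break
            mu = 2.0 * rng.random() - 1.0                # isotropic scatter
    p = transmitted / n_hist
    err = np.sqrt(p * (1 - p) / n_hist)
    print(f"transmission = {p:.4f} +/- {err:.4f}")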
Coupled particle-in-cell and Monte Carlo transport modeling of intense radiographic sources
NASA Astrophysics Data System (ADS)
Rose, D. V.; Welch, D. R.; Oliver, B. V.; Clark, R. E.; Johnson, D. L.; Maenchen, J. E.; Menge, P. R.; Olson, C. L.; Rovang, D. C.
2002-03-01
Dose-rate calculations for intense electron-beam diodes using particle-in-cell (PIC) simulations along with Monte Carlo electron/photon transport calculations are presented. The electromagnetic PIC simulations are used to model the dynamic operation of the rod-pinch and immersed-B diodes. These simulations include algorithms for tracking electron scattering and energy loss in dense materials. The positions and momenta of photons created in these materials are recorded and separate Monte Carlo calculations are used to transport the photons to determine the dose in far-field detectors. These combined calculations are used to determine radiographer equations (dose scaling as a function of diode current and voltage) that are compared directly with measured dose rates obtained on the SABRE generator at Sandia National Laboratories.
Combined experimental and Monte Carlo verification of brachytherapy plans for vaginal applicators
NASA Astrophysics Data System (ADS)
Sloboda, Ron S.; Wang, Ruqing
1998-12-01
Dose rates in a phantom around a shielded and an unshielded vaginal applicator containing Selectron low-dose-rate sources were determined by experiment and Monte Carlo simulation. Measurements were performed with thermoluminescent dosimeters in a white polystyrene phantom using an experimental protocol geared for precision. Calculations for the same set-up were done using a version of the EGS4 Monte Carlo code system modified for brachytherapy applications, into which a new combinatorial geometry package developed by Bielajew was recently incorporated. Measured dose rates agree with Monte Carlo estimates to within 5% (1 SD) for the unshielded applicator, while highlighting some experimental uncertainties for the shielded applicator. Monte Carlo calculations were also done to determine a value for the effective transmission of the shield required for clinical treatment planning, and to estimate the dose rate in water at points in axial and sagittal planes transecting the shielded applicator. Comparison with dose rates generated by the planning system indicates that agreement is better than 5% (1 SD) at most positions. The precision thermoluminescent dosimetry protocol and modified Monte Carlo code are effective complementary tools for brachytherapy applicator dosimetry.
NASA Astrophysics Data System (ADS)
Haruki, W.; Iseri, Y.; Takegawa, S.; Sasaki, O.; Yoshikawa, S.; Kanae, S.
2016-12-01
Natural disasters caused by heavy rainfall occur every year in Japan, and effective countermeasures against such events are important. In 2015, a catastrophic flood occurred in the Kinu river basin, located in the northern part of the Kanto region. The remarkable feature of this flood event was not only the intensity of the rainfall but also the spatial characteristics of the heavy rainfall area. The flood was caused by continuous overlapping of the heavy rainfall area over the Kinu river basin, suggesting that consideration of spatial extent is quite important when assessing the impacts of heavy rainfall events. However, the spatial extent of heavy rainfall events cannot be properly measured by rain gauges at observation points. On the other hand, radar observations provide spatially and temporally high-resolution rainfall data, which are useful for capturing the characteristics of heavy rainfall events. For effective long-term countermeasures, extreme heavy rainfall scenarios that account for rainfall area and distribution are required. In this study, a new method for generating extreme heavy rainfall events using Monte Carlo simulation has been developed in order to produce such scenarios. This study used AMeDAS analyzed precipitation data, a high-resolution gridded precipitation product of the Japan Meteorological Agency. Depth-area-duration (DAD) analysis was conducted to extract past extreme rainfall events, considering time and spatial scale. In the Monte Carlo simulation, extreme rainfall events are generated based on the events extracted by the DAD analysis. Extreme heavy rainfall events are generated in a specific region of Japan, and the types of generated events can be changed by varying the parameters. For application of this method, we focused on the Kanto region. As a result, 3000 years of rainfall data were generated. The 100-year probable rainfall and the return period of the 2015 Kinu River flood were obtained using the generated data, and the 100-year probable rainfall calculated by this method was compared with traditional methods. The newly developed method enables us to generate extreme rainfall events considering time and spatial scale and to produce extreme rainfall scenarios.
Monte Carlo calculations for reporting patient organ doses from interventional radiology
NASA Astrophysics Data System (ADS)
Huo, Wanli; Feng, Mang; Pi, Yifei; Chen, Zhi; Gao, Yiming; Xu, X. George
2017-09-01
This paper describes a project to generate organ dose data for the purposes of extending the VirtualDose software from CT imaging to interventional radiology (IR) applications. A library of 23 mesh-based anthropometric patient phantoms was used in Monte Carlo simulations for the database calculations. Organ doses and effective doses of IR procedures with specific beam projection, field of view (FOV) and beam quality for all parts of the body were obtained. Comparing organ doses generated by VirtualDose-IR for different beam qualities, beam projections, patient ages and patient body mass indexes (BMIs), significant discrepancies were observed. For the relatively long exposures of IR procedures, doses depend on beam quality, beam direction and patient size. Therefore VirtualDose-IR, which is based on the latest anatomically realistic patient phantoms, can generate accurate doses for IR treatment. It is suitable to apply this software in clinical IR dose management as an effective tool to estimate patient doses and optimize IR treatment plans.
Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach
NASA Technical Reports Server (NTRS)
Mata, Carlos T.; Rakov, V. A.
2008-01-01
There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate origins of downward propagating leaders and a lognormal distribution to generate return stroke peak currents. Downward leaders propagate vertically downward and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for 10,000 years with an assumed ground flash density and peak current distributions, and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.
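A compact sketch of such a tool under simplifying assumptions (point-like objects, the common striking-distance relation r = 10·I^0.65, and an assumed lognormal peak-current distribution; all coordinates and parameters below are placeholders):

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical objects of interest: (x, y, height) in metres.
    objects = np.array([[0.0,  0.0, 60.0],
                        [30.0, 10.0, 25.0]])

    def striking_distance(i_ka):
        return 10.0 * i_ka ** 0.65   # common electrogeometric relation (m)

    n_flash = 100_000
    hits = np.zeros(len(objects), dtype=int)
    for _ in range(n_flash):
        # Uniformly distributed leader origin, lognormal peak current
        # (assumed median and spread, standing in for the site distribution).
        xy = rng.uniform(-200.0, 200.0, size=2)
        i_ka = rng.lognormal(mean=np.log(31.1), sigma=0.48)
        r = striking_distance(i_ka)
        # Attach to the closest object whose lateral attractive radius,
        # sqrt(r^2 - (r - h)^2) for h < r, contains the leader; else ground.
        best, best_d = -1, np.inf
        for k, (ox, oy, h) in enumerate(objects):
            R = np.sqrt(max(r * r - (r - min(h, r)) ** 2, 0.0))
            d = np.hypot(xy[0] - ox, xy[1] - oy)
            if d <= R and d < best_d:
                best, best_d = k, d
        if best >= 0:
            hits[best] += 1
    print("direct-attachment probability per flash:", hits / n_flash)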
Fast online Monte Carlo-based IMRT planning for the MRI linear accelerator
NASA Astrophysics Data System (ADS)
Bol, G. H.; Hissoiny, S.; Lagendijk, J. J. W.; Raaymakers, B. W.
2012-03-01
The MRI accelerator, a combination of a 6 MV linear accelerator with a 1.5 T MRI, facilitates continuous patient anatomy updates regarding translations, rotations and deformations of targets and organs at risk. Accounting for these changes demands high-speed, online intensity-modulated radiotherapy (IMRT) re-optimization. In this paper, a fast IMRT optimization system is described which combines a GPU-based Monte Carlo dose calculation engine for online beamlet generation and a fast inverse dose optimization algorithm. Tightly conformal IMRT plans are generated for four phantom cases and two clinical cases (cervix and kidney) in the presence of magnetic fields of 0 and 1.5 T. We show that for the presented cases the beamlet generation and optimization routines are fast enough for online IMRT planning. Furthermore, there is no influence of the magnetic field on plan quality and complexity, and equal optimization constraints at 0 and 1.5 T lead to almost identical dose distributions.
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlo codes can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlo codes are particularly suited is the study of secondary radiation produced as albedos in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collider Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of the status of this project and a roadmap to its successful completion.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10⁷ x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging.
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
Using Data Augmentation and Markov Chain Monte Carlo for the Estimation of Unfolding Response Models
ERIC Educational Resources Information Center
Johnson, Matthew S.; Junker, Brian W.
2003-01-01
Unfolding response models, a class of item response theory (IRT) models that assume a unimodal item response function (IRF), are often used for the measurement of attitudes. Verhelst and Verstralen (1993)and Andrich and Luo (1993) independently developed unfolding response models by relating the observed responses to a more common monotone IRT…
ERIC Educational Resources Information Center
Matthews-Lopez, Joy L.; Hombo, Catherine M.
The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W.
An account is given of the method used to quantify the risks accruing to the use of a radioisotope thermoelectric generator fueled by Pu-238 dioxide aboard the Space Shuttle-launched Ulysses mission. After using a Monte Carlo technique to develop probability distributions for the radiological consequences of a range of accident scenarios throughout the mission, factors affecting those consequences are identified in conjunction with their probability distributions. The functional relationship among all the factors is then established, and probability distributions for all factor effects are combined by means of a Monte Carlo technique.
2010-01-01
respectively. Conformations for all three systems were generated by exhaustive Monte Carlo searching. Relative conformational energies were calculated at the...routines of the Maestro(v. 6.5)/ Macromodel-Batchmin(8.6)21 suite of programs. The number of Monte Carlo steps for the searches was 500 000. Energy ...set using the B3LYP30,31 hybrid density functional. Single-point energies at the MP2/ aug-cc-pVDZ and MP2/aug-cc-pVTZ levels of theory were obtained
Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.
2017-06-19
The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(N⁻¹). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
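The penalty that correlation imposes can be illustrated with a simple AR(1) surrogate for generation-to-generation tallies (an analogy for the fission-source correlation, not a reactor simulation): the naive standard error, computed as if generations were independent, understates the true spread of the mean by roughly sqrt((1+rho)/(1-rho)).

    import numpy as np

    rng = np.random.default_rng(11)

    def gen_tallies(n_gen, rho):
        """AR(1)-correlated generation tallies: a stand-in for the
        intergenerational correlation of fission-site tallies."""
        x = np.empty(n_gen)
        x[0] = rng.normal()
        for i in range(1, n_gen):
            x[i] = rho * x[i - 1] + np.sqrt(1.0 - rho * rho) * rng.normal()
        return x

    rho, n_gen, n_rep = 0.9, 400, 2000
    means = np.array([gen_tallies(n_gen, rho).mean() for _ in range(n_rep)])
    naive = np.mean([gen_tallies(n_gen, rho).std(ddof=1) / np.sqrt(n_gen)
                     for _ in range(n_rep)])
    print("true std of the mean       :", means.std())
    print("naive i.i.d.-style estimate:", naive)
    # With rho = 0.9 the ratio approaches sqrt((1+rho)/(1-rho)) ~ 4.4, i.e.
    # error bars computed as if generations were independent are badly
    # underestimated.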
Indoor Fast Neutron Generator for Biophysical and Electronic Applications
NASA Astrophysics Data System (ADS)
Cannuli, A.; Caccamo, M. T.; Marchese, N.; Tomarchio, E. A.; Pace, C.; Magazù, S.
2018-05-01
This study focuses on an indoor fast neutron generator for biophysical and electronic applications. More specifically, the findings obtained from several simulations with the MCNP Monte Carlo code, necessary for the realization of a shield for indoor measurements, are presented, together with an evaluation of the neutron spectrum modification caused by the shielding. Fast neutron generators are a valid and interesting source of neutrons, increasingly employed in a wide range of research fields in science and engineering. The employed portable pulsed neutron source is an MP320 Thermo Scientific neutron generator, able to generate 2.5 MeV neutrons with a neutron yield of 2.0 × 10⁶ n/s, a pulse rate of 250 Hz to 20 kHz and a duty factor varying from 5% to 100%. The neutron generator, based on Deuterium-Deuterium nuclear fusion reactions, is employed in conjunction with a solid-state photon detector made of n-type high-purity germanium (PINS-GMX by ORTEC), and is mainly addressed to biophysical and electronic studies. The present study puts forward a proposal for the realization of the shield necessary for indoor use of the MP320 neutron generator, with a particular analysis of neutron transport simulated with the Monte Carlo code, and describes the two main lines of research in which the source will be used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehmann, J; University of Sydney, Sydney; RMIT University, Melbourne
2014-06-01
Purpose: Assess the angular dependence of the nanoDot OSLD system in MV X-ray beams at depth and mitigate this dependence for measurements in phantoms. Methods: Measurements for 6 MV photons at 3 cm and 10 cm depth and Monte Carlo simulations were performed. Two special holders were designed which allow a nanoDot dosimeter to be rotated around the center of its sensitive volume (a 5 mm diameter disk). The first holder positions the dosimeter disk perpendicular to the beam (en face); it then rotates until the disk is parallel with the beam (edge on). This is referred to as Setup 1. The second holder positions the disk parallel to the beam (edge on) for all angles (Setup 2). Monte Carlo simulations using GEANT4 considered the detector and housing in detail based on microCT data. Results: An average drop in response of 1.4±0.7% (measurement) and 2.1±0.3% (Monte Carlo) for the 90° orientation compared to 0° was found for Setup 1. Monte Carlo simulations also showed a strong dependence of the effect on the composition of the sensitive layer. Assuming 100% active material (Al2O3) results in a 7% drop in response for 90° compared to 0°. Assuming the layer to be completely water results in a flat response (within the simulation uncertainty of about 1%). For Setup 2, measurements and Monte Carlo simulations found the angular dependence of the dosimeter to be below 1% and within the measurement uncertainty. Conclusion: The nanoDot dosimeter system exhibits a small angular dependence of approximately 2%. Changing the orientation of the dosimeter so that a coplanar beam arrangement always hits the detector material edge on reduces the angular dependence to within the measurement uncertainty of about 1%. This makes the dosimeter more attractive for phantom-based clinical measurements and audits with multiple coplanar beams. The Australian Clinical Dosimetry Service is a joint initiative between the Australian Department of Health and the Australian Radiation Protection and Nuclear Safety Agency.
Probabilistic Thermal Analysis During Mars Reconnaissance Orbiter Aerobraking
NASA Technical Reports Server (NTRS)
Dec, John A.
2007-01-01
A method for performing a probabilistic thermal analysis during aerobraking has been developed. The analysis is performed on the Mars Reconnaissance Orbiter solar array during aerobraking. The methodology makes use of a response surface model derived from a more complex finite element thermal model of the solar array. The response surface is a quadratic equation which calculates the peak temperature for a given orbit drag pass at a specific location on the solar panel. Five different response surface equations are used, one of which predicts the overall maximum solar panel temperature, and the remaining four predict the temperatures of the solar panel thermal sensors. The variables used to define the response surface can be characterized as either environmental, material property, or modeling variables. Response surface variables are statistically varied in a Monte Carlo simulation. The Monte Carlo simulation produces mean temperatures and 3 sigma bounds as well as the probability of exceeding the designated flight allowable temperature for a given orbit. Response surface temperature predictions are compared with the Mars Reconnaissance Orbiter flight temperature data.
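A minimal sketch of this Monte Carlo step, with a placeholder quadratic response surface (the fitted coefficients, input distributions and allowable temperature of the actual analysis are not reproduced here):

    import numpy as np

    rng = np.random.default_rng(2007)

    # Placeholder quadratic response surface T(v) = c0 + c1.v + v.Q.v standing
    # in for the fitted solar-array surrogate; all coefficients are invented.
    c0 = -10.0
    c1 = np.array([30.0, 5.0])
    Q  = np.array([[2.0, 0.3],
                   [0.3, 0.5]])

    # Statistically vary the environmental/material/modeling inputs.
    mu, sig = np.array([1.2, 0.8]), np.array([0.15, 0.10])
    v = rng.normal(mu, sig, size=(100_000, 2))
    T = c0 + v @ c1 + np.einsum("ni,ij,nj->n", v, Q, v)

    allowable = 40.0  # hypothetical flight-allowable temperature
    print("mean peak temperature:", T.mean())
    print("3-sigma bounds       :", T.mean() - 3 * T.std(), T.mean() + 3 * T.std())
    print("P(exceed allowable)  :", (T > allowable).mean())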
Shu, Di-Yun; Geng, Chang-Ran; Tang, Xiao-Bin; Gong, Chun-Hui; Shao, Wen-Cheng; Ai, Yao
2018-07-01
This paper explores the physics of Cherenkov radiation and its potential application in boron neutron capture therapy (BNCT). The Monte Carlo toolkit Geant4 was used to simulate the interaction between the epithermal neutron beam and a phantom containing boron-10. Results showed that Cherenkov photons in BNCT can only be generated by secondary charged particles of gamma rays, of which the 2.223 MeV prompt gamma rays are the main contributor. The number of Cherenkov photons per unit mass generated in the measurement region decreases linearly with increasing boron concentration in both the water and tissue phantoms. The work presents a fundamental basis for applications of Cherenkov radiation in BNCT. Copyright © 2018 Elsevier Ltd. All rights reserved.
Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan
2015-11-06
In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is done to generate the probability distributions for double differential two-dimensional thermal moderator cross sections at any arbitrary user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled, and results are compared with benchmark results. Lastly, the problem-dependent cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those without the problem-dependent cross sections.
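For orientation, the physics being approximated can be illustrated by direct Gaussian free-gas broadening of a toy resonance (valid for E >> kT); the paper's finite-difference scheme is different, and the resonance parameters below are invented:

    import numpy as np

    kB = 8.617e-5          # Boltzmann constant, eV/K
    A, T = 238.0, 900.0    # target-to-neutron mass ratio and temperature (assumed)

    E = np.linspace(6.0, 7.5, 3001)                              # energy grid, eV
    sigma_0K = 1.0 + 5000.0 / (1.0 + ((E - 6.67) / 0.012) ** 2)  # toy resonance

    def broaden(E, sig, A, T):
        """Smear a 0 K cross section with the Gaussian Doppler kernel of
        width sqrt(4*E*kB*T/A) at each energy point."""
        out = np.empty_like(sig)
        for i, e in enumerate(E):
            delta = np.sqrt(4.0 * e * kB * T / A)   # Doppler width, eV
            w = np.exp(-((E - e) / delta) ** 2)
            out[i] = np.trapz(w * sig, E) / np.trapz(w, E)
        return out

    sigma_T = broaden(E, sigma_0K, A, T)
    print("peak at 0 K:", sigma_0K.max(), " peak broadened:", sigma_T.max())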
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Sidorov, V. G.; Zelenkov, P. V.; Khoroshko, A. Y.; Lelekov, A. T.
2015-10-01
To optimize the parameters of a beta-electrical converter of Nickel-63 isotope radiation, a model of the distribution of the electron-hole pair (EHP) generation rate in the semiconductor must be derived. Using Monte-Carlo methods in the GEANT4 system with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with a Gaussian function. The maximal efficient isotope layer thickness and the maximal energy efficiency of EHP generation were estimated.
Filipino Insurgencies (1899-1913): Failures to Incite Popular Support
2016-04-08
Wars of Peace, 6. 5 Richard E . Welch, Response to Imperialism: The United States and the Philippine- American War, 1899-1902 (Chapel Hill, NC...Muslim Filipinos, 1899-1920 (Quezon City, Philippines: New Day Publishing, 1983). 24 Carlos Quirino, Filipinos at War (Philippines: Vera -Reyes, Inc...Carlos. Filipinos at War. Philippines: Vera -Reyes, Inc, 1987. Ramsey, Robert D. Savage Wars of Peace: Case Studies of Pacification in the Philippines
Tian, Bao-Guo; Si, Ji-Tao; Zhao, Yan; Wang, Hong-Tao; Hao, Ji-Ming
2007-01-01
This paper deals with the procedure and methodology which can be used to select the optimal treatment and disposal technology of municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of a study of solid waste management status and development trends in China and abroad. Focusing on various treatment and disposal technologies and processes of MSW, this study established a Monte-Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream (such as C, H, O, N, S) analysis in combination with economic stream analysis of MSW was developed. By following the streams of different treatment processes consisting of various techniques from generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte-Carlo method was then conducted for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation from landfill gas was economically the optimal technology at the present stage under the condition of more than 58% of C, H, O, N, S going to landfill. Whether or not to generate electricity was the most sensitive factor. If landfilling costs increase, MSW separation treatment was recommended, with screening first followed by partial incineration and partial composting with residue landfilling. The possibility of incineration being selected as the optimal technology was affected by city scale. For big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle and small cities, the effectiveness of incinerating waste decreases.
The frozen nucleon approximation in two-particle two-hole response functions
Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.; ...
2017-07-10
Here, we present a fast and efficient method to compute the inclusive two-particle two-hole (2p–2h) electroweak responses in the neutrino and electron quasielastic inclusive cross sections. The method is based on two approximations. The first neglects the motion of the two initial nucleons below the Fermi momentum, which are considered to be at rest. This approximation, which is reasonable for high values of the momentum transfer, turns out also to be quite good for moderate values of the momentum transfer q ≳ kF. The second approximation involves using in the “frozen” meson-exchange currents (MEC) an effective Δ-propagator averaged over the Fermi sea. Within the resulting “frozen nucleon approximation”, the inclusive 2p–2h responses are accurately calculated with only a one-dimensional integral over the emission angle of one of the final nucleons, thus drastically simplifying the calculation and reducing the computational time. The latter makes this method especially well-suited for implementation in Monte Carlo neutrino event generators.
Fraser, Kirk A.; St-Georges, Lyne; Kiss, Laszlo I.
2014-01-01
Recognition of the friction stir welding process is growing in the aeronautical and aerospace industries. To make the process more accessible to the structural fabrication industry (buildings and bridges), it is desirable to model the process and determine the highest speed of advance that will not cause unwanted welding defects. A numerical solution to the transient two-dimensional heat diffusion equation for the friction stir welding process is presented. A non-linear heat generation term based on an arbitrary piecewise linear model of friction as a function of temperature is used. The solution is used to solve for the temperature distribution in the Al 6061-T6 workpieces. The finite difference solution of the non-linear problem is used to perform a Monte Carlo simulation (MCS). A polynomial response surface (maximum welding temperature as a function of advancing and rotational speed) is constructed from the MCS results. The response surface is used to determine the optimum tool speed of advance and rotational speed. The exterior penalty method is used to find the highest speed of advance and the associated rotational speed of the tool for the FSW process considered. We show that good agreement with experimental optimization work is possible with this simplified model. Using our approach an optimal weld pitch of 0.52 mm/rev is obtained for 3.18 mm thick AA6061-T6 plate. Our method provides an estimate of the optimal welding parameters in less than 30 min of calculation time. PMID:28788627
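The overall workflow of the abstract above (thermal model, Monte Carlo sampling, polynomial response surface, exterior-penalty optimization) can be sketched as follows. The surrogate max_temp and every constant below are hypothetical stand-ins for the finite-difference heat-diffusion solver, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate: maximum weld temperature (deg C) versus
# advance speed v (mm/s) and rotational speed w (rpm).
def max_temp(v, w):
    return 300.0 + 0.35 * w - 25.0 * v + rng.normal(0.0, 5.0, np.shape(v))

# 1) Monte Carlo sampling of the design space
v = rng.uniform(1.0, 10.0, 500)
w = rng.uniform(500.0, 1500.0, 500)
t = max_temp(v, w)

# 2) quadratic response surface fitted by least squares
A = np.column_stack([np.ones_like(v), v, w, v**2, w**2, v * w])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)
surf = lambda v, w: coef @ np.array([1.0, v, w, v**2, w**2, v * w])

# 3) exterior penalty: maximize advance speed subject to T <= T_MAX
T_MAX = 520.0
def penalized(x, mu=1e3):
    v, w = x
    return -v + mu * max(0.0, surf(v, w) - T_MAX) ** 2

cands = ((a, b) for a in np.linspace(1.0, 10.0, 50)
                for b in np.linspace(500.0, 1500.0, 50))
v_opt, w_opt = min(cands, key=penalized)
print(f"fastest feasible advance: {v_opt:.1f} mm/s at {w_opt:.0f} rpm")
```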
NASA Astrophysics Data System (ADS)
Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques
2015-12-01
In the event of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action by first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to that of the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of AMIS provides better sampling efficiency by reusing all the generated samples.
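The AMIS recycling loop can be sketched on a toy one-dimensional problem. Here the target posterior, the proposal family, and all constants are assumptions made for illustration; the real application couples a dispersion model with sensor data.

```python
import numpy as np
from scipy import stats

# Toy 1-D stand-in for the source-location posterior.
log_target = stats.norm(3.0, 0.5).logpdf

rng = np.random.default_rng(0)
mu, sig = 0.0, 5.0                     # initial proposal
samples, proposals = [], []

for it in range(5):
    samples.append(rng.normal(mu, sig, 200))
    proposals.append((mu, sig))
    xs = np.concatenate(samples)       # recycle all past samples
    # deterministic-mixture weights over every proposal used so far
    mix = np.mean([stats.norm(m, s).pdf(xs) for m, s in proposals], axis=0)
    w = np.exp(log_target(xs)) / mix
    w /= w.sum()
    # adapt the proposal to the weighted moments
    mu = float(np.sum(w * xs))
    sig = float(np.sqrt(np.sum(w * (xs - mu) ** 2)))

print(f"estimated source location ≈ {mu:.2f} ± {sig:.2f}")
```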
Acceleration of Monte Carlo SPECT simulation using convolution-based forced detection
NASA Astrophysics Data System (ADS)
de Jong, H. W. A. M.; Slijpen, E. T. P.; Beekman, F. J.
2001-02-01
Monte Carlo (MC) simulation is an established tool to calculate photon transport through tissue in Emission Computed Tomography (ECT). Since the first appearance of MC, a large variety of variance reduction techniques (VRT) have been introduced to speed up these notoriously slow simulations. One example of a very effective and established VRT is known as forced detection (FD). In standard FD the path from the photon's scatter position to the camera is chosen stochastically from the appropriate probability density function (PDF), modeling the distance-dependent detector response. In order to speed up MC, the authors propose convolution-based FD (CFD), which involves replacing the sampling of the PDF by a convolution with a kernel which depends on the position of the scatter event. The authors validated CFD for parallel-hole Single Photon Emission Computed Tomography (SPECT) using a digital thorax phantom. Comparison of projections estimated with CFD and standard FD shows that both estimates converge to practically identical projections (maximum bias 0.9% of the peak projection value), despite the slightly different photon paths used in CFD and standard FD. Projections generated with CFD converge, however, to a noise-free projection up to one or two orders of magnitude faster, which is extremely useful in many applications such as model-based image reconstruction.
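A minimal sketch of the convolution-based idea, assuming a Gaussian distance-dependent kernel (the actual kernels model the collimator response): scatter weights are binned per depth slice, and each slice is convolved once instead of sampling the response PDF photon by photon.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cfd_projection(events, n_bins=64):
    """events: iterable of (x_bin, depth_cm, weight) scatter points.
    Bin the weights per depth slice, then convolve each slice with the
    matching distance-dependent kernel once."""
    proj = np.zeros(n_bins)
    slices = {}
    for x, d, w in events:
        sl = slices.setdefault(round(d), np.zeros(n_bins))
        sl[x] += w
    for d, sl in slices.items():
        sigma = 0.5 + 0.1 * d   # illustrative linear resolution model
        proj += gaussian_filter(sl, sigma, mode="constant")
    return proj

rng = np.random.default_rng(3)
ev = [(rng.integers(10, 54), rng.uniform(0, 20), 1.0) for _ in range(5000)]
print(cfd_projection(ev).round(1))
```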
How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.
Lecca, Paola
2018-01-01
We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, using Monte Carlo simulation approaches, we show that these models fit quite accurately almost the entire release profile when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing from scratch complex mathematical models of the physical processes involved in drug release. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them in mathematics, may avoid time-consuming, trial-and-error-based regression procedures. Three bullet points highlight the customization of the procedure.
• An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It specifies the model of the physical process in a simple but accurate way in the formula of the Monte Carlo Micro Step (MCS) time interval.
• Given the experimentally observed curve of drug release, we point out how Monte Carlo heuristics can be integrated in an evolutionary algorithmic approach to infer the mode of MCS best fitting the observed data, and thus the observed release kinetics.
• The software implementing the method is written in the R language, the most widely used free language in the bioinformatics community.
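A toy version of such a Monte Carlo heuristic, with the lattice random walk and all constants chosen purely for illustration (the paper's heuristic encodes the physics in the MCS time-interval formula), might look like:

```python
import numpy as np

rng = np.random.default_rng(11)

# N molecules random-walk on a lattice inside a cubic dosage form of
# half-width `half`; a molecule is "released" once it crosses the surface.
# The Monte Carlo micro step (MCS) is the simulated time unit.
N, half = 5000, 10
moves = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
pos = rng.integers(-half, half + 1, size=(N, 3))
released = np.zeros(N, dtype=bool)

frac = []
for mcs in range(2000):
    inside = ~released
    pos[inside] += moves[rng.integers(0, 6, size=inside.sum())]
    released |= (np.abs(pos) > half).any(axis=1)
    frac.append(released.mean())

print(f"fraction released after {len(frac)} MCS: {frac[-1]:.2%}")
```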
Response functions for neutron skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-02-01
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analysis employing the integral line-beam method. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 deg, as measured from the source-to-detector axis. The neutron and associated secondary photon conical-beam response functions (CBRFs) for azimuthally symmetric neutron sources are also evaluated at 13 neutron source energies in the same energy range and at 13 polar angles of source collimation from 1 to 89 deg. The response functions are approximated by an empirical three-parameter function of the source-to-detector distance. These response function approximations are available for a source-to-detector distance up to 2,500 m and, for the first time, give dose equivalent responses that are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and secondary photons are calculated and also approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, simple procedures are proposed for humidity and atmospheric density corrections.
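The fitting of a three-parameter empirical approximation to a response-versus-distance table can be sketched as follows. The functional form and the synthetic data below are assumptions for illustration, not the paper's actual parameterization.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed three-parameter form: power law times exponential attenuation.
def rf(d, a, b, c):
    return a * d**b * np.exp(-c * d)

d = np.linspace(50.0, 2500.0, 25)
resp = 4e-3 * d**-1.3 * np.exp(-d / 800.0)      # synthetic "MCNP" data
resp *= 1 + 0.02 * np.random.default_rng(0).normal(size=d.size)

p, _ = curve_fit(rf, d, resp, p0=(1e-2, -1.0, 1e-3))
print("fitted (a, b, c):", p)
```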
Moon, Hyun Ho; Lee, Jong Joo; Choi, Sang Yule; Cha, Jae Sang; Kang, Jang Mook; Kim, Jong Tae; Shin, Myong Chul
2011-01-01
Recently there have been many studies of power systems with a focus on “New and Renewable Energy” as part of the “New Growth Engine Industry” promoted by the Korean government. “New and Renewable Energy”—especially wind energy, solar energy and fuel cells that will replace conventional fossil fuels—is part of the Power-IT Sector, which is the basis of the SmartGrid. A SmartGrid is a form of highly efficient intelligent electricity network that allows interactivity (two-way communication) between suppliers and consumers by utilizing information technology in electricity production, transmission, distribution and consumption. The New and Renewable Energy Program has been driven, through intensive studies by public and private institutions, with the goal of developing and spreading new and renewable energy which, unlike conventional systems, is operated through connections with various kinds of distributed power generation systems. Considerable research on smart grids has been pursued in the United States and Europe. In the United States, a variety of research activities on the smart power grid have been conducted within EPRI’s IntelliGrid research program. The European Union (EU), which represents Europe’s Smart Grid policy, has focused on the expansion of distributed (decentralized) generation and power trade between countries with improved environmental protection. Thus, there is current emphasis on the need for studies that assess the economic efficiency of such distributed generation systems. In this paper, based on the cost of distributed power generation capacity, calculations of the best obtainable profits were made by Monte Carlo simulation. Monte Carlo simulations, which rely on repeated random sampling to compute their results, take into account the cost of electricity production, daily loads and the cost of sales, and generate results faster than direct mathematical computation. In addition, we suggest an optimal design which considers the distribution losses associated with power distribution systems and distributed power generation. PMID:22164047
Garcia, Marie-Paule; Villoing, Daphnée; McKay, Erin; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel
2015-12-01
The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of this approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutical treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. The proposed platform offers a generic framework to implement any scintigraphic imaging protocols and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future such as positron emission tomography.
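The "simulate each compartment once, then weight by the pharmacokinetics" idea can be illustrated with a minimal sketch; the compartments, kinetics, and numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical unit-activity projections (2x2 toy images), one per
# compartment, each simulated only once.
proj = {
    "liver":  np.array([[5.0, 1.0], [0.5, 0.1]]),
    "kidney": np.array([[0.2, 2.0], [1.0, 4.0]]),
}

def activity(comp, t_h):
    # toy mono-exponential kinetics per compartment
    a0 = {"liver": 30.0, "kidney": 12.0}[comp]    # MBq at t = 0
    half = {"liver": 8.0, "kidney": 3.0}[comp]    # effective half-life, h
    return a0 * 0.5 ** (t_h / half)

t = 4.0   # hours post-injection
image = sum(activity(c, t) * p for c, p in proj.items())
print(image)   # aggregated projection at the acquisition time
```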
NASA Astrophysics Data System (ADS)
Peres, David Johnny; Cancelliere, Antonino
2016-04-01
Assessment of shallow landslide hazard is important for appropriate planning of mitigation measures. Generally, the return period of slope instability is assumed as a quantitative metric to map landslide triggering hazard over a catchment. The most commonly applied approach to estimate such a return period consists in coupling a physically-based landslide triggering model (hydrological and slope stability) with rainfall intensity-duration-frequency (IDF) curves. Among the drawbacks of such an approach, the following assumptions may be mentioned: (1) prefixed initial conditions, with no regard to their probability of occurrence, and (2) constant-intensity hyetographs. In our work we propose the use of a Monte Carlo simulation approach in order to investigate the effects of the two above-mentioned assumptions. The approach is based on coupling a physically based hydrological and slope stability model with a stochastic rainfall time series generator. With this methodology a long series of synthetic rainfall data can be generated and given as input to a landslide triggering physically based model, in order to compute the return period of landslide triggering as the mean inter-arrival time of a factor of safety less than one. In particular, we couple the Neyman-Scott rectangular pulses model for hourly rainfall generation and the TRIGRS v.2 unsaturated model for the computation of the transient response to individual rainfall events. Initial conditions are computed by a water table recession model that links the initial conditions of a given event to the final response of the preceding event, thus taking into account the variable inter-arrival time between storms. One thousand years of synthetic hourly rainfall are generated to estimate return periods up to 100 years. Applications are first carried out to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. Then a set of additional simulations is performed in order to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that both the variability of initial conditions and of intra-event rainfall intensity significantly affect return period estimation. In particular, the common assumption of an initial water table depth at the base of the pervious strata may in practice lead to an overestimation of the return period by up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the analysed simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide triggering hazard.
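The return-period estimate itself reduces to counting failure years in the synthetic series. A toy sketch follows, with a hypothetical surrogate replacing the Neyman-Scott/TRIGRS chain; every constant is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
years, events_per_year = 1000, 40

# Hypothetical surrogate for the coupled hydrology/stability model:
# the safety factor drops with storm depth and with a shallow initial
# water table (the paper uses TRIGRS v.2 plus a recession model).
def factor_of_safety(depth_mm, wt_depth_m):
    return 1.0 + 0.01 * (90.0 + 25.0 * wt_depth_m - depth_mm)

failure_years = 0
for _ in range(years):
    for _ in range(events_per_year):
        storm = rng.gamma(2.0, 12.0)       # storm depth, mm
        wt = rng.uniform(0.5, 3.0)         # initial water-table depth, m
        if factor_of_safety(storm, wt) < 1.0:
            failure_years += 1
            break                          # count at most one failure/year

print(f"return period ≈ {years / max(failure_years, 1):.0f} years")
```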
SU-E-T-238: Monte Carlo Estimation of Cerenkov Dose for Photo-Dynamic Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chibani, O; Price, R; Ma, C
Purpose: Estimation of the Cerenkov dose from high-energy megavoltage photon and electron beams in tissue and its impact on radiosensitization using Protoporphyrin IX (PpIX) for tumor targeting enhancement in radiotherapy. Methods: The GEPTS Monte Carlo code is used to generate dose distributions from an 18 MV Varian photon beam and generic high-energy (45-MV) photon and (45-MeV) electron beams in a voxel-based tissue-equivalent phantom. In addition to calculating the ionization dose, the code scores the Cerenkov energy released in the wavelength range 375–425 nm, corresponding to the peak of the PpIX absorption spectrum, using the Frank-Tamm formula. Results: The simulations show that the produced Cerenkov dose suitable for activating PpIX is 4000 to 5500 times lower than the overall radiation dose for all considered beams (18 MV, 45 MV and 45 MeV). These results contradict the recent experimental studies by Axelsson et al. (Med. Phys. 38 (2011) p 4127), where the Cerenkov dose was reported to be only two orders of magnitude lower than the radiation dose. Note that our simulation results can be corroborated by a simple model in which the Frank-Tamm formula is applied for electrons with 2 MeV/cm stopping power generating Cerenkov photons in the 375–425 nm range, assuming these photons have less than 1 mm penetration in tissue. Conclusion: The Cerenkov dose generated by high-energy photon and electron beams may produce minimal clinical effect in comparison with the photon fluence (or dose) commonly used for photo-dynamic therapy. At the present time, it is unclear whether Cerenkov radiation is a significant contributor to the recently observed tumor regression for patients receiving radiotherapy and PpIX versus patients receiving radiotherapy only. The ongoing study will include animal experimentation and investigation of dose rate effects on PpIX response.
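For reference, the Frank-Tamm photon yield over the 375–425 nm window can be computed directly; the refractive index below is an assumed tissue-like value, not a number from the abstract.

```python
import math

# Cerenkov photons per unit path from the Frank-Tamm formula,
# integrated over 375-425 nm, for an electron in tissue.
alpha = 1 / 137.036
n = 1.37                       # assumed refractive index of soft tissue
lam1, lam2 = 375e-9, 425e-9    # wavelength window, m

def photons_per_cm(kinetic_mev):
    gamma = 1.0 + kinetic_mev / 0.511
    beta2 = 1.0 - 1.0 / gamma**2
    term = 1.0 - 1.0 / (beta2 * n**2)
    if term <= 0.0:
        return 0.0             # below the Cerenkov threshold
    # dN/dx = 2*pi*alpha * (1/lam1 - 1/lam2) * (1 - 1/(beta^2 n^2))
    return 2 * math.pi * alpha * (1/lam1 - 1/lam2) * term * 1e-2

for t in (0.3, 1.0, 5.0):
    print(f"{t:>4} MeV: {photons_per_cm(t):.1f} photons/cm in 375-425 nm")
```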
Glavinovíc, M I
1999-02-01
The release of vesicular glutamate, spatiotemporal changes in glutamate concentration in the synaptic cleft and the subsequent generation of fast excitatory postsynaptic currents at a hippocampal synapse were modeled using the Monte Carlo method. It is assumed that glutamate is released from a spherical vesicle through a cylindrical fusion pore into the synaptic cleft and that S-alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors are uniformly distributed postsynaptically. The time course of change in vesicular concentration can be described by a single exponential, but a slow tail is also observed, though only following the release of most of the glutamate. The time constant of decay increases with vesicular size and a lower diffusion constant, and is independent of the initial concentration, becoming markedly shorter for wider fusion pores. The cleft concentration at the fusion pore mouth is not negligible compared to the vesicular concentration, especially for wider fusion pores. Lateral equilibration of glutamate is rapid, and within approximately 50 microseconds all AMPA receptors on average see the same concentration of glutamate. Nevertheless, the single-channel current and the number of channels estimated from mean-variance plots are unreliable and differ when estimated from rise- and decay-current segments. Greater saturation of AMPA receptor channels provides higher but not more accurate estimates. Two factors contribute to the variability of postsynaptic currents and render the mean-variance nonstationary analysis unreliable, even when all receptors see on average the same glutamate concentration. Firstly, the variability of the instantaneous cleft concentration of glutamate, unlike the mean concentration, first rapidly decreases before slowly increasing; the variability is greater for fewer molecules in the cleft and is spatially nonuniform. Secondly, the efficacy with which glutamate produces a response changes with time. Understanding the factors that determine the time course of vesicular content release as well as the spatiotemporal changes of glutamate concentration in the cleft is crucial for understanding the mechanism that generates postsynaptic currents.
Alberti, Luca; Colombo, Loris; Formentin, Giovanni
2018-04-15
The Lombardy Region in Italy is one of the most urbanized and industrialized areas in Europe. The presence of countless sources of groundwater pollution is therefore a matter of environmental concern. The sources of groundwater contamination can be classified into two different categories: 1) Point Sources (PS), which correspond to areas releasing plumes of high concentrations (i.e. hot-spots), and 2) Multiple-Point Sources (MPS), consisting of a series of unidentifiable small sources clustered within large areas, generating anthropogenic diffuse contamination. The latter category frequently predominates in European Functional Urban Areas (FUA) and cannot be managed through standard remediation techniques, mainly because detecting the many different source areas releasing small contaminant mass into groundwater is unfeasible. A specific legislative action has recently been enacted at the regional level (DGR IX/3510-2012) in order to identify areas prone to anthropogenic diffuse pollution and their level of contamination. With a view to defining a management plan, it is necessary to find where MPS are most likely positioned. This paper describes a methodology devised to identify the areas with the highest likelihood of hosting potential MPS. A groundwater flow model was implemented for a pilot area located in the Milan FUA, and through the PEST code a Null-Space Monte Carlo method was applied in order to generate a suite of several hundred hydraulic conductivity field realizations, each maintaining the model in a calibrated state and each consistent with the modelers' expert knowledge. Thereafter, the MODPATH code was applied to generate back-traced advective flowpaths for each of the models built using the conductivity field realizations. Maps were then created displaying the number of backtracked particles that crossed each model cell in each stochastic calibrated model. The result is considered to be representative of the FUA areas with the highest likelihood of hosting MPS responsible for diffuse contamination.
Simulation of argon response and light detection in the DarkSide-50 dual phase TPC
NASA Astrophysics Data System (ADS)
Agnes, P.; Albuquerque, I. F. M.; Alexander, T.; Alton, A. K.; Asner, D. M.; Back, H. O.; Biery, K.; Bocci, V.; Bonfini, G.; Bonivento, W.; Bossa, M.; Bottino, B.; Budano, F.; Bussino, S.; Cadeddu, M.; Cadoni, M.; Calaprice, F.; Canci, N.; Candela, A.; Caravati, M.; Cariello, M.; Carlini, M.; Catalanotti, S.; Cataudella, V.; Cavalcante, P.; Chepurnov, A.; Cicalò, C.; Cocco, A. G.; Covone, G.; D'Angelo, D.; D'Incecco, M.; Davini, S.; de Candia, A.; De Cecco, S.; De Deo, M.; De Filippis, G.; De Vincenzi, M.; Derbin, A. V.; De Rosa, G.; Devoto, A.; Di Eusanio, F.; Di Pietro, G.; Dionisi, C.; Edkins, E.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Franco, D.; Gabriele, F.; Galbiati, C.; Giagu, S.; Giganti, C.; Giovanetti, G. K.; Goretti, A. M.; Granato, F.; Gromov, M.; Guan, M.; Guardincerri, Y.; Hackett, B. R.; Herner, K.; Hughes, D.; Humble, P.; Hungerford, E. V.; Ianni, An.; James, I.; Johnson, T. N.; Keeter, K.; Kendziora, C. L.; Koh, G.; Korablev, D.; Korga, G.; Kubankin, A.; Li, X.; Lissia, M.; Loer, B.; Longo, G.; Ma, Y.; Machado, A. A.; Machulin, I. N.; Mandarano, A.; Mari, S. M.; Maricic, J.; Martoff, C. J.; Meyers, P. D.; Milincic, R.; Monte, A.; Mount, B. J.; Muratova, V. N.; Musico, P.; Napolitano, J.; Navrer Agasson, A.; Oleinik, A.; Orsini, M.; Ortica, F.; Pagani, L.; Pallavicini, M.; Pantic, E.; Pelczar, K.; Pelliccia, N.; Pocar, A.; Pordes, S.; Pugachev, D. A.; Qian, H.; Randle, K.; Razeti, M.; Razeto, A.; Reinhold, B.; Renshaw, A. L.; Rescigno, M.; Riffard, Q.; Romani, A.; Rossi, B.; Rossi, N.; Sablone, D.; Sands, W.; Sanfilippo, S.; Savarese, C.; Schlitzer, B.; Segreto, E.; Semenov, D. A.; Singh, P. N.; Skorokhvatov, M. D.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Tonazzo, A.; Trinchese, P.; Unzhakov, E. V.; Verducci, M.; Vishneva, A.; Vogelaar, B.; Wada, M.; Walker, S.; Wang, H.; Wang, Y.; Watson, A. W.; Westerdale, S.; Wilhelmi, J.; Wojcik, M. M.; Xiang, X.; Xiao, X.; Yang, C.; Ye, Z.; Zhu, C.; Zuzel, G.
2017-10-01
A Geant4-based Monte Carlo package named G4DS has been developed to simulate the response of DarkSide-50, an experiment operating since 2013 at LNGS, designed to detect WIMP interactions in liquid argon. In the process of WIMP searches, DarkSide-50 has achieved two fundamental milestones: the rejection of electron recoil background with a power of ~10^7, using the pulse shape discrimination technique, and the measurement of the residual 39Ar contamination in underground argon, ~3 orders of magnitude lower with respect to atmospheric argon. These results rely on the accurate simulation of the detector response to the liquid argon scintillation, its ionization, and electron-ion recombination processes. This work provides a complete overview of the DarkSide Monte Carlo and of its performance, with a particular focus on PARIS, the custom-made liquid argon response model.
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Brualla, L.
2018-04-01
Monte Carlo simulation of radiation transport is computationally demanding if reasonably low statistical uncertainties of the estimated quantities are to be obtained. Therefore, it can benefit to a large extent from high-performance computing. This work is aimed at assessing the performance of the first generation of the many-integrated-core architecture (MIC) Xeon Phi coprocessor with respect to that of a CPU consisting of a double 12-core Xeon processor in Monte Carlo simulation of coupled electron-photon showers. The comparison was made twofold: first, through a suite of basic tests including parallel versions of the random number generators Mersenne Twister and a modified implementation of RANECU; these tests were addressed to establish a baseline comparison between both devices. Secondly, through the pDPM code developed in this work. pDPM is a parallel version of the Dose Planning Method (DPM) program for fast Monte Carlo simulation of radiation transport in voxelized geometries. A variety of techniques aimed at obtaining large scalability on the Xeon Phi were implemented in pDPM. Maximum scalabilities of 84.2× and 107.5× were obtained on the Xeon Phi for simulations of electron and photon beams, respectively. Nevertheless, in none of the tests involving radiation transport did the Xeon Phi perform better than the CPU. The disadvantage of the Xeon Phi with respect to the CPU is due to the low performance of the former's single core: a single core of the Xeon Phi was more than 10 times less efficient than a single core of the CPU for all radiation transport simulations.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing the analysis of complicated geometrical structures. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations obtained with exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
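The optimization strategy can be illustrated by fitting a cheap polynomial to one transcendental call and checking its error. The degree and interval below are illustrative; the paper uses tailored polynomial and rational approximations with errors below 1%, handled differently near the tails.

```python
import numpy as np

# Precompute once: a degree-10 polynomial replacement for cos(phi)
# on [0, 2*pi], as used when sampling scattering azimuths.
grid = np.linspace(0.0, 2.0 * np.pi, 4000)
coeffs = np.polyfit(grid, np.cos(grid), 10)

# Use in the photon loop: one polynomial evaluation per sample.
phi = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 1_000_000)
fast_cos = np.polyval(coeffs, phi)

print(f"max abs error: {np.max(np.abs(fast_cos - np.cos(phi))):.1e}")
```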
A novel Kinetic Monte Carlo algorithm for Non-Equilibrium Simulations
NASA Astrophysics Data System (ADS)
Jha, Prateek; Kuzovkov, Vladimir; Grzybowski, Bartosz; Olvera de La Cruz, Monica
2012-02-01
We have developed an off-lattice kinetic Monte Carlo simulation scheme for reaction-diffusion problems in soft matter systems. The transition probabilities in the Monte Carlo scheme are taken to be identical to the transition rates in a renormalized master equation of the diffusion process and match those of the Glauber dynamics of the Ising model. Our scheme provides several advantages over the Brownian dynamics technique for non-equilibrium simulations. Since particle displacements are accepted or rejected in a Monte Carlo fashion, as opposed to moving particles following a stochastic equation of motion, nonphysical movements (e.g., violation of a hard-core assumption) are not possible (these moves have zero acceptance). Further, the absence of a stochastic ``noise'' term resolves the computational difficulties associated with generating statistically independent trajectories with definitive mean properties. Finally, since the time step is independent of the magnitude of the interaction forces, much longer time steps can be employed than in Brownian dynamics. We discuss the applications of this scheme to the dynamic self-assembly of photo-switchable nanoparticles and dynamical problems in polymeric systems.
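A minimal sketch of one such off-lattice move with Glauber acceptance follows; the one-dimensional toy energy and all constants are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(42)

def glauber_accept(dE, beta=1.0):
    # Glauber rule: p = 1/(1 + exp(beta*dE)). A hard-core overlap would
    # give dE -> +inf and hence zero acceptance, instead of the
    # unphysical displacement a Brownian-dynamics step could produce.
    x = np.clip(beta * dE, -700.0, 700.0)   # avoid overflow
    return rng.random() < 1.0 / (1.0 + np.exp(x))

def energy(x):
    d = np.diff(np.sort(x))
    return np.sum(1.0 / d ** 2)             # toy soft repulsion in 1-D

x = np.linspace(0.0, 10.0, 8) + rng.normal(0.0, 0.05, 8)
accepted = 0
for _ in range(20000):
    i = rng.integers(x.size)
    trial = x.copy()
    trial[i] += rng.normal(0.0, 0.1)        # off-lattice displacement
    if glauber_accept(energy(trial) - energy(x)):
        x, accepted = trial, accepted + 1

print(f"acceptance: {accepted / 20000:.2f}, positions: {np.sort(x).round(2)}")
```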
Hunt, J G; Watchman, C J; Bolch, W E
2007-01-01
Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D microCT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo--VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques.
Multistage Monte Carlo simulation of jet modification in a static medium
Cao, S.; Park, C.; Barbieri, R. A.; ...
2017-08-22
In this work, the modification of hard jets in an extended static medium held at a fixed temperature is studied using three different Monte Carlo event generators: linear Boltzmann transport (LBT), modular all twist transverse-scattering elastic-drag and radiation (MATTER), and modular algorithm for relativistic treatment of heavy-ion interactions (MARTINI). Each event generator contains a different set of assumptions regarding the energy and virtuality of the partons within a jet versus the energy scale of the medium and, hence, applies to a different epoch in the space-time history of the jet evolution. Here modeling is developed where a jet may sequentially transition from one generator to the next, on a parton-by-parton level, providing a detailed simulation of the space-time evolution of medium modified jets over a much broader dynamic range than has been attempted previously in a single calculation. Comparisons are carried out for different observables sensitive to jet quenching, including the parton fragmentation function and the azimuthal distribution of jet energy around the jet axis. The effect of varying the boundary between different generators is studied and a theoretically motivated criterion for the location of this boundary is proposed. Lastly, the importance of such an approach with coupled generators to the modeling of jet quenching is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giuseppe Palmiotti
In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shima, T.; /Osaka U., Res. Ctr. Nucl. Phys.; Doe, P.J.
2008-01-01
The performance of the MOON detector for a next-generation neutrino-less double-beta decay experiment was evaluated by means of the Monte Carlo method. The MOON detector was found to be a feasible solution for the future experiment to search for the Majorana neutrino mass in the range of 100-30 meV.
MC2-3: Multigroup Cross Section Generation Code for Fast Reactor Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Changho; Yang, Won Sik
This paper presents the methods and performance of the MC2-3 code, a multigroup cross-section generation code for fast reactor analysis, developed to improve the resonance self-shielding and spectrum calculation methods of MC2-2 and to simplify the current multistep schemes generating region-dependent broad-group cross sections. Using the basic neutron data from ENDF/B data files, MC2-3 solves the consistent P1 multigroup transport equation to determine the fundamental mode spectra for use in generating multigroup neutron cross sections. A homogeneous medium or a heterogeneous slab or cylindrical unit cell problem is solved at ultrafine (2082) or hyperfine (~400 000) group levels. In the resolved resonance range, pointwise cross sections are reconstructed with Doppler broadening at specified temperatures. The pointwise cross sections are directly used in the hyperfine group calculation, whereas for the ultrafine group calculation, self-shielded cross sections are prepared by numerical integration of the pointwise cross sections based upon the narrow resonance approximation. For both the hyperfine and ultrafine group calculations, unresolved resonances are self-shielded using the analytic resonance integral method. The ultrafine group calculation can also be performed for a two-dimensional whole-core problem to generate region-dependent broad-group cross sections. Verification tests have been performed using the benchmark problems for various fast critical experiments including Los Alamos National Laboratory critical assemblies; Zero-Power Reactor, Zero-Power Physics Reactor, and Bundesamt für Strahlenschutz experiments; the Monju start-up core; and the Advanced Burner Test Reactor. Verification and validation results with ENDF/B-VII.0 data indicated that eigenvalues from MC2-3/DIF3D agreed well with MCNP5 or VIM Monte Carlo solutions within 200 pcm, and regionwise one-group fluxes were in good agreement with Monte Carlo solutions.
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computation of the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, analysis starts with importance sampling concepts, using a presented two-step updating rule for the design point. This part finishes after a small number of samples have been generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a presented effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the presented rules are shown.
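For reference, Bucher's experimental design places a center point plus two axial points per variable. A minimal sketch follows; the factor f = 3 is a common but not universal first-iteration choice, and the arrays are illustrative.

```python
import numpy as np

def bucher_points(center, sigma, f=3.0):
    """Center point plus axial points at center_i +/- f*sigma_i."""
    pts = [center.copy()]
    for i in range(center.size):
        for s in (+1.0, -1.0):
            p = center.copy()
            p[i] += s * f * sigma[i]
            pts.append(p)
    return np.array(pts)

# two random variables with standard deviations 1 and 2
print(bucher_points(np.array([0.0, 0.0]), np.array([1.0, 2.0])))
```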
FW/CADIS-Ω: An Angle-Informed Hybrid Method for Neutron Transport
NASA Astrophysics Data System (ADS)
Munk, Madicken
The development of methods for deep-penetration radiation transport is of continued importance for radiation shielding, nonproliferation, nuclear threat reduction, and medical applications. As these applications become more ubiquitous, the need for transport methods that can accurately and reliably model the systems' behavior will persist. For these types of systems, hybrid methods are often the best choice to obtain a reliable answer in a short amount of time. Hybrid methods leverage the speed and uniform uncertainty distribution of a deterministic solution to bias Monte Carlo transport to reduce the variance in the solution. At present, the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) hybrid methods are the gold standard by which to model systems that have deeply-penetrating radiation. They use an adjoint scalar flux to generate variance reduction parameters for Monte Carlo. However, in problems where there exists strong anisotropy in the flux, CADIS and FW-CADIS are not as effective at reducing the problem variance as in isotropic problems. This dissertation covers the theoretical background, implementation, and characterization of a set of angle-informed hybrid methods that can be applied to strongly anisotropic deep-penetration radiation transport problems. These methods use a forward-weighted adjoint angular flux to generate variance reduction parameters for Monte Carlo. As a result, they leverage both adjoint and contributon theory for variance reduction. They have been named CADIS-Ω and FW-CADIS-Ω. To characterize CADIS-Ω, several characterization problems with flux anisotropies were devised. These problems contain different physical mechanisms by which flux anisotropy is induced. Additionally, a series of novel anisotropy metrics by which to quantify flux anisotropy are used to characterize the methods beyond standard Figure of Merit (FOM) and relative error metrics. As a result, a more thorough investigation into the effects of anisotropy and the degree of anisotropy on Monte Carlo convergence is possible. The results from the characterization of CADIS-Ω show that it performs best in strongly anisotropic problems that have preferential particle flowpaths, but only if the flowpaths are not comprised of air. Further, the characterization of the method's sensitivity to deterministic angular discretization showed that CADIS-Ω has less sensitivity to discretization than CADIS for both quadrature order and PN order. However, more variation in the results was observed in response to changing quadrature order than PN order. Further, as a result of the forward-normalization in the Ω-methods, ray effect mitigation was observed in many of the characterization problems. The characterization of the CADIS-Ω method in this dissertation serves to outline a path forward for further hybrid methods development. In particular, the response that the Ω-method has to changes in quadrature order and PN order, and its ray effect mitigation, are strong indicators that the method is more resilient than its predecessors to strong anisotropies in the flux. With further method characterization, the full potential of the Ω-methods can be realized. The method can then be applied to geometrically complex, materially diverse problems and help to advance system modelling in deep-penetration radiation transport problems with strong anisotropies in the flux.
Dynamic response analysis of structure under time-variant interval process model
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao
2016-10-01
Due to aggressive environmental factors, variation of dynamic loads, degeneration of material properties and wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can effectively deal with time-variant uncertainties when only limited information is available. Two methods are then presented for the dynamic response analysis of structures under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials which can be efficiently calculated, and the variation range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval operations, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples: a spring-mass-damper system and a shell structure.
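The surrogate idea behind MCM-CPE can be sketched for a single interval parameter: fit a Chebyshev expansion to the response at a few nodes, then bound the response by cheap Monte Carlo sampling of the surrogate. The stand-in response function and bounds below are invented for illustration.

```python
import numpy as np

# Stand-in for an expensive dynamic-response evaluation.
response = lambda p: np.exp(-0.1 * p) * np.cos(2.0 * p)

lo, hi = 1.0, 3.0            # interval bounds of the uncertain parameter
deg = 8
# Chebyshev nodes mapped onto [lo, hi]; one model run per node.
nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))
p_nodes = 0.5 * (hi + lo) + 0.5 * (hi - lo) * nodes
cheb = np.polynomial.chebyshev.Chebyshev.fit(
    p_nodes, response(p_nodes), deg, domain=[lo, hi])

# Monte Carlo on the cheap surrogate estimates the variation range.
samples = np.random.default_rng(0).uniform(lo, hi, 100_000)
y = cheb(samples)
print(f"response range ≈ [{y.min():.4f}, {y.max():.4f}]")
```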
Investigation of Workplace-like Calibration Fields via a Deuterium-Tritium (D-T) Neutron Generator.
Mozhayev, Andrey V; Piper, Roman K; Rathbone, Bruce A; McDonald, Joseph C
2017-04-01
Radiation survey meters and personal dosimeters are typically calibrated in reference neutron fields based on conventional radionuclide sources, such as americium-beryllium (Am-Be) or californium-252 (Cf), either unmodified or heavy-water moderated. However, these calibration neutron fields differ significantly from the workplace fields in which most of these survey meters and dosimeters are being used. Although some detectors are designed to yield an approximately dose-equivalent response over a particular neutron energy range, the response of other detectors is highly dependent upon neutron energy. This, in turn, can result in significant over- or underestimation of the intensity of neutron radiation and/or personal dose equivalent determined in the work environment. The use of simulated workplace neutron calibration fields that more closely match those present at the workplace could improve the accuracy of worker, and workplace, neutron dose assessment. This work provides an overview of the neutron fields found around nuclear power reactors and interim spent fuel storage installations based on available data. The feasibility of producing workplace-like calibration fields in an existing calibration facility has been investigated via Monte Carlo simulations. Several moderating assembly configurations, paired with a neutron generator using the deuterium-tritium (D-T) fusion reaction, were explored.
Kletenik-Edelman, Orly; Reichman, David R; Rabani, Eran
2011-01-28
A novel quantum mode coupling theory combined with a kinetic approach is developed for the description of collective density fluctuations in quantum liquids characterized by Boltzmann statistics. Three mode-coupling approximations are presented and applied to study the dynamic response of para-hydrogen near the triple point and normal liquid helium above the λ-transition. The theory is compared with experimental results and to the exact imaginary time data generated by path integral Monte Carlo simulations. While for liquid para-hydrogen the combination of kinetic and quantum mode-coupling theory provides semi-quantitative results for both short and long time dynamics, it fails for normal liquid helium. A discussion of this failure based on the ideal gas limit is presented.
NASA Astrophysics Data System (ADS)
Aronica, G. T.; Candela, A.
2007-12-01
In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one with a limited number of parameters; it requires practically no calibration, resulting in a robust tool for catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution, whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing relaxation of the classical iso-frequency assumption between rainfall and peak flow. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained from the model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with antecedent moisture conditions (AMC). The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
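The core of such a procedure (sample a storm depth and an antecedent-moisture class, convert rainfall to runoff with SCS-CN, then read the empirical frequency curve) can be sketched as below. The rainfall distribution and curve numbers are toy stand-ins for the regional TCEV model, and each simulated event is treated as an annual maximum for simplicity.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000

# Sample storm depth (toy distribution standing in for the TCEV) and an
# antecedent-moisture-dependent curve number, then apply SCS-CN.
P = rng.gumbel(40.0, 15.0, n).clip(min=1.0)                 # depth, mm
CN = rng.choice([65.0, 75.0, 85.0], n, p=[0.3, 0.5, 0.2])   # AMC I/II/III
S = 25400.0 / CN - 254.0        # potential maximum retention, mm
Ia = 0.2 * S                    # initial abstraction
Q = np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)     # runoff, mm

# Empirical flood frequency curve from ranked maxima (Weibull positions)
Q.sort()
T = (n + 1) / (n - np.arange(n))
for target in (10, 50, 100):
    print(f"T={target:>4} yr: Q ≈ {np.interp(target, T, Q):.1f} mm")
```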
A new method for photon transport in Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Sato, T.; Ogawa, K.
1999-12-01
Monte Carlo methods are used to evaluate data-correction methods such as scatter and attenuation compensation in single photon emission CT (SPECT), treatment planning in radiation therapy, and many industrial applications. In Monte Carlo simulation, photon transport requires calculating the distance from the location of the emitted photon to the nearest boundary of each uniform attenuating medium along its path of travel, and comparing this distance with the length of its path generated at emission. Here, the authors propose a new method that omits the calculation of the location of the exit point of the photon from each voxel and of the distance between the exit point and the original position. The method only checks the medium of each voxel along the photon's path. If the medium differs from that of the voxel from which the photon was emitted, the authors calculate the location of the entry point into the voxel, and the length of the path is compared with the mean free path length generated by a random number. Simulations using the MCAT phantom show that the ratios of the calculation time were 1.0 for the voxel-based method and 0.51 for the proposed method with a 256×256×256 matrix image, thereby confirming the effectiveness of the algorithm.
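The voxel-checking idea can be sketched as follows; the stepping scheme and geometry are simplified assumptions, not the authors' implementation.

```python
import numpy as np

def first_medium_change(vol, pos, direction, step=0.5):
    """Step along the photon direction voxel by voxel and compare each
    voxel's medium with the emission voxel's; a boundary computation is
    needed only once the medium changes."""
    start = vol[tuple(np.floor(pos).astype(int))]
    p = pos.copy()
    while (0 <= p).all() and (p < np.array(vol.shape)).all():
        if vol[tuple(np.floor(p).astype(int))] != start:
            return p          # entry region found; refine boundary here
        p = p + step * direction
    return None               # photon left the volume in one medium

vol = np.zeros((16, 16, 16), dtype=int)
vol[8:, :, :] = 1             # two attenuating media
hit = first_medium_change(vol, np.array([2.0, 8.0, 8.0]),
                          np.array([1.0, 0.0, 0.0]))
print(hit)
```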
NASA Astrophysics Data System (ADS)
Rosenberg, David E.
2015-04-01
State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the modeled issues, and managers often seek near-optimal alternatives that address unmodeled objectives, preferences, limits, uncertainties, and other issues. Early on, Modeling to Generate Alternatives (MGA) formalized near-optimal as performance within a tolerable deviation from the optimal objective function value and identified a few maximally different alternatives that addressed some unmodeled issues. This paper presents new stratified Monte-Carlo Markov Chain sampling and parallel coordinate plotting tools that generate and communicate the structure and extent of the near-optimal region of an optimization problem. Interactive plot controls allow users to explore the region features of most interest. Controls also streamline the process to elicit unmodeled issues and update the model formulation in response to elicited issues. Use for an example single-objective, linear water quality management problem at Echo Reservoir, Utah, identifies numerous and flexible practices to reduce the phosphorus load to the reservoir and maintain close-to-optimal performance. Flexibility is upheld by further interactive alternative generation, transforming the formulation into a multiobjective problem, and relaxing the tolerance parameter to expand the near-optimal region. Compared to MGA, the new blended tools generate more numerous alternatives faster, more fully show the near-optimal region, and help elicit a larger set of unmodeled issues.
Monte Carlo performance studies for the site selection of the Cherenkov Telescope Array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, T.; Arrabito, L.; Bernlöhr, K.
The Cherenkov Telescope Array (CTA) represents the next generation of ground-based instruments for very-high-energy (VHE) gamma-ray astronomy, aimed at improving on the sensitivity of current-generation experiments by an order of magnitude and providing coverage over four decades of energy. The current CTA design consists of two arrays of tens of imaging atmospheric Cherenkov Telescopes, comprising Small, Medium and Large-Sized Telescopes, with one array located in each of the Northern and Southern Hemispheres. To study the effect of the site choice on the overall CTA performance and support the site evaluation process, detailed Monte Carlo simulations have been performed. These results show the impact of different site-related attributes such as altitude, night-sky background and local geomagnetic field on CTA performance for the observation of VHE gamma rays.
A Monte Carlo model for photoneutron generation by a medical LINAC
NASA Astrophysics Data System (ADS)
Sumini, M.; Isolan, L.; Cucchi, G.; Sghedoni, R.; Iori, M.
2017-11-01
For optimal radiation protection planning, a Monte Carlo model using the MCNPX code has been built, allowing an accurate estimate of the spectrometric and geometrical characteristics of photoneutrons generated by a Varian TrueBeam STx© medical linear accelerator. We considered a device working at the reference energy for clinical applications of 15 MV, derived from a Varian Clinac© 2100 modeled using data collected from several papers available in the literature. The model results were compared with neutron and photon dose measurements inside and outside the bunker hosting the accelerator, yielding a complete dose map. Normalized neutron fluences were tallied at different positions in the patient plane and at different depths. A sensitivity analysis with respect to the flattening filter material was performed to highlight aspects that could influence photoneutron production.
Parallelization of KENO-Va Monte Carlo code
NASA Astrophysics Data System (ADS)
Ramón, Javier; Peña, Jorge
1995-07-01
KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation by the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, random-number seeds advanced ahead of time were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of six HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
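The reproducibility trick, giving every history its own pre-advanced random stream so that the tally does not depend on how histories are distributed over processors, can be sketched as follows. numpy's SeedSequence stands in for KENO's manually advanced linear congruential seeds, and the "history" is a toy random walk:

```python
import numpy as np
from multiprocessing import Pool

def history(seed):
    """Toy 'neutron history': collide until absorbed, return collision count."""
    rng = np.random.default_rng(seed)
    n = 0
    while rng.random() > 0.3:    # 30% absorption probability per collision
        n += 1
    return n

if __name__ == "__main__":
    # One independent, reproducible stream per history, fixed in advance.
    seeds = np.random.SeedSequence(2024).spawn(10_000)
    with Pool(4) as pool:                    # any worker count yields the
        tallies = pool.map(history, seeds)   # same per-history results
    print(sum(tallies) / len(tallies))
```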
Aad, G.; Abbott, B.; Abdallah, J.; ...
2011-05-10
We present first measurements of charged and neutral particle-flow correlations in pp collisions using the ATLAS calorimeters. Data were collected in 2009 and 2010 at centre-of-mass energies of 900 GeV and 7 TeV. Events were selected using a minimum-bias trigger which required a charged particle in scintillation counters on either side of the interaction point. Particle flows, sensitive to the underlying event, are measured using clusters of energy in the ATLAS calorimeters, taking advantage of their fine granularity. No Monte Carlo generator used in this analysis can accurately describe the measurements. The results are independent of those based on charged particles measured by the ATLAS tracking systems and can be used to constrain the parameters of Monte Carlo generators.
The Direct Lighting Computation in Global Illumination Methods
NASA Astrophysics Data System (ADS)
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection, on Monte Carlo sampling methods, and on light source simplification. Results include a new sample generation method, a framework for predicting the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which, for the first time, makes ray tracing feasible for highly complex environments.
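The standard Monte Carlo direct-lighting estimator samples points on an area light and averages the area-form integrand. The geometry and Lambertian BRDF below are toy placeholders, not the thesis's algorithms, and the visibility test is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_light(n):
    """Uniform samples on a unit-area square light at z = 2, facing -z."""
    xy = rng.uniform(-0.5, 0.5, size=(n, 2))
    return np.column_stack([xy, np.full(n, 2.0)]), 1.0  # points, pdf = 1/area

def direct_lighting(x, normal, n_samples=4096, Le=10.0):
    pts, pdf = sample_light(n_samples)
    wi = pts - x                              # shading point -> light sample
    dist2 = (wi**2).sum(axis=1)
    wi /= np.sqrt(dist2)[:, None]
    cos_x = np.clip(wi @ normal, 0.0, None)   # cosine at the shaded point
    cos_l = np.clip(wi[:, 2], 0.0, None)      # cosine at the light (normal -z)
    brdf = 1.0/np.pi                          # white Lambertian surface
    # Area-form estimator: mean of Le * f * cos_x * cos_l / (dist^2 * pdf);
    # a real renderer would multiply each term by a 0/1 visibility test.
    return (Le*brdf*cos_x*cos_l/(dist2*pdf)).mean()

print(direct_lighting(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```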
Probabilistic generation of random networks taking into account information on motifs occurrence.
Bois, Frederic Y; Gayraud, Ghislaine
2015-01-01
Because of the huge number of graphs possible even with a small number of nodes, inference on network structure is known to be a challenging problem. Generating large random directed graphs with prescribed probabilities of occurrences of some meaningful patterns (motifs) is also difficult. We show how to generate such random graphs according to a formal probabilistic representation, using fast Markov chain Monte Carlo methods to sample them. As an illustration, we generate realistic graphs with several hundred nodes mimicking a gene transcription interaction network in Escherichia coli.
Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code
NASA Astrophysics Data System (ADS)
Panettieri, Vanessa; Amor Duch, Maria; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat
2007-01-01
The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson & Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm² and a thickness of 0.5 µm; they are placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water™ build-up caps, together with the orientation of the detector, has been investigated for the specific application of MOSFET detectors to entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water™ cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area.
Acharyya, Muktish
2017-07-01
Spin wave interference is studied in a two-dimensional Ising ferromagnet driven by two coherent spherical magnetic field waves, using Monte Carlo simulation. The spin waves are found to propagate and interfere according to the classic rules for the interference pattern generated by two point sources. The interference pattern of the spin wave is observed at one boundary of the lattice. The pattern is detected and studied through spin-flip statistics at high and low temperatures. Destructive interference is manifested as a large number of spin flips, and vice versa.
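A minimal Metropolis sketch of this setup follows: a 2-D Ising lattice driven by two coherent point sources, each contributing a circular wave field h0·cos(ωt − kr). Parameter values are illustrative, not the paper's, and the amplitude decay of a true spherical wave is ignored:

```python
import numpy as np

rng = np.random.default_rng(3)
L, J, T = 64, 1.0, 1.5
h0, w, k = 2.0, 2*np.pi/64, 2*np.pi/16
spins = rng.choice([-1, 1], size=(L, L))
ii, jj = np.indices((L, L))
src = [(L//2, 0), (L//2 + 8, 0)]            # two coherent sources on one edge
r = [np.hypot(ii - a, jj - b) for a, b in src]

def sweep(t):
    h = sum(h0*np.cos(w*t - k*ri) for ri in r)   # superposed driving field
    for _ in range(L*L):                         # one Monte Carlo sweep
        i, j = rng.integers(L, size=2)
        nn = (spins[(i+1) % L, j] + spins[(i-1) % L, j]
              + spins[i, (j+1) % L] + spins[i, (j-1) % L])
        dE = 2*spins[i, j]*(J*nn + h[i, j])      # energy cost of a flip
        if dE <= 0 or rng.random() < np.exp(-dE/T):
            spins[i, j] *= -1

flips = np.zeros((L, L))
for t in range(200):
    before = spins.copy()
    sweep(t)
    flips += before != spins                # accumulate spin-flip statistics
print(flips[:, -1])                         # flip counts along the far boundary
```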
Monte-Carlo Estimation of the Inflight Performance of the GEMS Satellite X-Ray Polarimeter
NASA Technical Reports Server (NTRS)
Kitaguchi, Takao; Tamagawa, Toru; Hayato, Asami; Enoto, Teruaki; Yoshikawa, Akifumi; Kaneko, Kenta; Takeuchi, Yoko; Black, Kevin; Hill, Joanne; Jahoda, Keith;
2014-01-01
We report a Monte-Carlo estimation of the in-orbit performance of a cosmic X-ray polarimeter designed to be installed on the focal plane of a small satellite. The simulation uses GEANT for the transport of photons and energetic particles and results from Magboltz for the transport of secondary electrons in the detector gas. We validated the simulation by comparing spectra and modulation curves with actual data taken with radioactive sources and an X-ray generator. We also estimated the in-orbit background induced by cosmic radiation in low Earth orbit.
Monte-Carlo estimation of the inflight performance of the GEMS satellite x-ray polarimeter
NASA Astrophysics Data System (ADS)
Kitaguchi, Takao; Tamagawa, Toru; Hayato, Asami; Enoto, Teruaki; Yoshikawa, Akifumi; Kaneko, Kenta; Takeuchi, Yoko; Black, Kevin; Hill, Joanne; Jahoda, Keith; Krizmanic, John; Sturner, Steven; Griffiths, Scott; Kaaret, Philip; Marlowe, Hannah
2014-07-01
We report a Monte-Carlo estimation of the in-orbit performance of a cosmic X-ray polarimeter designed to be installed on the focal plane of a small satellite. The simulation uses GEANT for the transport of photons and energetic particles and results from Magboltz for the transport of secondary electrons in the detector gas. We validated the simulation by comparing spectra and modulation curves with actual data taken with radioactive sources and an X-ray generator. We also estimated the in-orbit background induced by cosmic radiation in low Earth orbit.
Probabilistic structural analysis using a general purpose finite element program
NASA Astrophysics Data System (ADS)
Riha, D. S.; Millwater, H. R.; Thacker, B. H.
1992-07-01
This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.
NASA Technical Reports Server (NTRS)
Petrachenko, Bill
2010-01-01
The first concrete actions toward a next generation system for geodetic VLBI began in 2003 when the IVS initiated Working Group 3 to investigate requirements for a new system. The working group set out ambitious performance goals and sketched out initial recommendations for the system. Starting in 2006, developments continued under the leadership of the VLBI2010 Committee (V2C) in two main areas: Monte Carlo simulators were developed to evaluate proposed system changes according to their impact on IVS final products, and a proof-of-concept effort sponsored by NASA was initiated to develop next generation systems and verify the concepts behind VLBI2010. In 2009, the V2C produced a progress report that summarized the conclusions of the Monte Carlo work and outlined recommendations for the next generation system in terms of systems, analysis, operations, and network configuration. At the time of writing: two complete VLBI2010 signal paths have been completed and data is being produced; a number of VLBI2010 antenna projects are under way; and a VLBI2010 Project Executive Group (V2PEG) has been initiated to provide strategic leadership.
Nicolucci, P; Schuch, F
2012-06-01
To use the Monte Carlo code PENELOPE to study the attenuation and tissue-equivalence properties of α-Al2O3:C for OSL dosimetry. Mass attenuation coefficients of α-Al2O3 and α-Al2O3:C with carbon weight concentrations from 1% to 150% were simulated with the PENELOPE Monte Carlo code and compared to the mass attenuation coefficients of soft tissue for photon beams ranging from 50 kV to 10 MV. Also, the attenuation of primary photon beams of 6 MV and 10 MV and the generation of secondary electrons by α-Al2O3:C dosimeters positioned on the entrance surface of a water phantom were studied. A difference of up to 90% was found in the mass attenuation coefficient between the pure α-Al2O3 and the material with 150% weight concentration of dopant at 1.5 keV, corresponding to the K-edge photoelectric absorption of aluminum. However, for energies above 80 keV the concentration of carbon does not affect the mass attenuation coefficient, and the material presents tissue equivalence for the beams studied. The ratio between the mass attenuation coefficients for α-Al2O3:C and for soft tissue is less than unity due to the higher density of α-Al2O3 (2.12 g/cm³), and its tissue equivalence diminishes at lower concentrations of carbon and at lower energies due to the dependence of the radiation interaction effects on atomic number. The largest attenuation of the primary photon beams by the dosimeter was 16% at 250 keV, and the maximum increase in secondary electron fluence at the entrance surface of the phantom was 91% at 2 MeV. The use of OSL dosimeters in radiation therapy can be optimized by using PENELOPE Monte Carlo simulation to study the attenuation and response characteristics of the material. © 2012 American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fallahpoor, M; Abbasi, M; Sen, A
Purpose: Patient-specific three-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: (1) generating quantitative 3D images of radionuclide distribution and attenuation coefficients, and (2) using a reliable method for dose calculation based on the activity and attenuation maps. In this research, internal dosimetry for samarium-153 (153Sm) was performed using SPECT/CT images coupled with the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153Sm treatment (gamma: 103 keV; beta: 0.81 MeV). A SPECT/CT scan was performed with the Siemens Symbia T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image-degrading factors, including body attenuation, Compton scattering, and the collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to an attenuation map. Organ borders were defined by itk-SNAP segmentation on the CT image. GATE was then used for the internal dose calculation. The specific absorbed fractions (SAFs) and S-values were reported following the MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs, as expected. The S-value for lung is the highest after spine, which can be important in 153Sm therapy. Conclusion: We presented the utility of SPECT/CT images and Monte Carlo simulation for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning on a day-to-day basis.
NASA Astrophysics Data System (ADS)
Nelson, Adam
Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. The quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing particle information is known a priori and can be exploited to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows the use of a track-length estimation process, potentially offering even further improvement in tallying efficiency. To produce the needed distributions, however, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than currently used techniques. The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. The method is then tested in a pin cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure of merit for generating scattering moment matrices and fission energy spectra was significantly improved.
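The variance-reduction idea can be shown with a toy angular moment tally. Because the outgoing distribution p(μ) is known a priori, each collision can score the pre-integrated Legendre moments instead of P_l of a single sampled cosine; p(μ) = (1 + aμ)/2 below is an invented pdf, not evaluated nuclear data:

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(11)
a, n_coll, lmax = 0.6, 20_000, 3

def P(l, mu):
    """Legendre polynomial P_l evaluated at mu."""
    return legval(mu, np.eye(lmax + 1)[l])

def sample_mu(n):
    """Rejection-sample scattering cosines from p(mu) = (1 + a*mu)/2."""
    out = []
    while len(out) < n:
        mu = rng.uniform(-1, 1, n)
        keep = rng.uniform(0, (1 + a)/2, n) < (1 + a*mu)/2
        out.extend(mu[keep])
    return np.array(out[:n])

# Analog tally: score P_l of the sampled cosine at each collision.
mu = sample_mu(n_coll)
analog = [P(l, mu).mean() for l in range(lmax + 1)]

# Expected-value tally: score int P_l(mu) p(mu) dmu at every collision;
# it is computable in advance and has zero tally variance in this toy.
grid = np.linspace(-1, 1, 4001)
dx = grid[1] - grid[0]
pre = []
for l in range(lmax + 1):
    y = P(l, grid)*(1 + a*grid)/2
    pre.append(((y[:-1] + y[1:])/2).sum()*dx)   # trapezoidal rule

print("analog:        ", np.round(analog, 4))
print("expected-value:", np.round(pre, 4))      # exact moments: [1, a/3, 0, 0]
```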
A Descriptive Guide to Trade Space Analysis
2015-09-01
… response surface equations (RSEs) as surrogate models. It uses the RSEs with Monte Carlo simulation to quantitatively explore changes across the surfaces …
Path-integral Monte Carlo method for Rényi entanglement entropies.
Herdman, C M; Inglis, Stephen; Roy, P-N; Melko, R G; Del Maestro, A
2014-07-01
We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.
Stochastic evaluation of second-order many-body perturbation energies.
Willow, Soohaeng Yoo; Kim, Kwang S; Hirata, So
2012-11-28
With the aid of the Laplace transform, the canonical expression for the second-order many-body perturbation correction to an electronic energy is converted into the sum of two 13-dimensional integrals, the 12-dimensional parts of which are evaluated by Monte Carlo integration. Weight functions are identified that are analytically normalizable, are finite and non-negative everywhere, and share the same singularities as the integrands. They thus generate appropriate distributions of four-electron walkers via the Metropolis algorithm, yielding correlation energies of small molecules within a few mE_h of the correct values after 10^8 Monte Carlo steps. This algorithm does away with the integral transformation as the hotspot of the usual algorithms, has a far superior size dependence of cost, does not suffer from the sign problem of some quantum Monte Carlo methods, and is potentially easily parallelizable and extensible to other, more complex electron-correlation theories.
Electrosorption of a modified electrode in the vicinity of phase transition: A Monte Carlo study
NASA Astrophysics Data System (ADS)
Gavilán Arriazu, E. M.; Pinto, O. A.
2018-03-01
We present a Monte Carlo study of the electrosorption of an electroactive species on a modified electrode. The surface of the electrode is modified by the irreversible adsorption of a non-electroactive species able to block a percentage of the adsorption sites. This generates an electrode with sites of variable connectivity. A second, electroactive species is adsorbed at surface vacancies and can interact repulsively with itself. In particular, we are interested in analyzing the effect of the non-electroactive species near the critical regime, where the c(2 × 2) structure is formed. Lattice-gas models and Monte Carlo simulations in the grand canonical ensemble are used. The analysis is based on the study of voltammograms, order parameters, isotherms, and configurational entropy per site at several values of the energies and coverage degrees of the non-electroactive species.
Influence of ion chamber response on in-air profile measurements in megavoltage photon beams.
Tonkopi, E; McEwen, M R; Walters, B R B; Kawrakow, I
2005-09-01
This article presents an investigation of the influence of the ion chamber response, including buildup caps, on the measurement of in-air off-axis ratio (OAR) profiles in megavoltage photon beams using Monte Carlo simulations with the EGSnrc system. Two new techniques for the calculation of OAR profiles are presented. Results of the Monte Carlo simulations are compared to measurements performed in 6, 10 and 25 MV photon beams produced by an Elekta Precise linac and shown to agree within the experimental and simulation uncertainties. Comparisons with calculated in-air kerma profiles demonstrate that using a plastic mini phantom gives more accurate air-kerma measurements than using high-Z material buildup caps and that the variation of chamber response with distance from the central axis must be taken into account.
Electromagnetic scaling functions within the Green's function Monte Carlo approach
Rocco, N.; Alvarez-Ruso, L.; Lovato, A.; ...
2017-07-24
We have studied the scaling properties of the electromagnetic response functions of 4He and 12C nuclei computed by the Green's function Monte Carlo approach, retaining only the one-body current contribution. Longitudinal and transverse scaling functions have been obtained in the relativistic and nonrelativistic cases and compared to experiment for various kinematics. The characteristic asymmetric shape of the scaling function exhibited by data emerges in the calculations in spite of the nonrelativistic nature of the model. The results are mostly consistent with scaling of zeroth, first, and second kinds. Our analysis reveals a direct correspondence between the scaling and the nucleon-density response functions. In conclusion, the scaling function obtained from the proton-density response displays scaling of the first kind, even more evidently than the longitudinal and transverse scaling functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agnes, P.; et al.
A Geant4-based Monte Carlo package named G4DS has been developed to simulate the response of DarkSide-50, an experiment operating since 2013 at LNGS, designed to detect WIMP interactions in liquid argon. In the process of WIMP searches, DarkSide-50 has achieved two fundamental milestones: the rejection of electron recoil background with a power of ~10^7, using the pulse shape discrimination technique, and the measurement of the residual 39Ar contamination in underground argon, ~3 orders of magnitude lower than in atmospheric argon. These results rely on the accurate simulation of the detector response to the liquid argon scintillation, its ionization, and electron-ion recombination processes. This work provides a complete overview of the DarkSide Monte Carlo and of its performance, with a particular focus on PARIS, the custom-made liquid argon response model.
Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator
NASA Astrophysics Data System (ADS)
Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.
2012-09-01
This article describes the evaluation of the measurement uncertainty of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative techniques, such as the Newton-Raphson method, are required. Previous studies have been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimate; one of the aims of this article is to take this approximation into account. Following the guidelines presented in GUM Supplement 1, two alternative approaches can be developed: forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure, and inverse measurement uncertainty propagation by Bayesian inference, based on prior information regarding the usual dispersion of values obtained in the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. This research is also relevant for its broad application to measurements requiring hygrometric conditions obtained from two-pressure humidity generators and for providing a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
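The forward GUM-S1 approach, propagating input distributions through the iterative solve itself, can be sketched as follows. The "dew-point" equation here is a generic Magnus-type saturation relation with invented inputs; the real two-pressure generator model has more terms:

```python
import numpy as np

rng = np.random.default_rng(5)
M = 100_000

# Invented input: effective saturation-pressure ratio from the two-pressure
# method, with a small standard uncertainty.
e_ratio = rng.normal(0.30, 0.002, M)
e_s_ref = 2339.0                              # Pa at the saturation temperature

def p_sat(t):
    """Magnus saturation vapour pressure (Pa), t in degrees C."""
    return 611.2*np.exp(17.62*t/(243.12 + t))

def dp_sat(t):
    """Derivative of p_sat, needed by Newton-Raphson."""
    return p_sat(t)*17.62*243.12/(243.12 + t)**2

# Solve p_sat(Td) = e for every Monte Carlo draw with Newton-Raphson.
e = e_ratio * e_s_ref
td = np.full(M, 10.0)                         # starting estimate (degrees C)
for _ in range(20):                           # fixed iteration budget
    td -= (p_sat(td) - e)/dp_sat(td)

print(td.mean(), td.std())                    # dew-point estimate and u(Td)
```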
DRoplet and hAdron generator for nuclear collisions: An update
NASA Astrophysics Data System (ADS)
Tomášik, Boris
2016-10-01
The Monte Carlo generator DRAGON simulates hadron production in ultrarelativistic nuclear collisions. The underlying theoretical description is provided by the blast-wave model. DRAGON includes second-order angular anisotropy in the transverse shape and in the amplitude of the transverse expansion velocity. It also allows one to simulate hadron production from a fragmented fireball, e.g. one resulting from spinodal decomposition at a first-order phase transition.
CHARYBDIS: a black hole event generator
NASA Astrophysics Data System (ADS)
Harris, Christopher M.; Richardson, Peter; Webber, Bryan R.
2003-08-01
CHARYBDIS is an event generator which simulates the production and decay of miniature black holes at hadronic colliders as might be possible in certain extra dimension models. It interfaces via the Les Houches accord to general purpose Monte Carlo programs like HERWIG and PYTHIA which then perform the parton evolution and hadronization. The event generator includes the extra-dimensional `grey-body' effects as well as the change in the temperature of the black hole as the decay progresses. Various options for modelling the Planck-scale terminal decay are provided.
NASA Astrophysics Data System (ADS)
Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.
2004-04-01
A new method using Monte Carlo simulation of the sources of interference reactions in neutron activation analysis experiments has been developed. The neutron spectrum at the sample location has been simulated using the Monte Carlo code MCNP, and the contributions of different elements to a specified gamma line have been determined. The resulting response matrix has been used, together with measured peak areas, to determine the sample masses of the elements of interest. A number of benchmark experiments have been performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for eliminating interference reactions in neutron activation analysis.
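The response-matrix step reduces to a small linear solve: measured peak areas are modelled as R·m, with R[i, j] the simulated counts in gamma line i per unit mass of element j. All numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical MCNP-derived response matrix: 3 gamma lines x 3 elements,
# off-diagonal entries representing interference contributions.
R = np.array([[120.0,  15.0,  0.5],
              [  8.0, 200.0,  3.0],
              [  1.0,   6.0, 90.0]])
peaks = np.array([2650.0, 4310.0, 950.0])   # measured net peak areas

masses, *_ = np.linalg.lstsq(R, peaks, rcond=None)
print(masses)   # element masses with interference contributions removed
```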
NASA Astrophysics Data System (ADS)
Das, R. K.; Li, Z.; Perera, H.; Williamson, J. F.
1996-06-01
Practical dosimeters in brachytherapy, such as thermoluminescent dosimeters (TLDs) and diodes, are usually calibrated against low-energy megavoltage beams. To measure the absolute dose rate near a brachytherapy source, it is necessary to establish the energy response of the detector relative to that at the calibration energy. The purpose of this paper is to assess the accuracy of Monte Carlo photon transport (MCPT) simulation in modelling the absolute detector response as a function of detector geometry and photon energy. We have exposed two different sizes of TLD-100 (LiF chips) and p-type silicon diode detectors to calibrated …, HDR source and superficial x-ray beams. For the Scanditronix electron-field diode, the relative detector response, defined as the measured detector reading per measured unit of air kerma, varied from … (40 kVp beam) to … (… beam). Similarly, for the large and small chips the same quantity varied from … and …, respectively. Monte Carlo simulation was used to calculate the absorbed dose to the active volume of the detector per unit air kerma. If the Monte Carlo simulation is accurate, then the absolute detector response, defined as the measured detector reading per unit dose absorbed by the active detector volume as calculated by Monte Carlo simulation, should be a constant. For the diode, the absolute response is …. For TLDs of size … the absolute response is …, and for TLDs of … it is …. From the above results we can conclude that the absolute response function of the detectors (TLDs and diodes) is directly proportional to the dose absorbed by the active volume of the detector and is independent of beam quality.
NASA Astrophysics Data System (ADS)
Horst, Felix; Fehrenbacher, Georg; Radon, Torsten; Kozlova, Ekaterina; Rosmej, Olga; Czarnecki, Damian; Schrenk, Oliver; Breckow, Joachim; Zink, Klemens
2015-05-01
This work presents a thermoluminescence dosimetry based method for the measurement of bremsstrahlung spectra in the energy range from 30 keV to 100 MeV, resolved into ten energy intervals, and for photon ambient dosimetry in ultrashort pulsed radiation fields such as those generated during operation of the PHELIX laser at the GSI Helmholtzzentrum für Schwerionenforschung. The method is a routine-oriented development applying a multi-filter technique; the data analysis takes around 1 h. The spectral information is obtained by unfolding the responses of ten thermoluminescence dosimeters arranged as a stack with absorbers of different materials and thicknesses, each with a different response function to photon radiation. These response functions were simulated with the Monte Carlo code FLUKA. An algorithm was developed to unfold bremsstrahlung spectra from the readings of the ten dosimeters. The method has been validated by measurements at a clinical electron linear accelerator (6 MV and 18 MV bremsstrahlung). First measurements at the PHELIX laser system were carried out in December 2013 and January 2014. Spectra with photon energies up to 10 MeV and mean energies up to 420 keV were observed at laser intensities around 10^19 W/cm² on a titanium foil target. The measurement results imply that the steel walls of the target chamber might be an additional bright x-ray source.
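The unfolding step can be sketched generically: recover a binned spectrum from the stack readings given per-dosimeter response functions. The response matrix below is a made-up smooth matrix, and non-negative least squares stands in for the authors' unfolding algorithm:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
n_tld = n_bins = 10

# Hypothetical responses: deeper dosimeters respond more to harder bins.
E = np.arange(n_bins)
depth = np.arange(n_tld)[:, None]
Rmat = np.exp(-((depth - E[None, :])**2)/8.0)

true_spec = np.exp(-E/3.0)                   # falling, bremsstrahlung-like
readings = Rmat @ true_spec * rng.normal(1, 0.02, n_tld)   # 2% reading noise

spec, _ = nnls(Rmat, readings)               # non-negative unfolded spectrum
print(np.round(spec/true_spec, 2))           # ~1 where the unfolding succeeds
```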
NASA Astrophysics Data System (ADS)
Schiavon, Nick; de Palmas, Anna; Bulla, Claudio; Piga, Giampaolo; Brunetti, Antonio
2016-09-01
A spectrometric protocol combining energy-dispersive X-ray fluorescence spectrometry with Monte Carlo simulations of experimental spectra using the XRMC code package has been applied for the first time to characterize the elemental composition of a series of famous Iron Age small-scale archaeological bronze replicas of ships (known as the "Navicelle") from the Nuragic civilization in Sardinia, Italy. The proposed protocol is a useful, nondestructive, and fast analytical tool for Cultural Heritage samples. In the Monte Carlo simulations, each sample was modeled as a multilayered object composed of two or three layers depending on the sample: when all are present, the three layers are the original bronze substrate, the surface corrosion patina, and an outermost protective layer (Paraloid) applied during past restorations. The Monte Carlo simulations were able to account for the presence of the patina/corrosion layer as well as the Paraloid protective layer. They also accounted for the roughness effect commonly found at the surface of corroded metal archaeological artifacts. In this respect, the Monte Carlo simulation approach adopted here was, to the best of our knowledge, unique, and enabled us to determine the bronze alloy composition together with the thickness of the surface layers without the need to first remove the surface patinas, a process potentially threatening the preservation of precious archaeological/artistic artifacts for future generations.
Linear and Non-Linear Dielectric Response of Periodic Systems from Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Umari, Paolo
2006-03-01
We present a novel approach that allows the calculation of the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation of the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wavefunction, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average over an iterative sequence. The polarization is sampled through forward-walking. This approach has been validated for the polarizability of an isolated hydrogen atom, and then applied to a periodic system. We then calculate the linear susceptibility and second-order hyper-susceptibility of molecular-hydrogen chains with different bond-length alternations, and assess the quality of nodal surfaces derived from density-functional theory or from Hartree-Fock. The results are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations. [P. Umari, A. J. Williamson, G. Galli, and N. Marzari, Phys. Rev. Lett. 95, 207602 (2005).]
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu
2012-10-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel fractional-step kinetic Monte Carlo algorithms by employing the Trotter Theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional step time-window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communication schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems, and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
Angular dependence of the nanoDot OSL dosimeter.
Kerns, James R; Kry, Stephen F; Sahoo, Narayan; Followill, David S; Ibbott, Geoffrey S
2011-07-01
Optically stimulated luminescent detectors (OSLDs) are quickly gaining popularity as passive dosimeters, with applications in medicine for linac output calibration verification, brachytherapy source verification, treatment plan quality assurance, and clinical dose measurements. With such wide applications, these dosimeters must be characterized for numerous factors affecting their response. The most abundant commercial OSLD is the InLight/OSL system from Landauer, Inc. The purpose of this study was to examine the angular dependence of the nanoDot dosimeter, which is part of the InLight system. Relative dosimeter response data were taken at several angles in 6 and 18 MV photon beams, as well as a clinical proton beam. These measurements were done within a phantom at a depth beyond the build-up region. To verify the observed angular dependence, additional measurements were conducted as well as Monte Carlo simulations in MCNPX. When irradiated with the incident photon beams parallel to the plane of the dosimeter, the nanoDot response was 4% lower at 6 MV and 3% lower at 18 MV than the response when irradiated with the incident beam normal to the plane of the dosimeter. Monte Carlo simulations at 6 MV showed similar results to the experimental values. Examination of the results in Monte Carlo suggests the cause as partial volume irradiation. In a clinical proton beam, no angular dependence was found. A nontrivial angular response of this OSLD was observed in photon beams. This factor may need to be accounted for when evaluating doses from photon beams incident from a variety of directions.
Bonner, W.J.; English, T.C.; Haas, R.H.; Feagan, T.R.; McKinley, R.A.
1987-01-01
The Bureau of Indian Affairs (BIA) is responsible for the natural resource management of approximately 52 million acres of Trust lands in the contiguous United States. The lands are distributed in a "patchwork" fashion throughout the country. Management responsibilities in these areas include minerals, range, timber, fish and wildlife, agricultural, cultural, and archaeological resources. In an age of decreasing natural resources and increasing natural resource values, effective multiple resource management is critical. BIA has adopted a "systems approach" to natural resource management which utilizes Geographic Information System (GIS) technology. The GIS encompasses a continuum of spatial and relational data elements and includes functional capabilities such as data collection, data entry, database development, data analysis, database management, display, and report generation. In support of database development activities, BIA and BLM/TGS conducted a cooperative effort to investigate the potential of 1:100,000-scale Thematic Mapper (TM) false color composites (FCCs) for providing vegetation information suitable for input to the GIS, to be later incorporated into a generalized Bureau-wide land cover map. Land cover information is critical, as the majority of reservations currently have no land cover information in either map or digital form. This poster outlines an approach that includes the manual interpretation of land cover using TM FCCs, the digitizing of interpreted polygons, and the editing of digital data, based upon ground-truthing exercises. An efficient and cost-effective methodology for generating large-area land cover information is illustrated for the Mineral Strip area on the San Carlos Indian Reservation in Arizona. Techniques that capitalize on the knowledge of local natural resources professionals, while minimizing machine-processing requirements, are suggested.
Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT
Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.
2011-01-01
Purpose: In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures: This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE for simulating projections of a hot sphere phantom. Results: We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1 : 2.73 : 3.54 : 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions: The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method.
Monte Carlo, Probability, Algebra, and Pi.
ERIC Educational Resources Information Center
Hinders, Duane C.
1981-01-01
The uses of random number generators are illustrated in three ways: (1) the solution of a probability problem using a coin; (2) the solution of a system of simultaneous linear equations using a die; and (3) the approximation of pi using darts. (MP)
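The dart-throwing approximation of pi mentioned above fits in a few lines: the fraction of random points in the unit square that land inside the quarter circle approaches pi/4.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(size=(1_000_000, 2))            # random "darts" in the square
print(4 * ((pts**2).sum(axis=1) <= 1.0).mean())   # ~3.1416
```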
Stochastic Analysis of Orbital Lifetimes of Spacecraft
NASA Technical Reports Server (NTRS)
Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David
2008-01-01
A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC)--a Fortran computer program, consisting of a previously developed long-term orbit-propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for the number of user-specified cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.
Solar Feasibility Study May 2013 - San Carlos Apache Tribe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rapp, Jim; Duncan, Ken; Albert, Steve
2013-05-01
The San Carlos Apache Tribe (Tribe), in the interests of strengthening tribal sovereignty, becoming more energy self-sufficient, and providing improved services and economic opportunities to tribal members and San Carlos Apache Reservation (Reservation) residents and businesses, has explored a variety of options for renewable energy development. The development of renewable energy technologies and generation is consistent with the Tribe's 2011 Strategic Plan. This Study assessed the possibilities for both commercial-scale and community-scale solar development within the southwestern portions of the Reservation around the communities of San Carlos, Peridot, and Cutter, and in the southeastern Reservation around the community of Bylas. Based on the lack of any commercial-scale electric power transmission between the Reservation and the regional transmission grid, Phase 2 of this Study greatly expanded consideration of community-scale options. Three smaller sites (Point of Pines, Dudleyville/Winkleman, and Seneca Lake) were also evaluated for community-scale solar potential. Three building complexes were identified within the Reservation where the development of site-specific facility-scale solar power would be the most beneficial and cost-effective: Apache Gold Casino/Resort, Tribal College/Skill Center, and the Dudleyville (Winkleman) Casino.
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques can generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data combined with the diminishing available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters and, most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest-neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
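One ingredient of such a tool, ranking dispersed parameters with a sequential feature search wrapped around a k-nearest-neighbor classifier, can be sketched with scikit-learn; the data below are synthetic stand-ins for flight dynamics outputs, and the kernel density estimation step of the authors' tool is omitted:

```python
# Rank which dispersed parameters best separate pass/fail Monte Carlo runs.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # 8 dispersed design parameters
y = (X[:, 2] + 0.5 * X[:, 5] > 1.0).astype(int)  # failures driven by params 2 and 5

knn = KNeighborsClassifier(n_neighbors=5)
sfs = SequentialFeatureSelector(knn, n_features_to_select=2).fit(X, y)
print("parameters flagged as important:", np.flatnonzero(sfs.get_support()))
```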
A probabilistic seismic risk assessment procedure for nuclear power plants: (I) Methodology
Huang, Y.-N.; Whittaker, A.S.; Luco, N.
2011-01-01
A new procedure for probabilistic seismic risk assessment of nuclear power plants (NPPs) is proposed. This procedure modifies current procedures using tools developed recently for performance-based earthquake engineering of buildings. The proposed procedure uses (a) response-based fragility curves to represent the capacity of structural and nonstructural components of NPPs, (b) nonlinear response-history analysis to characterize the demands on those components, and (c) Monte Carlo simulations to determine the damage state of the components. The use of response- rather than ground-motion-based fragility curves enables the curves to be independent of seismic hazard and closely related to component capacity. The use of the Monte Carlo procedure enables the correlation in the responses of components to be directly included in the risk assessment. An example of the methodology is presented in a companion paper to demonstrate its use and provide the technical basis for aspects of the methodology. © 2011 Published by Elsevier B.V.
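Step (c) can be sketched in a few lines: given a lognormal fragility curve and sampled component demands, draw each component's damage state by Monte Carlo. All parameter values below are placeholders of ours, not the paper's:

```python
# Monte Carlo damage-state sampling against a lognormal fragility curve:
# P(damage | demand d) = Phi( ln(d / median) / beta ).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
median, beta = 1.5, 0.4   # placeholder capacity parameters
# Demands would come from nonlinear response-history analysis; here, a toy draw.
demands = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=100_000)

p_damage = norm.cdf(np.log(demands / median) / beta)
damaged = rng.random(demands.size) < p_damage   # one Bernoulli draw per sample
print(f"P(damage) ~ {damaged.mean():.3f}")
```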
NASA Astrophysics Data System (ADS)
Prettyman, T. H.; Gardner, R. P.; Verghese, K.
1993-08-01
A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The weight windows technique, employing splitting and Russian roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code, and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity tool and was found to be very accurate. Results of the experimental validation and details of code performance are presented.
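The weight-window game with splitting and Russian roulette that McENL employs has a simple generic form; the sketch below is our illustration with arbitrary window bounds, and it preserves particle weight in expectation:

```python
# Generic weight-window adjustment: roulette low-weight particles, split
# high-weight ones, leave in-window particles alone.
import random

def weight_window(particles, w_low, w_high, rng=None):
    """particles: list of (weight, state); returns the adjusted population."""
    rng = rng or random.Random(0)
    survivors = []
    for w, state in particles:
        if w < w_low:                              # Russian roulette
            if rng.random() < w / w_low:           # survive with p = w / w_low
                survivors.append((w_low, state))   # survivor carries weight w_low
        elif w > w_high:                           # splitting
            n = int(w / w_high) + 1
            survivors.extend([(w / n, state)] * n)
        else:
            survivors.append((w, state))
    return survivors

print(weight_window([(0.01, "a"), (5.0, "b"), (1.0, "c")], w_low=0.1, w_high=2.0))
```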
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Abhijit; Voter, Arthur
2009-01-01
We develop a variation of the temperature accelerated dynamics (TAD) method, called the p-TAD method, that efficiently generates an on-the-fly kinetic Monte Carlo (KMC) process catalog with control over the accuracy of the catalog. It is assumed that transition state theory is valid. The p-TAD method guarantees that processes relevant at the timescales of interest to the simulation are present in the catalog with a chosen confidence. A confidence measure associated with the process catalog is derived. The dynamics is then studied using the process catalog with the KMC method. The effective accuracy of a p-TAD calculation is derived for the case when a KMC catalog is reused for conditions different from those for which the catalog was originally generated. Different KMC catalog generation strategies that exploit the features of the p-TAD method and ensure higher accuracy and/or computational efficiency are presented. The accuracy and the computational requirements of the p-TAD method are assessed. Comparisons to the original TAD method are made. As an example, we study dynamics in sub-monolayer Ag/Cu(110) at the time scale of seconds using the p-TAD method. It is demonstrated that the p-TAD method overcomes several challenges plaguing the conventional KMC method.
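For readers unfamiliar with how such a process catalog is consumed, the standard residence-time KMC step looks like the sketch below (the generic algorithm with placeholder rates, not the p-TAD code itself):

```python
# One residence-time (BKL/Gillespie-style) KMC step: pick a process with
# probability proportional to its rate, then advance the clock by an
# exponentially distributed waiting time.
import math
import random

def kmc_step(catalog, rng=None):
    """catalog: list of (rate, process_id). Returns (chosen process, dt)."""
    rng = rng or random.Random(0)
    total = sum(rate for rate, _ in catalog)
    r = rng.random() * total
    acc = 0.0
    for rate, proc in catalog:
        acc += rate
        if r <= acc:
            return proc, -math.log(1.0 - rng.random()) / total
    return catalog[-1][1], -math.log(1.0 - rng.random()) / total

print(kmc_step([(1e3, "hop_left"), (1e3, "hop_right"), (1e1, "exchange")]))
```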
Calibrating and training of neutron based NSA techniques with less SNM standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H; Swinhoe, Martyn T; Bracken, David S
2010-01-01
Accessing special nuclear material (SNM) standards for the calibration of, and training on, nondestructive assay (NDA) instruments has become increasingly difficult in light of enhanced safeguards and security regulations. Limited or nonexistent access to SNM has affected neutron-based NDA techniques more than gamma-ray techniques because the effects of multiplication require a range of masses to accurately measure the detector response. Neutron-based NDA techniques can also be greatly affected by the matrix and impurity characteristics of the item. The safeguards community has been developing techniques for calibrating instrumentation and training personnel with dwindling numbers of SNM standards. Monte Carlo methods have become increasingly important for the design and calibration of instrumentation. Monte Carlo techniques have the ability to accurately predict the detector response for passive techniques. The Monte Carlo results are usually benchmarked to neutron source measurements such as californium. For active techniques, the modeling becomes more difficult because of the interaction of the interrogation source with the detector and nuclear material, and the results cannot simply be benchmarked with neutron sources. A Monte Carlo calculated calibration curve for a training course in Indonesia on material test reactor (MTR) fuel elements assayed with an active well coincidence counter (AWCC) will be presented as an example. Performing training activities with reduced amounts of nuclear material makes it difficult to demonstrate how the multiplication and matrix properties of the item affect the detector response and limits the knowledge that can be obtained with hands-on training. A neutron pulse simulator (NPS) has been developed that can produce a pulse stream representative of a real pulse stream output from a detector measuring SNM. The NPS has been used by the International Atomic Energy Agency (IAEA) for detector testing and training applications at the Agency due to the lack of appropriate SNM standards. This paper will address the effect of reduced access to SNM on the calibration and training of neutron NDA applications, along with the advantages and disadvantages of some solutions that do not use standards, such as the Monte Carlo techniques and the NPS.
Statistical hadronization and microcanonical ensemble
Becattini, F.; Ferroni, L.
2004-01-01
We present a Monte Carlo calculation of the microcanonical ensemble of the ideal hadron-resonance gas including all known states up to a mass of 1.8 GeV, taking into account quantum statistics. The computing method is a development of a previous one based on a Metropolis Monte Carlo algorithm, with the grand-canonical limit of the multi-species multiplicity distribution as the proposal matrix. The microcanonical average multiplicities of the various hadron species are found to converge to the canonical ones for moderately low values of the total energy. This algorithm opens the way for event generators based on the statistical hadronization model.
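The underlying Metropolis accept/reject step can be written generically; the sketch below uses a toy discrete target rather than the hadron-resonance microcanonical weight, so it only illustrates the kind of algorithm the paper builds on:

```python
# Bare-bones Metropolis sampler for an unnormalized log-weight with a
# symmetric proposal: accept when log(u) < log w(x') - log w(x).
import math
import random

def metropolis(log_weight, proposal, x0, n_steps, rng=None):
    rng = rng or random.Random(0)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = proposal(x, rng)
        if math.log(1.0 - rng.random()) < log_weight(x_new) - log_weight(x):
            x = x_new
        samples.append(x)
    return samples

# Toy target: a discrete "multiplicity" n >= 0 with weight exp(-n^2 / 50).
samples = metropolis(lambda n: -n * n / 50.0,
                     lambda n, r: max(0, n + r.choice([-1, 1])),
                     x0=5, n_steps=10_000)
print(sum(samples) / len(samples))
```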
NASA Astrophysics Data System (ADS)
Šantić, Branko; Gracin, Davor
2017-12-01
A new simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the generally used Markov chain method for sample generation. Each sample is pristine and there is no correlation with other samples. As the main novelty, pairs of ions are gradually added to a sample provided that the energy of each ion is within the boundaries determined by the temperature and the size of the ions. The proposed method provides reliable results, as demonstrated by the screening of an ion in plasma and in water.
Use of speckle for determining the response characteristics of Doppler imaging radars
NASA Technical Reports Server (NTRS)
Tilley, D. G.
1986-01-01
An optical model is developed for imaging radars such as the SAR on Seasat and the Shuttle Imaging Radar (SIR-B) by analyzing the Doppler shift of individual speckles in the image. The signal received at the spacecraft is treated in terms of a Fresnel-Kirchhoff integration over all backscattered radiation within a Huygens aperture at the earth. Account is taken of the movement of the spacecraft along the orbital path between emission and reception. The individual points are described by integration of the point-source amplitude with a Green's function scattering kernel. Doppler data at each point furnish the coordinates for visual representations. A Rayleigh-Poisson model of the surface scattering characteristics is used with Monte Carlo methods to generate simulations of Doppler radar speckle that compare well with Seasat SAR and SIR-B data.
NASA Astrophysics Data System (ADS)
Esfandi, F.; Saramad, S.
2015-07-01
In this work, a new generation of scintillator-based X-ray imagers built from ZnO nanowires in an anodized aluminum oxide (AAO) nanoporous template is characterized. The optical response of ordered ZnO nanowire arrays in a porous AAO template under low-energy X-ray illumination is simulated with the Geant4 Monte Carlo code and compared with experimental results. The results show that for 10 keV X-ray photons, by considering the light-guiding properties of zinc oxide inside the AAO template and a suitable selection of detector thickness and pore diameter, a spatial resolution of less than one micrometer and a detection efficiency of 66% are achievable. This novel nano-scintillator detector can have many advantages for medical applications in the future.
Modeling surface backgrounds from radon progeny plate-out
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumpilly, G.; Guiseppe, V. E.; Snyder, N.
2013-08-08
The next generation of low-background detectors operating deep underground aims for unprecedented low levels of radioactive backgrounds. The surface deposition and subsequent implantation of radon progeny in detector materials will be a source of energetic background events. We investigate Monte Carlo and model-based simulations to understand the surface implantation profile of radon progeny. Depending on the material and the region of interest of a rare event search, these partial energy depositions can be problematic. Motivated by the use of Ge crystals for the detection of neutrinoless double-beta decay, we wish to understand the detector response to surface backgrounds from radon progeny. We look at the simulation of surface decays using a validated implantation distribution based on nuclear recoils and a realistic surface texture. Results of the simulations and measured α spectra are presented.
Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.
Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P
2018-01-04
Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image-degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom 177Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction; e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of 177Lu-DOTATATE treatments revealed clearly improved resolution and contrast.
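The MLEM update that OSEM accelerates by subsetting has a compact multiplicative form, x ← x / (Aᵀ1) · Aᵀ(y / Ax). The CPU toy below uses a tiny dense system matrix in place of the study's Monte Carlo projector:

```python
# Minimal MLEM iteration on a 2-voxel / 3-bin toy system (noiseless data).
import numpy as np

A = np.array([[0.8, 0.2],   # system matrix: P(count in bin i | decay in voxel j)
              [0.1, 0.7],
              [0.1, 0.1]])
x_true = np.array([4.0, 2.0])
y = A @ x_true               # "measured" projections

x = np.ones(2)               # uniform initial estimate
sens = A.sum(axis=0)         # sensitivity image, A^T 1
for _ in range(50):
    x *= (A.T @ (y / (A @ x))) / sens   # multiplicative MLEM update
print(x)                     # converges toward x_true
```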
Boutoux, G; Batani, D; Burgy, F; Ducret, J-E; Forestier-Colleoni, P; Hulin, S; Rabhi, N; Duval, A; Lecherbourg, L; Reverdin, C; Jakubowska, K; Szabo, C I; Bastiani-Ceccotti, S; Consoli, F; Curcio, A; De Angelis, R; Ingenito, F; Baggio, J; Raffestin, D
2016-04-01
Thanks to their high dynamic range and ability to withstand electromagnetic pulses, imaging plates (IPs) are commonly used as passive detectors in laser-plasma experiments. In the framework of the development of the diagnostics for the Petawatt Aquitaine Laser facility, we present an absolute calibration and spatial resolution study of five different available types of IP (namely MS, SR, TR, MP, and ND) performed using laser-induced K-shell X-rays emitted by a solid silver target irradiated by the ECLIPSE laser at the CEntre Lasers Intenses et Applications. In addition, IP sensitivity measurements were performed with a 160 kV X-ray generator at CEA DAM DIF, where the absolute response of the SR and TR IPs was calibrated to X-rays in the energy range 8-75 keV with uncertainties of about 15%. Finally, the response functions were modeled in Monte Carlo GEANT4 simulations in order to reproduce the experimental data. The simulations enable extrapolation of the IP response functions to photon energies from 1 keV to 1 GeV, of interest, e.g., for laser-driven radiography.
García-Pareja, S; Galán, P; Manzano, F; Brualla, L; Lallena, A M
2010-07-01
In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between the Monte Carlo results and the corresponding measurements is within approximately 3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to examine simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
Nonlinear Monte Carlo model of superdiffusive shock acceleration with magnetic field amplification
NASA Astrophysics Data System (ADS)
Bykov, Andrei M.; Ellison, Donald C.; Osipov, Sergei M.
2017-03-01
Fast collisionless shocks in cosmic plasmas convert their kinetic energy flow into the hot downstream thermal plasma with a substantial fraction of energy going into a broad spectrum of superthermal charged particles and magnetic fluctuations. The superthermal particles can penetrate into the shock upstream region, producing an extended shock precursor. The cold upstream plasma flow is decelerated by the force provided by the superthermal particle pressure gradient. In high Mach number collisionless shocks, efficient particle acceleration is likely coupled with turbulent magnetic field amplification (MFA) generated by the anisotropic distribution of accelerated particles. This anisotropy is determined by fast particle transport, making the problem strongly nonlinear and multiscale. Here, we present a nonlinear Monte Carlo model of collisionless shock structure with superdiffusive propagation of high-energy Fermi accelerated particles coupled to particle acceleration and MFA, which affords a consistent description of strong shocks. A distinctive feature of the Monte Carlo technique is that it includes the full angular anisotropy of the particle distribution at all precursor positions. The model reveals that the superdiffusive transport of energetic particles (i.e., Lévy-walk propagation) generates a strong quadrupole anisotropy in the precursor particle distribution. The resultant pressure anisotropy of the high-energy particles produces a nonresonant mirror-type instability that amplifies compressible wave modes with wavelengths longer than the gyroradii of the highest-energy protons produced by the shock.
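Superdiffusive (Lévy-walk) transport of the kind invoked here can be illustrated with a one-dimensional toy: power-law flight lengths with infinite variance give a displacement that grows faster than ordinary diffusion. The tail index and counts below are placeholders, not values from the model:

```python
# 1-D Levy walk toy: Pareto-distributed flight lengths, random directions.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5                      # power-law tail index; 1 < alpha < 2 => superdiffusion
n_particles, n_flights = 5_000, 100

lengths = 1.0 + rng.pareto(alpha, size=(n_particles, n_flights))  # flights >= 1
signs = rng.choice([-1.0, 1.0], size=(n_particles, n_flights))
x = (lengths * signs).sum(axis=1)

# For finite-variance steps the rms would scale like sqrt(n_flights);
# heavy-tailed flights make it grow faster than that.
print("rms displacement:", np.sqrt(np.mean(x ** 2)))
```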
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quaglioni, S.; Beck, B. R.
The Monte Carlo All Particle Method generator and collision physics library features two models that allow a particle to either up- or down-scatter due to collisions with material at finite temperature. The two models are presented and compared. Neutron interaction with matter through elastic collisions is used as the test case.
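A one-dimensional toy of the free-gas effect both models capture: a neutron hitting a thermally moving target can gain energy (up-scatter) as well as lose it. The equal-mass head-on kinematics and the numbers below are our simplifying assumptions, not the library's model:

```python
# 1-D toy: equal-mass head-on elastic collision with a Maxwellian target.
import numpy as np

rng = np.random.default_rng(0)
kT = 0.0253    # room temperature, eV
E_in = 0.05    # incident neutron energy, eV (near thermal)

# Units where kinetic energy E = v**2; a 1-D Maxwellian then has standard
# deviation sqrt(kT/2), so the mean target energy is kT/2.
v_t = rng.normal(0.0, np.sqrt(kT / 2.0), 100_000)

# Equal masses, head-on: velocities are exchanged, so the outgoing neutron
# energy is simply the target's pre-collision energy.
E_out = v_t ** 2
print("up-scatter fraction:", np.mean(E_out > E_in))
```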
Eliminating the rugosity effect from compensated density logs by geometrical response matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flaum, C.; Holenka, J.M.; Case, C.R.
1991-06-01
A theoretical and experimental effort to understand the effects of borehole rugosity on individual detector responses yielded an improved method of processing compensated density logs. Historically, the spine/ribs technique for obtaining borehole and mudcake compensation of dual-detector, gamma-gamma density logs has been very successful as long as the borehole and other environmental effects vary slowly with depth and interest is limited to vertical features broader than several feet. With the increased interest in higher vertical resolution, a more detailed analysis of the effect of quickly varying environmental factors such as rugosity was required. A laboratory setup simulating the effect of rugosity on Schlumberger Litho-Density℠ tools (LDT) was used to study vertical response in the presence of rugosity. The data served as the benchmark for the Monte Carlo models used to generate synthetic density logs in the presence of more complex rugosity patterns. The results provided in this paper show that proper matching of the two detector responses before application of conventional compensation methods can eliminate rugosity effects without degrading the measurement's vertical resolution. The accuracy of the results is as good as that obtained for a parallel mudcake or standoff with the conventional method. Application to both field and synthetic logs confirmed the validity of these results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haugen, Carl C.; Forget, Benoit; Smith, Kord S.
Most high performance computing systems being deployed currently and envisioned for the future are based on heavy parallelism across many computational nodes and many concurrent cores. These heavily parallel systems often have relatively little memory per core but large amounts of computing capability. This places a significant constraint on how data storage is handled in many Monte Carlo codes. The constraint is even more significant in fully coupled multiphysics simulations, which require simulations of many physical phenomena to be carried out concurrently on individual processing nodes, further reducing the amount of memory available for storage of Monte Carlo data. As such, there has been a move towards on-the-fly nuclear data generation to reduce the memory requirements associated with interpolation between pre-generated large nuclear data tables for a selection of system temperatures. Methods have been previously developed and implemented in MIT's OpenMC Monte Carlo code for both the resolved resonance regime and the unresolved resonance regime, but are currently absent for the thermal energy regime. While there are many components involved in generating a thermal neutron scattering cross section on-the-fly, this work will focus on a proposed method for determining the energy and direction of a neutron after a thermal incoherent inelastic scattering event. This work proposes a rejection-sampling-based method using the thermal scattering kernel to determine the correct outgoing energy and angle. The goal of this project is to be able to treat the full S(α,β) kernel for graphite, to assist in high-fidelity simulations of the TREAT reactor at Idaho National Laboratory. The method is, however, sufficiently general to be applicable to other thermal scattering materials, and can be initially validated with the continuous analytic free gas model.
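Generic rejection sampling of the kind the proposed method builds on can be sketched in a few lines; the target density below is a toy stand-in for the S(α,β) kernel, and all names are ours:

```python
# Rejection sampling: draw a candidate x from an easy envelope g, accept it
# with probability f(x) / (M * g(x)), where f(x) <= M * g(x) everywhere.
import random

def sample_rejection(target_pdf, envelope_sample, envelope_pdf, M, rng):
    while True:
        x = envelope_sample(rng)
        if rng.random() * M * envelope_pdf(x) <= target_pdf(x):
            return x

# Toy target on [0, 1]: f(x) = 2x, enveloped by the uniform density g = 1, M = 2.
rng = random.Random(0)
xs = [sample_rejection(lambda x: 2.0 * x, lambda r: r.random(),
                       lambda x: 1.0, 2.0, rng) for _ in range(10_000)]
print(sum(xs) / len(xs))   # ~2/3, the mean of the target density
```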
Aad, G; Abbott, B; Abdallah, J; Abdinov, O; Aben, R; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Abreu, R; Abulaiti, Y; Acharya, B S; Adamczyk, L; Adams, D L; Adelman, J; Adomeit, S; Adye, T; Affolder, A A; Agatonovic-Jovin, T; Agricola, J; Aguilar-Saavedra, J A; Ahlen, S P; Ahmadov, F; Aielli, G; Akerstedt, H; Åkesson, T P A; Akimov, A V; Alberghi, G L; Albert, J; Albrand, S; Alconada Verzini, M J; Aleksa, M; Aleksandrov, I N; Alexa, C; Alexander, G; Alexopoulos, T; Alhroob, M; Alimonti, G; Alio, L; Alison, J; Alkire, S P; Allbrooke, B M M; Allport, P P; Aloisio, A; Alonso, A; Alonso, F; Alpigiani, C; Altheimer, A; Alvarez Gonzalez, B; Álvarez Piqueras, D; Alviggi, M G; Amadio, B T; Amako, K; Amaral Coutinho, Y; Amelung, C; Amidei, D; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Amundsen, G; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anders, J K; Anderson, K J; Andreazza, A; Andrei, V; Angelidakis, S; Angelozzi, I; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A V; Anjos, N; Annovi, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Arabidze, G; Arai, Y; Araque, J P; Arce, A T H; Arduh, F A; Arguin, J-F; Argyropoulos, S; Arik, M; Armbruster, A J; Arnaez, O; Arnold, H; Arratia, M; Arslan, O; Artamonov, A; Artoni, G; Artz, S; Asai, S; Asbah, N; Ashkenazi, A; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Atkinson, M; Atlay, N B; Augsten, K; Aurousseau, M; Avolio, G; Axen, B; Ayoub, M K; Azuelos, G; Baak, M A; Baas, A E; Baca, M J; Bacci, C; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Bagiacchi, P; Bagnaia, P; Bai, Y; Bain, T; Baines, J T; Baker, O K; Baldin, E M; Balek, P; Balestri, T; Balli, F; Balunas, W K; Banas, E; Banerjee, Sw; Bannoura, A A E; Barak, L; Barberio, E L; Barberis, D; Barbero, M; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnes, S L; Barnett, B M; Barnett, R M; Barnovska, Z; Baroncelli, A; Barone, G; Barr, A J; Barreiro, F; Barreiro Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartos, P; Basalaev, A; Bassalat, A; Basye, A; Bates, R L; Batista, S J; Batley, J R; Battaglia, M; Bauce, M; Bauer, F; Bawa, H S; Beacham, J B; Beattie, M D; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, M; Beckingham, M; Becot, C; Beddall, A J; Beddall, A; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behr, J K; Belanger-Champagne, C; Bell, W H; Bella, G; Bellagamba, L; Bellerive, A; Bellomo, M; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bender, M; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Bensinger, J R; Bentvelsen, S; Beresford, L; Beretta, M; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Beringer, J; Bernard, C; Bernard, N R; Bernius, C; Bernlochner, F U; Berry, T; Berta, P; Bertella, C; Bertoli, G; Bertolucci, F; Bertsche, C; Bertsche, D; Besana, M I; Besjes, G J; Bessidskaia Bylund, O; Bessner, M; Besson, N; Betancourt, C; Bethke, S; Bevan, A J; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Biedermann, D; Biesuz, N V; Biglietti, M; Bilbao De Mendizabal, J; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Biondi, S; Bjergaard, D M; Black, C W; Black, J E; Black, K M; Blackburn, D; Blair, R E; Blanchard, J-B; Blanco, J E; Blazek, T; Bloch, I; Blocker, C; Blum, W; Blumenschein, U; Blunier, S; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Bock, C; Boehler, M; Bogaerts, J A; Bogavac, D; Bogdanchikov, A G; Bohm, C; Boisvert, V; Bold, T; Boldea, 
V; Boldyrev, A S; Bomben, M; Bona, M; Boonekamp, M; Borisov, A; Borissov, G; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boudreau, J; Bouffard, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boutle, S K; Boveia, A; Boyd, J; Boyko, I R; Bozic, I; Bracinik, J; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Breaden Madden, W D; Brendlinger, K; Brennan, A J; Brenner, L; Brenner, R; Bressler, S; Bristow, T M; Britton, D; Britzger, D; Brochu, F M; Brock, I; Brock, R; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brosamer, J; Brost, E; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Bruni, A; Bruni, G; Bruschi, M; Bruscino, N; Bryngemark, L; Buanes, T; Buat, Q; Buchholz, P; Buckley, A G; Budagov, I A; Buehrer, F; Bugge, L; Bugge, M K; Bulekov, O; Bullock, D; Burckhart, H; Burdin, S; Burgard, C D; Burghgrave, B; Burke, S; Burmeister, I; Busato, E; Büscher, D; Büscher, V; Bussey, P; Butler, J M; Butt, A I; Buttar, C M; Butterworth, J M; Butti, P; Buttinger, W; Buzatu, A; Buzykaev, A R; Cabrera Urbán, S; Caforio, D; Cairo, V M; Cakir, O; Calace, N; Calafiura, P; Calandri, A; Calderini, G; Calfayan, P; Caloba, L P; Calvet, D; Calvet, S; Camacho Toro, R; Camarda, S; Camarri, P; Cameron, D; Caminal Armadans, R; Campana, S; Campanelli, M; Campoverde, A; Canale, V; Canepa, A; Cano Bret, M; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capua, M; Caputo, R; Carbone, R M; Cardarelli, R; Cardillo, F; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Casolino, M; Casper, D W; Castaneda-Miranda, E; Castelli, A; Castillo Gimenez, V; Castro, N F; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Caudron, J; Cavaliere, V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerda Alberich, L; Cerio, B C; Cerny, K; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cerv, M; Cervelli, A; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chan, Y L; Chang, P; Chapman, J D; Charlton, D G; Chau, C C; Chavez Barajas, C A; Cheatham, S; Chegwidden, A; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, K; Chen, L; Chen, S; Chen, S; Chen, X; Chen, Y; Cheng, H C; Cheng, Y; Cheplakov, A; Cheremushkina, E; Cherkaoui El Moursli, R; Chernyatin, V; Cheu, E; Chevalier, L; Chiarella, V; Chiarelli, G; Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Choi, K; Chouridou, S; Chow, B K B; Christodoulou, V; Chromek-Burckhart, D; Chudoba, J; Chuinard, A J; Chwastowski, J J; Chytka, L; Ciapetti, G; Ciftci, A K; Cinca, D; Cindro, V; Cioara, I A; Ciocio, A; Cirotto, F; Citron, Z H; Ciubancan, M; Clark, A; Clark, B L; Clark, P J; Clarke, R N; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Colasurdo, L; Cole, B; Cole, S; Colijn, A P; Collot, J; Colombo, T; Compostella, G; Conde Muiño, P; Coniavitis, E; Connell, S H; Connelly, I A; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cornelissen, T; Corradi, M; Corriveau, F; Corso-Radu, A; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Cowan, G; Cox, B E; Cranmer, K; Cree, G; Crépé-Renaudin, S; Crescioli, F; Cribbs, W A; Crispin Ortuzar, M; Cristinziani, M; Croft, V; Crosetti, G; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Cúth, J; Cuthbert, C; Czirr, H; Czodrowski, P; D'Auria, S; D'Onofrio, M; Da Cunha 
Sargedas De Sousa, M J; Da Via, C; Dabrowski, W; Dafinca, A; Dai, T; Dale, O; Dallaire, F; Dallapiccola, C; Dam, M; Dandoy, J R; Dang, N P; Daniells, A C; Danninger, M; Dano Hoffmann, M; Dao, V; Darbo, G; Darmora, S; Dassoulas, J; Dattagupta, A; Davey, W; David, C; Davidek, T; Davies, E; Davies, M; Davison, P; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Benedetti, A; De Castro, S; De Cecco, S; De Groot, N; de Jong, P; De la Torre, H; De Lorenzi, F; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; Dearnaley, W J; Debbe, R; Debenedetti, C; Dedovich, D V; Deigaard, I; Del Peso, J; Del Prete, T; Delgove, D; Deliot, F; Delitzsch, C M; Deliyergiyev, M; Dell'Acqua, A; Dell'Asta, L; Dell'Orso, M; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; DeMarco, D A; Demers, S; Demichev, M; Demilly, A; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deterre, C; Dette, K; Deviveiros, P O; Dewhurst, A; Dhaliwal, S; Di Ciaccio, A; Di Ciaccio, L; Di Domenico, A; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Di Valentino, D; Diaconu, C; Diamond, M; Dias, F A; Diaz, M A; Diehl, E B; Dietrich, J; Diglio, S; Dimitrievska, A; Dingfelder, J; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; Djuvsland, J I; do Vale, M A B; Dobos, D; Dobre, M; Doglioni, C; Dohmae, T; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donati, S; Dondero, P; Donini, J; Dopke, J; Doria, A; Dova, M T; Doyle, A T; Drechsler, E; Dris, M; Du, Y; Dubreuil, E; Duchovni, E; Duckeck, G; Ducu, O A; Duda, D; Dudarev, A; Duflot, L; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Durglishvili, A; Duschinger, D; Dutta, B; Dyndal, M; Eckardt, C; Ecker, K M; Edgar, R C; Edson, W; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Elliot, A A; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Endner, O C; Endo, M; Erdmann, J; Ereditato, A; Ernis, G; Ernst, J; Ernst, M; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Esposito, B; Etienvre, A I; Etzion, E; Evans, H; Ezhilov, A; Fabbri, L; Facini, G; Fakhrutdinov, R M; Falciano, S; Falla, R J; Faltova, J; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Faucci Giannelli, M; Favareto, A; Fayard, L; Fedin, O L; Fedorko, W; Feigl, S; Feligioni, L; Feng, C; Feng, E J; Feng, H; Fenyuk, A B; Feremenga, L; Fernandez Martinez, P; Fernandez Perez, S; Ferrando, J; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filipuzzi, M; Filthaut, F; Fincke-Keeler, M; Finelli, K D; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, A; Fischer, C; Fischer, J; Fisher, W C; Flaschel, N; Fleck, I; Fleischmann, P; Fletcher, G T; Fletcher, G; Fletcher, R R M; Flick, T; Floderus, A; Flores Castillo, L R; Flowerdew, M J; Formica, A; Forti, A; Fournier, D; Fox, H; Fracchia, S; Francavilla, P; Franchini, M; Francis, D; Franconi, L; Franklin, M; Frate, M; Fraternali, M; Freeborn, D; French, S T; Fressard-Batraneanu, S M; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fusayasu, T; Fuster, J; Gabaldon, C; Gabizon, O; Gabrielli, A; Gabrielli, A; Gach, G P; Gadatsch, S; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; 
Galhardo, B; Gallas, E J; Gallop, B J; Gallus, P; Galster, G; Gan, K K; Gao, J; Gao, Y; Gao, Y S; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudiello, A; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geich-Gimbel, Ch; Geisler, M P; Gemme, C; Genest, M H; Geng, C; Gentile, S; George, M; George, S; Gerbaudo, D; Gershon, A; Ghasemi, S; Ghazlane, H; Giacobbe, B; Giagu, S; Giangiobbe, V; Giannetti, P; Gibbard, B; Gibson, S M; Gignac, M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gilles, G; Gingrich, D M; Giokaris, N; Giordani, M P; Giorgi, F M; Giorgi, F M; Giraud, P F; Giromini, P; Giugni, D; Giuliani, C; Giulini, M; Gjelsten, B K; Gkaitatzis, S; Gkialas, I; Gkougkousis, E L; Gladilin, L K; Glasman, C; Glatzer, J; Glaysher, P C F; Glazov, A; Goblirsch-Kolb, M; Goddard, J R; Godlewski, J; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez-Sevilla, S; Goossens, L; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Goujdami, D; Goussiou, A G; Govender, N; Gozani, E; Graber, L; Grabowska-Bold, I; Gradin, P O J; Grafström, P; Gramling, J; Gramstad, E; Grancagnolo, S; Gratchev, V; Gray, H M; Graziani, E; Greenwood, Z D; Grefe, C; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grivaz, J-F; Groh, S; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Grossi, G C; Grout, Z J; Guan, L; Guenther, J; Guescini, F; Guest, D; Gueta, O; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gumpert, C; Guo, J; Guo, Y; Gupta, S; Gustavino, G; Gutierrez, P; Gutierrez Ortiz, N G; Gutschow, C; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haber, C; Hadavand, H K; Haddad, N; Haefner, P; Hageböck, S; Hajduk, Z; Hakobyan, H; Haleem, M; Haley, J; Hall, D; Halladjian, G; Hallewell, G D; Hamacher, K; Hamal, P; Hamano, K; Hamilton, A; Hamity, G N; Hamnett, P G; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Haney, B; Hanke, P; Hanna, R; Hansen, J B; Hansen, J D; Hansen, M C; Hansen, P H; Hara, K; Hard, A S; Harenberg, T; Hariri, F; Harkusha, S; Harrington, R D; Harrison, P F; Hartjes, F; Hasegawa, M; Hasegawa, Y; Hasib, A; Hassani, S; Haug, S; Hauser, R; Hauswald, L; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayashi, T; Hayden, D; Hays, C P; Hays, J M; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heim, T; Heinemann, B; Heinrich, L; Hejbal, J; Helary, L; Hellman, S; Helsens, C; Henderson, J; Henderson, R C W; Heng, Y; Hengler, C; Henkelmann, S; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Herbert, G H; Hernández Jiménez, Y; Herten, G; Hertenberger, R; Hervas, L; Hesketh, G G; Hessey, N P; Hetherly, J W; Hickling, R; Higón-Rodriguez, E; Hill, E; Hill, J C; Hiller, K H; Hillier, S J; Hinchliffe, I; Hines, E; Hinman, R R; Hirose, M; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoenig, F; Hohlfeld, M; Hohn, D; Holmes, T R; Homann, M; Hong, T M; Hooberman, B H; Hopkins, W H; Horii, Y; Horton, A J; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hrynevich, A; Hsu, C; Hsu, P J; Hsu, S-C; Hu, D; Hu, Q; Hu, X; Huang, Y; Hubacek, Z; Hubaut, F; Huegging, F; Huffman, T B; Hughes, E 
W; Hughes, G; Huhtinen, M; Hülsing, T A; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibragimov, I; Iconomidou-Fayard, L; Ideal, E; Idrissi, Z; Iengo, P; Igonkina, O; Iizawa, T; Ikegami, Y; Ikeno, M; Ilchenko, Y; Iliadis, D; Ilic, N; Ince, T; Introzzi, G; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Iturbe Ponce, J M; Iuppa, R; Ivarsson, J; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jabbar, S; Jackson, B; Jackson, M; Jackson, P; Jaekel, M R; Jain, V; Jakobi, K B; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansky, R; Janssen, J; Janus, M; Jarlskog, G; Javadov, N; Javůrek, T; Jeanty, L; Jejelava, J; Jeng, G-Y; Jennens, D; Jenni, P; Jentzsch, J; Jeske, C; Jézéquel, S; Ji, H; Jia, J; Jiang, Y; Jiggins, S; Jimenez Pena, J; Jin, S; Jinaru, A; Jinnouchi, O; Joergensen, M D; Johansson, P; Johns, K A; Johnson, W J; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Jongmanns, J; Jorge, P M; Joshi, K D; Jovicevic, J; Ju, X; Juste Rozas, A; Kaci, M; Kaczmarska, A; Kado, M; Kagan, H; Kagan, M; Kahn, S J; Kajomovitz, E; Kalderon, C W; Kaluza, A; Kama, S; Kamenshchikov, A; Kanaya, N; Kaneti, S; Kantserov, V A; Kanzaki, J; Kaplan, B; Kaplan, L S; Kapliy, A; Kar, D; Karakostas, K; Karamaoun, A; Karastathis, N; Kareem, M J; Karentzos, E; Karnevskiy, M; Karpov, S N; Karpova, Z M; Karthik, K; Kartvelishvili, V; Karyukhin, A N; Kasahara, K; Kashif, L; Kass, R D; Kastanas, A; Kataoka, Y; Kato, C; Katre, A; Katzy, J; Kawade, K; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Keeler, R; Kehoe, R; Keller, J S; Kempster, J J; Keoshkerian, H; Kepka, O; Kerševan, B P; Kersten, S; Keyes, R A; Khalil-Zada, F; Khandanyan, H; Khanov, A; Kharlamov, A G; Khoo, T J; Khovanskiy, V; Khramov, E; Khubua, J; Kido, S; Kim, H Y; Kim, S H; Kim, Y K; Kimura, N; Kind, O M; King, B T; King, M; King, S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kiss, F; Kiuchi, K; Kivernyk, O; Kladiva, E; Klein, M H; Klein, M; Klein, U; Kleinknecht, K; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klioutchnikova, T; Kluge, E-E; Kluit, P; Kluth, S; Knapik, J; Kneringer, E; Knoops, E B F G; Knue, A; Kobayashi, A; Kobayashi, D; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Kolb, M; Koletsou, I; Komar, A A; Komori, Y; Kondo, T; Kondrashova, N; Köneke, K; König, A C; Kono, T; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A A; Korolkov, I; Korolkova, E V; Kortner, O; Kortner, S; Kosek, T; Kostyukhin, V V; Kotov, V M; Kotwal, A; Kourkoumeli-Charalampidi, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kramarenko, V A; Kramberger, G; Krasnopevtsev, D; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Kretz, M; Kretzschmar, J; Kreutzfeldt, K; Krieger, P; Krizka, K; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Krumnack, N; Kruse, A; Kruse, M C; Kruskal, M; Kubota, T; Kucuk, H; Kuday, S; Kuehn, S; Kugel, A; Kuger, F; Kuhl, A; Kuhl, T; Kukhtin, V; Kukla, R; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunigo, T; Kupco, A; Kurashige, H; Kurochkin, Y A; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; Kwan, T; Kyriazopoulos, D; La Rosa, A; La Rosa Navarro, J L; La Rotonda, L; Lacasta, C; Lacava, F; Lacey, J; 
Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Lambourne, L; Lammers, S; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lange, J C; Lankford, A J; Lanni, F; Lantzsch, K; Lanza, A; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Lasagni Manghi, F; Lassnig, M; Laurelli, P; Lavrijsen, W; Law, A T; Laycock, P; Lazovich, T; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeBlanc, M; LeCompte, T; Ledroit-Guillon, F; Lee, C A; Lee, S C; Lee, L; Lefebvre, G; Lefebvre, M; Legger, F; Leggett, C; Lehan, A; Lehmann Miotto, G; Lei, X; Leight, W A; Leisos, A; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Leney, K J C; Lenz, T; Lenzi, B; Leone, R; Leone, S; Leonidopoulos, C; Leontsinis, S; Leroy, C; Lester, C G; Levchenko, M; Levêque, J; Levin, D; Levinson, L J; Levy, M; Lewis, A; Leyko, A M; Leyton, M; Li, B; Li, H; Li, H L; Li, L; Li, L; Li, S; Li, X; Li, Y; Liang, Z; Liao, H; Liberti, B; Liblong, A; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Lin, S C; Lin, T H; Linde, F; Lindquist, B E; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, B; Liu, D; Liu, H; Liu, J; Liu, J B; Liu, K; Liu, L; Liu, M; Liu, M; Liu, Y; Livan, M; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loebinger, F K; Loevschall-Jensen, A E; Loew, K M; Loginov, A; Lohse, T; Lohwasser, K; Lokajicek, M; Long, B A; Long, J D; Long, R E; Looper, K A; Lopes, L; Lopez Mateos, D; Lopez Paredes, B; Lopez Paz, I; Lorenz, J; Lorenzo Martinez, N; Losada, M; Lösel, P J; Lou, X; Lounis, A; Love, J; Love, P A; Lu, H; Lu, N; Lubatti, H J; Luci, C; Lucotte, A; Luedtke, C; Luehring, F; Lukas, W; Luminari, L; Lundberg, O; Lund-Jensen, B; Lynn, D; Lysak, R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Macdonald, C M; Maček, B; Machado Miguens, J; Macina, D; Madaffari, D; Madar, R; Maddocks, H J; Mader, W F; Madsen, A; Maeda, J; Maeland, S; Maeno, T; Maevskiy, A; Magradze, E; Mahboubi, K; Mahlstedt, J; Maiani, C; Maidantchik, C; Maier, A A; Maier, T; Maio, A; Majewski, S; Makida, Y; Makovec, N; Malaescu, B; Malecki, Pa; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V M; Malyukov, S; Mamuzic, J; Mancini, G; Mandelli, B; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manhaes de Andrade Filho, L; Manjarres Ramos, J; Mann, A; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mantoani, M; Mapelli, L; March, L; Marchiori, G; Marcisovsky, M; Marino, C P; Marjanovic, M; Marley, D E; Marroquim, F; Marsden, S P; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, M; Martin-Haugh, S; Martoiu, V S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massa, L; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Mättig, P; Mattmann, J; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazza, S M; Mc Goldrick, G; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; McMahon, S J; McPherson, R A; Medinnis, M; Meehan, S; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Mellado Garcia, B R; Meloni, F; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Mergelmeyer, S; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Meyer Zu Theenhausen, H; 
Middleton, R P; Miglioranzi, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Milesi, M; Milic, A; Miller, D W; Mills, C; Milov, A; Milstead, D A; Minaenko, A A; Minami, Y; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mistry, K P; Mitani, T; Mitrevski, J; Mitsou, V A; Miucci, A; Miyagawa, P S; Mjörnmark, J U; Moa, T; Mochizuki, K; Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Monden, R; Mondragon, M C; Mönig, K; Monini, C; Monk, J; Monnier, E; Montalbano, A; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Morange, N; Moreno, D; Moreno Llácer, M; Morettini, P; Mori, D; Mori, T; Morii, M; Morinaga, M; Morisbak, V; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Mortensen, S S; Morton, A; Morvaj, L; Mosidze, M; Moss, J; Motohashi, K; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, R S P; Mueller, T; Muenstermann, D; Mullen, P; Mullier, G A; Munoz Sanchez, F J; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Musto, E; Myagkov, A G; Myska, M; Nachman, B P; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagata, K; Nagel, M; Nagy, E; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Naranjo Garcia, R F; Narayan, R; Narrias Villar, D I; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Nef, P D; Negri, A; Negrini, M; Nektarijevic, S; Nellist, C; Nelson, A; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolopoulos, K; Nilsen, J K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Nooney, T; Norberg, S; Nordberg, M; Novgorodova, O; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nuti, F; O'grady, F; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, I; Ochoa-Ricoux, J P; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Oide, H; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olivares Pino, S A; Oliveira Damazio, D; Olszewski, A; Olszowska, J; Onofre, A; Onogi, K; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Otero Y Garzon, G; Otono, H; Ouchrif, M; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Owen, R E; Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Padilla Aranda, C; Pagáčová, M; Pagan Griso, S; Paganis, E; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Palka, M; Pallin, D; Palma, A; Pan, Y B; Panagiotopoulou, E St; Pandini, C E; Panduro Vazquez, J G; Pani, P; Panitkin, S; Pantea, D; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; Paredes Hernandez, D; Parker, M A; Parker, K A; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passaggio, S; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Pauly, T; Pearce, J; Pearson, B; Pedersen, L E; Pedersen, M; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Pelikan, D; Penc, O; Peng, C; Peng, H; Penning, B; Penwell, J; Perepelitsa, D V; Perez Codina, E; Pérez García-Estañ, M T; Perini, L; Pernegger, H; Perrella, S; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, T C; Petit, E; Petridis, A; Petridou, C; 
Petroff, P; Petrolo, E; Petrucci, F; Pettersson, N E; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Pickering, M A; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pin, A W J; Pina, J; Pinamonti, M; Pinfold, J L; Pingel, A; Pires, S; Pirumov, H; Pitt, M; Pizio, C; Plazak, L; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Pluth, D; Poettgen, R; Poggioli, L; Pohl, D; Polesello, G; Poley, A; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pozo Astigarraga, M E; Pralavorio, P; Pranko, A; Prasad, S; Prell, S; Price, D; Price, L E; Primavera, M; Prince, S; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, J; Przybycien, M; Ptacek, E; Puddu, D; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quarrie, D R; Quayle, W B; Queitsch-Maitland, M; Quilty, D; Raddum, S; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Rajagopalan, S; Rammensee, M; Rangel-Smith, C; Rauscher, F; Rave, S; Ravenscroft, T; Raymond, M; Read, A L; Readioff, N P; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reichert, J; Reisin, H; Rembser, C; Ren, H; Renaud, A; Rescigno, M; Resconi, S; Rezanova, O L; Reznicek, P; Rezvani, R; Richter, R; Richter, S; Richter-Was, E; Ricken, O; Ridel, M; Rieck, P; Riegel, C J; Rieger, J; Rifki, O; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ristić, B; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Roe, S; Røhne, O; Romaniouk, A; Romano, M; Romano Saez, S M; Romero Adam, E; Rompotis, N; Ronzani, M; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, P; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, J H N; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Russell, H L; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Ryzhov, A; Saavedra, A F; Sabato, G; Sacerdoti, S; Saddique, A; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Saha, P; Sahinsoy, M; Saimpert, M; Saito, T; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Salazar Loyola, J E; Saleem, M; Salek, D; Sales De Bruin, P H; Salihagic, D; Salnikov, A; Salt, J; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sammel, D; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, C; Sandstroem, R; Sankey, D P C; Sannino, M; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Sapronov, A; Saraiva, J G; Sarrazin, B; Sasaki, O; Sasaki, Y; Sato, K; Sauvage, G; Sauvan, E; Savage, G; Savard, P; Sawyer, C; Sawyer, L; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Scarfone, V; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaeffer, J; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Schiavi, C; Schillo, C; Schioppa, M; Schlenker, S; Schmieden, K; Schmitt, C; Schmitt, S; Schmitt, S; Schmitz, S; Schneider, B; Schnellbach, Y J; 
Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schopf, E; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schramm, S; Schreyer, M; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwarz, T A; Schwegler, Ph; Schweiger, H; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Scifo, E; Sciolla, G; Scuri, F; Scutti, F; Searcy, J; Sedov, G; Sedykh, E; Seema, P; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekhon, K; Sekula, S J; Seliverstov, D M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Sessa, M; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shaw, S M; Shcherbakova, A; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Shoaleh Saadi, D; Shochet, M J; Shojaii, S; Shrestha, S; Shulga, E; Shupe, M A; Sicho, P; Sidebo, P E; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silver, Y; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simon, D; Simon, M; Sinervo, P; Sinev, N B; Sioli, M; Siragusa, G; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinner, M B; Skottowe, H P; Skubic, P; Slater, M; Slavicek, T; Slawinska, M; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, M N K; Smith, R W; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Sokhrannyi, G; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, B; Sopko, V; Sorin, V; Sosa, D; Sosebee, M; Sotiropoulou, C L; Soualah, R; Soukharev, A M; South, D; Sowden, B C; Spagnolo, S; Spalla, M; Spangenberg, M; Spanò, F; Spearman, W R; Sperlich, D; Spettel, F; Spighi, R; Spigo, G; Spiller, L A; Spousta, M; St Denis, R D; Stabile, A; Staerz, S; Stahlman, J; Stamen, R; Stamm, S; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramaniam, R; Succurro, A; Suchek, S; Sugaya, Y; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, S; Svatos, M; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Taccini, C; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tannenwald, B B; Tapia Araya, S; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, A C; Taylor, F E; Taylor, G N; Taylor, P T E; Taylor, W; Teischinger, F A; Teixeira-Dias, P; Temming, K K; Temple, D; Ten Kate, H; Teng, P K; Teoh, J J; Tepel, F; Terada, S; Terashi, K; Terron, J; Terzo, S; Testa, M; 
Teuscher, R J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thun, R P; Tibbetts, M J; Ticse Torres, R E; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todome, K; Todorov, T; Todorova-Nova, S; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; Truong, L; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsui, K M; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turra, R; Turvey, A J; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Ueda, I; Ueno, R; Ughetto, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urban, J; Urquijo, P; Urrejola, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valderanis, C; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Vallecorsa, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloce, L M; Veloso, F; Velz, T; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Vivarelli, I; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Washbrook, A; Wasicki, C; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; Wharton, A M; White, A; White, M J; White, R; White, S; Whiteson, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wildauer, A; Wilkens, H G; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, A; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winter, B T; Wittgen, M; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wu, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wyatt, T R; Wynne, B M; Xella, S; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yakabe, R; Yamada, 
M; Yamaguchi, D; Yamaguchi, Y; Yamamoto, A; Yamamoto, S; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, Y; Yao, W-M; Yap, Y C; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yuen, S P Y; Yurkewicz, A; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; Zalieckas, J; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zeng, J C; Zeng, Q; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zhang, D; Zhang, F; Zhang, G; Zhang, H; Zhang, J; Zhang, L; Zhang, R; Zhang, X; Zhang, Z; Zhao, X; Zhao, Y; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, C; Zhou, L; Zhou, L; Zhou, M; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, S; Zinonos, Z; Zinser, M; Ziolkowski, M; Živković, L; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zurzolo, G; Zwalinski, L
2016-01-01
Distributions of transverse momentum p_T^ℓℓ and the related angular variable φ*_η of Drell–Yan lepton pairs are measured in 20.3 fb⁻¹ of proton–proton collisions at √s = 8 TeV with the ATLAS detector at the LHC. Measurements in electron-pair and muon-pair final states are corrected for detector effects and combined. Compared to previous measurements in proton–proton collisions at √s = 7 TeV, these new measurements benefit from a larger data sample and improved control of systematic uncertainties. Measurements are performed in bins of lepton-pair mass above, around and below the Z-boson mass peak. The data are compared to predictions from perturbative and resummed QCD calculations. For values of φ*_η < 1 the predictions from the Monte Carlo generator ResBos are generally consistent with the data within the theoretical uncertainties. However, at larger values of φ*_η this is not the case. Monte Carlo generators based on the parton-shower approach are unable to describe the data over the full range of p_T^ℓℓ while the fixed-order prediction of Dynnlo falls below the data at high values of p_T^ℓℓ. ResBos and the parton-shower Monte Carlo generators provide a much better description of the evolution of the φ*_η and p_T^ℓℓ distributions as a function of lepton-pair mass and rapidity than the basic shape of the data.
Aad, G.; Abbott, B.; Abdallah, J.; ...
2016-05-23
Distributions of transverse momentum p_T^ℓℓ and the related angular variable φ*_η of Drell–Yan lepton pairs are measured in 20.3 fb⁻¹ of proton–proton collisions at √s = 8 TeV with the ATLAS detector at the LHC. Measurements in electron-pair and muon-pair final states are corrected for detector effects and combined. Compared to previous measurements in proton–proton collisions at √s = 7 TeV these new measurements benefit from a larger data sample and improved control of systematic uncertainties. Measurements are performed in bins of lepton-pair mass above, around and below the Z-boson mass peak. The data are compared to predictions from perturbative and resummed QCD calculations. For values of φ*_η < 1 the predictions from the Monte Carlo generator ResBos are generally consistent with the data within the theoretical uncertainties. However, at larger values of φ*_η this is not the case. Monte Carlo generators based on the parton-shower approach are unable to describe the data over the full range of p_T^ℓℓ while the fixed-order prediction of Dynnlo falls below the data at high values of p_T^ℓℓ. Here, ResBos and the parton-shower Monte Carlo generators provide a much better description of the evolution of the φ*_η and p_T^ℓℓ distributions as a function of lepton-pair mass and rapidity than the basic shape of the data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abbott, B.; Abdallah, J.
2016-05-23
Distributions of transverse momentum p_T^ℓℓ and the related angular variable φ*_η of Drell–Yan lepton pairs are measured in 20.3 fb⁻¹ of proton–proton collisions at √s = 8 TeV with the ATLAS detector at the LHC. Measurements in electron-pair and muon-pair final states are corrected for detector effects and combined. Compared to previous measurements in proton–proton collisions at √s = 7 TeV, these new measurements benefit from a larger data sample and improved control of systematic uncertainties. Measurements are performed in bins of lepton-pair mass above, around and below the Z-boson mass peak. The data are compared to predictions from perturbative and resummed QCD calculations. For values of φ*_η < 1 the predictions from the Monte Carlo generator ResBos are generally consistent with the data within the theoretical uncertainties. However, at larger values of φ*_η this is not the case. Monte Carlo generators based on the parton-shower approach are unable to describe the data over the full range of p_T^ℓℓ while the fixed-order prediction of Dynnlo falls below the data at high values of p_T^ℓℓ. ResBos and the parton-shower Monte Carlo generators provide a much better description of the evolution of the φ*_η and p_T^ℓℓ distributions as a function of lepton-pair mass and rapidity than the basic shape of the data.
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suited to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with GATE/GEANT4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. The optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
Pseudo-random properties of a linear congruential generator investigated by b-adic diaphony
NASA Astrophysics Data System (ADS)
Stoev, Peter; Stoilova, Stanislava
2017-12-01
In this paper we continue the study of the diaphony defined in the b-adic number system and extend it in different directions. We investigate this diaphony as a tool for estimating the pseudorandom properties of some of the most widely used random number generators. This is done by evaluating the distribution of specially constructed two-dimensional nets built from the generated random numbers. The aim is to assess how suitable the generated numbers are for calculations in numerical methods such as Monte Carlo.
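As a rough illustration of the kind of test the abstract describes (not the b-adic diaphony itself), the sketch below builds two-dimensional nets from consecutive outputs of a linear congruential generator and applies a simple chi-square uniformity check; the generator constants and grid size are arbitrary choices for the example.

```python
import numpy as np

def lcg(n, seed=1, a=1664525, c=1013904223, m=2**32):
    """Classic 32-bit linear congruential generator (Numerical Recipes constants)."""
    out = np.empty(n)
    x = seed
    for i in range(n):
        x = (a * x + c) % m
        out[i] = x / m
    return out

# Build the two-dimensional net from consecutive pairs, as in serial tests.
u = lcg(100_000)
pairs = np.column_stack([u[:-1], u[1:]])

# Crude uniformity check on a 16x16 grid: chi-square against the uniform law.
counts, _, _ = np.histogram2d(pairs[:, 0], pairs[:, 1],
                              bins=16, range=[[0, 1], [0, 1]])
expected = pairs.shape[0] / 16**2
chi2 = ((counts - expected) ** 2 / expected).sum()
print(f"chi-square over 256 cells: {chi2:.1f} (expect ~255 for good uniformity)")
```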
Estimating a Noncompensatory IRT Model Using Metropolis within Gibbs Sampling
ERIC Educational Resources Information Center
Babcock, Ben
2011-01-01
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
Reliability of Test Scores in Nonparametric Item Response Theory.
ERIC Educational Resources Information Center
Sijtsma, Klaas; Molenaar, Ivo W.
1987-01-01
Three methods for estimating reliability are studied within the context of nonparametric item response theory. Two were proposed originally by Mokken and a third is developed in this paper. Using a Monte Carlo strategy, these three estimation methods are compared with four "classical" lower bounds to reliability. (Author/JAZ)
Equilibrium Molecular Thermodynamics from Kirkwood Sampling
2015-01-01
We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high-dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide. PMID:25915525
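The first method described above is an independence-type Metropolis sampler: trial states are drawn from a fixed distribution (the Kirkwood distribution in the paper) and accepted with the standard ratio. A minimal one-dimensional sketch, with a toy double-well energy and a Gaussian stand-in for the Kirkwood proposal (both assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0                        # inverse temperature, arbitrary units

def energy(x):
    """Toy one-dimensional double-well potential standing in for a molecular energy."""
    return (x**2 - 1.0)**2

def proposal_sample():
    """Stand-in for a Kirkwood draw: independent samples from a broad Gaussian."""
    return rng.normal(0.0, 2.0)

def proposal_logpdf(x):
    return -0.5 * (x / 2.0)**2    # log-density up to a constant

x = proposal_sample()
samples = []
for _ in range(50_000):
    y = proposal_sample()         # trial move, independent of the current state
    # Metropolis ratio for an independence sampler:
    # pi(y) q(x) / (pi(x) q(y)) with pi = exp(-beta * E)
    log_acc = (-beta * energy(y) + proposal_logpdf(x)) \
            - (-beta * energy(x) + proposal_logpdf(y))
    if np.log(rng.random()) < log_acc:
        x = y
    samples.append(x)
print("mean energy:", np.mean([energy(s) for s in samples[10_000:]]))
```

The efficiency of this sampler hinges, exactly as the abstract states, on the overlap between the proposal and the target distribution.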
The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad
The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudorandom number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent sequence method, the combination of the middle-square method and a chaotic map along with the Xorshift PRNG has been employed. Implementation of our developed PPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs, such as those of MATLAB, FORTRAN and the Park-Miller algorithm, by employing standard statistical tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
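For reference, a scalar version of the xorshift family mentioned above fits in a few lines. The sketch below (Python, with explicit 64-bit masking) shows Marsaglia's xorshift64 step only, not the paper's GPU-specific combination with the middle-square method and a chaotic map:

```python
MASK64 = (1 << 64) - 1

def xorshift64(state):
    """One step of Marsaglia's xorshift64; returns (new_state, uniform in [0,1))."""
    state ^= (state << 13) & MASK64
    state ^= state >> 7
    state ^= (state << 17) & MASK64
    return state, state / 2**64

s = 0x9E3779B97F4A7C15            # arbitrary nonzero seed
for _ in range(3):
    s, u = xorshift64(s)
    print(u)
```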
Monte Carlo modeling of ion chamber performance using MCNP.
Wallace, J D
2012-12-01
Ion chambers have a generally flat energy response, with some deviations at very low (<100 keV) and very high (>2 MeV) energies. Some improvement in the low energy response can be achieved through the use of high atomic number gases, such as argon and xenon, and higher chamber pressures. This work examines the energy response of high pressure xenon-filled ion chambers using the MCNP Monte Carlo package to develop geometric models of a commercially available high pressure ion chamber (HPIC). The use of the F6 tally as an estimator of the energy deposited per unit mass in a region of interest, and the underlying assumptions associated with its use, are described. The effects of gas composition, chamber gas pressure, chamber wall thickness, and chamber holder wall thickness on energy response are investigated and reported. The predicted energy response curve for the HPIC was found to be similar to that reported by other investigators. These investigations indicate that improvements to flatten the overall energy response of the HPIC down to 70 keV could be achieved through the use of 3 mm-thick stainless steel walls for the ion chamber.
Particle Methods for Simulating Atomic Radiation in Hypersonic Reentry Flows
NASA Astrophysics Data System (ADS)
Ozawa, T.; Wang, A.; Levin, D. A.; Modest, M.
2008-12-01
With a fast reentry speed, the Stardust vehicle generates a strong shock region ahead of its blunt body with a temperature above 60,000 K. These extreme Mach number flows are sufficiently energetic to initiate gas ionization processes and thermal and chemical ablation processes. The nonequilibrium gaseous radiation from the shock layer is so strong that it affects the flowfield macroparameter distributions. In this work, we present the first loosely coupled direct simulation Monte Carlo (DSMC) simulations with the particle-based photon Monte Carlo (p-PMC) method to simulate high-Mach number reentry flows in the near-continuum flow regime. To efficiently capture the highly nonequilibrium effects, emission and absorption cross section databases using the Nonequilibrium Air Radiation (NEQAIR) were generated, and atomic nitrogen and oxygen radiative transport was calculated by the p-PMC method. The radiation energy change calculated by the p-PMC method has been coupled in the DSMC calculations, and the atomic radiation was found to modify the flow field and heat flux at the wall.
NASA Astrophysics Data System (ADS)
Li, Yun; Jiang, Hai; Lun, Zhiyuan; Wang, Yijiao; Huang, Peng; Hao, Hao; Du, Gang; Zhang, Xing; Liu, Xiaoyan
2016-04-01
Degradation behaviors in the high-k/metal gate stacks of nMOSFETs are investigated by three-dimensional (3D) kinetic Monte-Carlo (KMC) simulation with multiple trap coupling. Novel microscopic mechanisms are simultaneously considered in a compound system: (1) trapping/detrapping from/to substrate/gate; (2) trapping/detrapping to other traps; (3) trap generation and recombination. Interacting traps can contribute to random telegraph noise (RTN), bias temperature instability (BTI), and trap-assisted tunneling (TAT). Simulation results show that trap interaction induces higher probability and greater complexity in trapping/detrapping processes and greatly affects the characteristics of RTN and BTI. Different types of trap distribution cause largely different behaviors of RTN, BTI, and TAT. TAT currents caused by multiple trap coupling are sensitive to the gate voltage. Moreover, trap generation and recombination have great effects on the degradation of HfO2-based nMOSFETs under a large stress.
NASA Astrophysics Data System (ADS)
Mitra, S.
2013-04-01
The associated-particle technique (APT) will be presented for some diverse applications that include, on the one hand, analyzing the body composition of live sheep and, on the other, identifying the fillers of unexploded ordnance (UXO). What began as proof-of-concept studies using a large laboratory-based 14 MeV neutron generator of the "associated-particle" type soon made it possible, for the first time, to measure total body protein, fat and water simultaneously in live sheep using a compact, field-deployable associated-particle sealed-tube neutron generator (APSTNG). This non-invasive technique offered the animal physiologist a tool to monitor the growth of an animal in response to new genetic, nutritional and pharmacologic methods for livestock improvement. While measurement of carbon (C), nitrogen (N) and oxygen (O) determined protein, fat and water because of the fixed stoichiometric proportions of these elements in these body components, the unique C/N and C/O ratios of high explosives revealed their identity in UXO. The algorithm that was developed and implemented to extract C, N and O counts from an APT-generated gamma-ray spectrum will be presented together with the UXO investigations, which involved preliminary proof-of-concept studies and modeling with Monte Carlo-produced synthetic spectra of 57-155 mm projectiles.
Blended near-optimal tools for flexible water resources decision making
NASA Astrophysics Data System (ADS)
Rosenberg, David
2015-04-01
State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the static, modelled issues, and managers often seek near-optimal alternatives that address un-modelled or changing objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as performance within a tolerable deviation from the optimal objective function value and identified a few maximally-different alternatives that addressed select un-modelled issues. This paper presents new stratified Markov chain Monte Carlo sampling and parallel coordinate plotting tools that generate and communicate the structure and full extent of the near-optimal region of an optimization problem. Plot controls allow users to interactively explore the region features of most interest. Controls also streamline the process of eliciting un-modelled issues and updating the model formulation in response to elicited issues. Application to a single-objective water quality management problem at Echo Reservoir, Utah, identifies numerous and flexible practices to reduce the phosphorus load to the reservoir while maintaining close-to-optimal performance. Compared to MGA, the new blended tools generate more numerous alternatives faster, more fully show the near-optimal region, help elicit a larger set of un-modelled issues, and offer managers greater flexibility to cope in a changing world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Qingpeng; Dinan, James; Tirukkovalur, Sravya
2016-01-28
Quantum Monte Carlo (QMC) applications perform simulation with respect to an initial state of the quantum mechanical system, which is often captured by using a cubic B-spline basis. This representation is stored as a read-only table of coefficients and accesses to the table are generated at random as part of the Monte Carlo simulation. Current QMC applications, such as QWalk and QMCPACK, replicate this table at every process or node, which limits scalability because increasing the number of processors does not enable larger systems to be run. We present a partitioned global address space approach to transparently managing this data using Global Arrays in a manner that allows the memory of multiple nodes to be aggregated. We develop an automated data management system that significantly reduces communication overheads, enabling new capabilities for QMC codes. Experimental results with QWalk and QMCPACK demonstrate the effectiveness of the data management system.
Kawrakow, I
2000-03-01
In this report the condensed history Monte Carlo simulation of electron transport and its application to the calculation of ion chamber response is discussed. It is shown that the strong step-size dependencies and lack of convergence to the correct answer previously observed are the combined effect of the following artifacts caused by the EGS4/PRESTA implementation of the condensed history technique: dose underprediction due to PRESTA's pathlength correction and lateral correlation algorithm; dose overprediction due to the boundary crossing algorithm; and dose overprediction due to the breakdown of the fictitious cross section method for sampling distances between discrete interactions and the inaccurate evaluation of energy-dependent quantities. These artifacts are now understood quantitatively and analytical expressions for their effects are given.
Monte Carlo track structure for radiation biology and space applications
NASA Technical Reports Server (NTRS)
Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.
2001-01-01
Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped to shed light on many aspects of microdosimetry and the mechanisms of damage by ionising radiation in the cell. These codes have continuously been modified to include new improved cross sections and computational techniques. This paper provides a summary of input data for ionization, excitation and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations for electrons and ions, in the form of parametric equations, which makes it easy to reproduce the data. Stopping power and radial distribution of dose are presented for ions and compared with experimental data. A model is described for simulation of the full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha-particles. Distributions are presented for the walled and wall-less counters. The data show the contribution of indirect effects to the lineal energy distribution for the walled counter responses even at such a low ion energy.
Variability and Reliability in Axon Growth Cone Navigation Decision Making
NASA Astrophysics Data System (ADS)
Garnelo, Marta; Ricoult, Sébastien G.; Juncker, David; Kennedy, Timothy E.; Faisal, Aldo A.
2015-03-01
The nervous system's wiring is a result of axon growth cones navigating through specific molecular environments during development. In order to reach their target, growth cones need to make decisions under uncertainty, as they are faced with stochastic sensory information and probabilistic movements. The overall system therefore exhibits features of whole organisms (perception, decision making, action) within a single cell. We aim to characterise growth cone navigation in defined nano-dot guidance cue environments by using the tools of computational neuroscience to conduct "molecular psychophysics." We start with a generative model of growth cone behaviour and we (1) characterise sensory and internal sources of noise contributing to behavioural variability, by combining knowledge of the underlying stochastic dynamics in cue sensing and the growth of the cytoskeleton. This enables us to (2) produce bottom-up lower-limit estimates of behavioural response reliability and visualise them as probability distributions over axon growth trajectories. Given this information we can match our in silico model's "psychometric" decision curves with empirical data. Finally, we use a Monte-Carlo approach to predict response distributions of axon trajectories from our model.
Measurement and validation of benchmark-quality thick-target tungsten X-ray spectra below 150 kVp.
Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M
2000-11-01
Pulse-height distributions of two constant potential X-ray tubes with fixed anode tungsten targets were measured and unfolded. The measurements employed quantitative alignment of the beam, the use of two different semiconductor detectors (high-purity germanium and cadmium-zinc-telluride), two different ion chamber systems with beam-specific calibration factors, and various filter and tube potential combinations. Monte Carlo response matrices were generated for each detector for unfolding the pulse-height distributions into spectra incident on the detectors. These response matrices were validated for the low error bars assigned to the data. A significant aspect of the validation of spectra, and a detailed characterization of the X-ray tubes, involved measuring filtered and unfiltered beams at multiple tube potentials (30-150 kVp). Full corrections to ion chamber readings were employed to convert normalized fluence spectra into absolute fluence spectra. The characterization of fixed anode pitting and its dominance over exit window plating and/or detector dead layer was determined. An Appendix of tabulated benchmark spectra with assigned error ranges was developed for future reference.
Monte Carlo simulations support non-Cerenkov radioluminescence production in tissue
NASA Astrophysics Data System (ADS)
Ackerman, Nicole L.; Boschi, Federico; Spinelli, Antonello E.
2017-08-01
There is experimental evidence for the production of non-Cerenkov radioluminescence in a variety of materials, including tissue. We constructed a Geant4 Monte Carlo simulation of the radiation from P32 and Tc99m interacting in chicken breast and used experimental imaging data to model a scintillation-like emission. The same radioluminescence spectrum is visible from both isotopes and cannot otherwise be explained through fluorescence or filter miscalibration. We conclude that chicken breast has a near-infrared scintillation-like response with a light yield three orders of magnitude smaller than BGO.
Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M.; Rearden, Bradley T.
This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.
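The abstract does not name the five metrics, but a representative member of this class of Markov chain convergence diagnostics is a Geweke-style comparison of early and late segments of a tally history. A hedged sketch on synthetic cycle-wise k-effective estimates (the data, segment fractions, and threshold are illustrative assumptions, not the study's metrics):

```python
import numpy as np

def geweke_z(history, first=0.1, last=0.5):
    """Geweke-style z-score comparing early and late segments of a tally history.

    |z| well above ~2 suggests the active-cycle estimates have not converged,
    one symptom of undersampling.  Assumes roughly independent batches.
    """
    n = len(history)
    a = history[: int(first * n)]
    b = history[int((1 - last) * n):]
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a)
                                           + b.var(ddof=1) / len(b))

# Example: batch-averaged k-effective estimates from successive active cycles.
rng = np.random.default_rng(1)
keff_cycles = 1.0 + 0.002 * rng.standard_normal(200)   # synthetic, well-mixed
print("Geweke z:", geweke_z(keff_cycles))
```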
Constrained proper sampling of conformations of transition state ensemble of protein folding
Lin, Ming; Zhang, Jian; Lu, Hsiao-Mei; Chen, Rong; Liang, Jie
2011-01-01
Characterizing the conformations of a protein in the transition state ensemble (TSE) is important for studying protein folding. A promising approach pioneered by Vendruscolo [Nature (London) 409, 641 (2001)] to study the TSE is to generate conformations that satisfy all constraints imposed by the experimentally measured ϕ values, which provide information about the native-likeness of the transition states. Faísca [J. Chem. Phys. 129, 095108 (2008)] generated conformations of the TSE based on the criterion that, starting from a TS conformation, the probabilities of folding and unfolding are about equal, through Markov chain Monte Carlo (MCMC) simulations. In this study, we use the constrained sequential Monte Carlo technique [Lin et al., J. Chem. Phys. 129, 094101 (2008); Zhang et al., Proteins 66, 61 (2007)] to generate TSE conformations of acylphosphatase (98 residues) that satisfy the ϕ-value constraints, as well as the criterion that each conformation has a folding probability of 0.5 by Monte Carlo simulations. We adopt a two-stage process and first generate 5000 contact maps satisfying the ϕ-value constraints. Each contact map is then used to generate 1000 properly weighted conformations. After clustering similar conformations, we obtain a set of properly weighted samples of 4185 candidate clusters. A representative conformation of each of these clusters is then selected, and 50 runs of Markov chain Monte Carlo (MCMC) simulation are carried out using a regrowth move set. We then select a subset of 1501 conformations that have equal probabilities to fold and to unfold as the set of TSE conformations. These 1501 samples characterize well the distribution of transition state ensemble conformations of acylphosphatase. Compared with previous studies, our approach can access a much wider conformational space and can objectively generate conformations that satisfy the ϕ-value constraints and the criterion of 0.5 folding probability without bias. In contrast to previous studies, our results show that transition state conformations are very diverse and are far from native-like when measured in Cartesian root-mean-square deviation (cRMSD): the average cRMSD between TSE conformations and the native structure is 9.4 Å for this short protein, instead of the 6 Å reported in previous studies. In addition, we found that the average fraction of native contacts in the TSE is 0.37, with enrichment in native-like β-sheets and a shortage of long-range contacts, suggesting that such contacts form at a later stage of folding. We further calculate the first passage time of folding of TSE conformations through calculation of the physical time associated with the regrowth moves in the MCMC simulation, by mapping such moves to a Markovian state model whose transition times were obtained by Langevin dynamics simulations. Our results indicate that, despite the large structural diversity of the TSE, the conformations are characterized by similar folding times. Our approach is general and can be used to study the TSE in other macromolecules. PMID:21341875
NASA Technical Reports Server (NTRS)
Brown, A. M.
1998-01-01
Accounting for the statistical geometric and material variability of structures in analysis has been a topic of considerable research for the last 30 years. The determination of quantifiable measures of the statistical probability of a desired response variable, such as natural frequency, maximum displacement, or stress, to replace experience-based "safety factors" has been a primary goal of these studies. There are, however, several problems associated with their satisfactory application to realistic structures, such as bladed disks in turbomachinery. These include the accurate definition of the input random variables (rv's), the large size of the finite element models frequently used to simulate these structures, which makes even a single deterministic analysis expensive, and the accurate generation of the cumulative distribution function (CDF) necessary to obtain the probability of the desired response variables. The research presented here applies a methodology called probabilistic dynamic synthesis (PDS) to solve these problems. The PDS method uses dynamic characteristics of substructures measured from modal test as the input rv's, rather than "primitive" rv's such as material or geometric uncertainties. These dynamic characteristics, which are the free-free eigenvalues, eigenvectors, and residual flexibility (RF), are readily measured, and for many substructures a reasonable sample set of these measurements can be obtained. The statistics for these rv's accurately account for the entire random character of the substructure. Using the RF method of component mode synthesis, these dynamic characteristics are used to generate reduced-size sample models of the substructures, which are then coupled to form system models. These sample models are used to obtain the CDF of the response variable either by applying Monte Carlo simulation or by generating data points for use in the response surface reliability method, which can perform the probabilistic analysis with an order of magnitude less computational effort. Both free- and forced-response analyses have been performed, and the results indicate that, while there is considerable room for improvement, the method produces usable and more representative solutions for the design of realistic structures with substantial savings in computer time.
Inference Control Mechanism for Statistical Database: Frequency-Imposed Data Distortions.
ERIC Educational Resources Information Center
Liew, Chong K.; And Others
1985-01-01
Introduces two data distortion methods (Frequency-Imposed Distortion, Frequency-Imposed Probability Distortion) and uses a Monte Carlo study to compare their performance with that of other distortion methods (Point Distortion, Probability Distortion). Indications that data generated by these two methods produce accurate statistics and protect…
Simulations of Proton Implantation in Silicon Carbide (SiC)
2016-03-31
Keywords: stopping and range of ions in matter (SRIM); transport of ions in matter (TRIM); ion energy; implant depth; defect generation; vacancy; backscattered ions; sputtering. The simulations reported here are computer simulations based on transport of ions in matter (TRIM) and stopping and range of ions in matter (SRIM). TRIM is a Monte Carlo…
Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator
USDA-ARS?s Scientific Manuscript database
The increasing prevalence of obesity world-wide has focused attention on the need for accurate body composition assessments, especially of large subjects. However, many body composition measurement systems are calibrated against a single-sized phantom, often based on the standard Reference Man mode...
Monte carlo efficiency calibration of a neutron generator-based total-body irradiator
USDA-ARS?s Scientific Manuscript database
The increasing prevalence of obesity world-wide has focused attention on the need for accurate body composition assessments, especially of large subjects. However, many body composition measurement systems are calibrated against a single-sized phantom, often based on the standard Reference Man mode...
Realization of a Quantum Random Generator Certified with the Kochen-Specker Theorem
NASA Astrophysics Data System (ADS)
Kulikov, Anatoly; Jerger, Markus; Potočnik, Anton; Wallraff, Andreas; Fedorov, Arkady
2017-12-01
Random numbers are required for a variety of applications from secure communications to Monte Carlo simulation. Yet randomness is an asymptotic property, and no output string generated by a physical device can be strictly proven to be random. We report an experimental realization of a quantum random number generator (QRNG) with randomness certified by quantum contextuality and the Kochen-Specker theorem. The certification is not performed in a device-independent way but through a rigorous theoretical proof of each outcome being value indefinite even in the presence of experimental imperfections. The analysis of the generated data confirms the incomputable nature of our QRNG.
Realization of a Quantum Random Generator Certified with the Kochen-Specker Theorem.
Kulikov, Anatoly; Jerger, Markus; Potočnik, Anton; Wallraff, Andreas; Fedorov, Arkady
2017-12-15
Random numbers are required for a variety of applications from secure communications to Monte Carlo simulation. Yet randomness is an asymptotic property, and no output string generated by a physical device can be strictly proven to be random. We report an experimental realization of a quantum random number generator (QRNG) with randomness certified by quantum contextuality and the Kochen-Specker theorem. The certification is not performed in a device-independent way but through a rigorous theoretical proof of each outcome being value indefinite even in the presence of experimental imperfections. The analysis of the generated data confirms the incomputable nature of our QRNG.
A smart Monte Carlo procedure for production costing and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, C.; Stremel, J.
1996-11-01
Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party of the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost-plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
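To make the contrast between the first two unit outage selection methods concrete, the sketch below draws unit availability states with brute-force Monte Carlo and with Latin Hypercube sampling; the unit data are invented for illustration, and the cost stratification of the Smart Monte Carlo procedure is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
n_units, n_trials = 5, 1000
forced_outage_rate = np.array([0.05, 0.08, 0.10, 0.04, 0.12])   # invented FORs
capacity_mw = np.array([400., 350., 300., 500., 250.])          # invented sizes

# Brute-force Monte Carlo: independent uniform draws per unit and trial.
u_bf = rng.random((n_trials, n_units))

# Latin Hypercube sampling: stratify each unit's axis into n_trials bins and
# visit every bin exactly once, in a random order per unit.
u_lhs = np.empty((n_trials, n_units))
for j in range(n_units):
    perm = rng.permutation(n_trials)
    u_lhs[:, j] = (perm + rng.random(n_trials)) / n_trials

for name, u in [("brute force", u_bf), ("Latin Hypercube", u_lhs)]:
    # a unit is on-line in a trial when its draw clears the forced outage rate
    available = (u >= forced_outage_rate) * capacity_mw
    print(f"{name}: mean available capacity = {available.sum(axis=1).mean():.1f} MW")
```

The stratification guarantees that each unit's outage probability is sampled evenly across trials, which typically reduces the variance of the capacity statistics for the same number of trials.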
Yakimov, Eugene B
2016-06-01
An approach for the prediction of (63)Ni-based betavoltaic battery output parameters is described. It consists of multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron beam induced current measurements, a calculation of the current induced in the semiconductor converter by beta-radiation, and SEM measurements of output parameters using the calculated induced current value. Such an approach allows one to predict the betavoltaic battery parameters and optimize the converter design for any real semiconductor structure and any thickness and specific activity of the beta-radiation source.
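The chain of quantities described above reduces, for the short-circuit current, to an integral of the depth-dependent generation rate weighted by the collection probability. A sketch with hypothetical profiles (the real ones come from the Monte Carlo simulation and the EBIC measurements):

```python
import numpy as np

# Hypothetical depth grid and profiles; in the paper these come from the
# multilayer Monte Carlo simulation and EBIC-derived collection probabilities.
z = np.linspace(0, 3e-4, 300)                 # depth into converter, cm
g = 1e13 * np.exp(-z / 5e-5)                  # e-h generation rate, pairs/(cm^3 s)
cp = np.exp(-z / 2e-4)                        # collection probability, dimensionless

q = 1.602e-19                                 # elementary charge, C
j_short = q * np.trapz(g * cp, z)             # short-circuit current density, A/cm^2
print(f"predicted J_sc = {j_short:.3e} A/cm^2")
```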
Calibration of the Top-Quark Monte Carlo Mass.
Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf
2016-04-22
We present a method to establish, experimentally, the relation between the top-quark mass m_t^MC as implemented in Monte Carlo generators and the Lagrangian mass parameter m_t in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_t^MC and an observable sensitive to m_t, which does not rely on any prior assumptions about the relation between m_t and m_t^MC. The measured observable is independent of m_t^MC and can be used subsequently for a determination of m_t. The analysis strategy is illustrated with examples for the extraction of m_t from inclusive and differential cross sections for hadroproduction of top quarks.
NASA Astrophysics Data System (ADS)
Schwarz, Karsten; Rieger, Heiko
2013-03-01
We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
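One standard way to generate annihilation events with exactly the right statistics under a spatially varying rate is thinning (rejection from a homogeneous bounding process). The sketch below shows the thinning logic on a 1-D diffusing particle; it uses small diffusive steps where the paper's algorithm would take large protective-domain hops, so it illustrates only the correctness argument, not the efficiency gain. The rate function and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate(x):
    """Position-dependent annihilation rate, e.g. elevated near x = 0."""
    return 5.0 * np.exp(-x**2)

RATE_MAX = 5.0        # upper bound on rate(x) over the region of interest

def propagate_with_thinning(x0, t_end, D=1.0, dt=1e-3):
    """Diffuse a particle; decide annihilation by thinning (rejection), so the
    inhomogeneous rate is sampled exactly given the bound RATE_MAX."""
    x, t = x0, 0.0
    while t < t_end:
        # candidate annihilation time from the homogeneous bounding process
        t_cand = t + rng.exponential(1.0 / RATE_MAX)
        # diffuse in small steps up to the candidate time (a protective-domain
        # method would replace this inner loop with one large exact hop)
        while t < min(t_cand, t_end):
            x += np.sqrt(2 * D * dt) * rng.standard_normal()
            t += dt
        if t_cand < t_end and rng.random() < rate(x) / RATE_MAX:
            return t, x, True          # accepted: particle annihilates here
    return t_end, x, False             # survived to the end of the interval

print(propagate_with_thinning(x0=2.0, t_end=1.0))
```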
Exploring theory space with Monte Carlo reweighting
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
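The core of event reweighting is an event-by-event weight given by the ratio of the differential cross sections (in practice, squared matrix elements) of the new and old models. A hedged toy sketch, with an invented one-parameter "matrix element" standing in for a real generator-level calculation:

```python
import numpy as np

def reweight(events, sigma_old, sigma_new, me2_old, me2_new):
    """Reweight fully simulated events from an old benchmark model to a new one.

    events        : array of phase-space points (as generated)
    me2_old/new   : callables returning |M|^2 under the old and new parameters
    sigma_old/new : corresponding total cross sections
    """
    w = np.array([me2_new(e) / me2_old(e) for e in events])
    # renormalize so the weighted sample corresponds to the new cross section
    return w * (sigma_new / sigma_old) * len(events) / w.sum()

# Hypothetical toy: a 1-D "matrix element" whose shape depends on a coupling g.
me2 = lambda g: (lambda x: 1.0 + g * x**2)
rng = np.random.default_rng(3)
sample = rng.random(10_000)                  # stand-in for generated events
weights = reweight(sample, 1.0, 1.2, me2(0.5), me2(1.5))
print("reweighted mean of x^2:", np.average(sample**2, weights=weights))
```

The practical appeal is exactly what the abstract notes: the expensive detector simulation attached to each event is reused unchanged, while only the weights depend on the new theory parameters.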
NASA Astrophysics Data System (ADS)
Plumer, M. L.; Almudallal, A. M.; Mercer, J. I.; Whitehead, J. P.; Fal, T. J.
The kinetic Monte Carlo (KMC) method developed for thermally activated magnetic reversal processes in single-layer recording media has been extended to study the dual-layer exchange coupled composite (ECC) media used in current and next generations of disc drives. The attempt frequency is derived from the Langer formalism, with the saddle point determined using a variant of the Bellman-Ford algorithm. Complications (such as stagnation) arising from coupled grains having metastable states are addressed. M-H hysteresis loops are calculated over a wide range of anisotropy ratios, sweep rates and inter-layer coupling parameters. Results are compared with standard micromagnetics at fast sweep rates and with experimental results at slow sweep rates.
NASA Technical Reports Server (NTRS)
1982-01-01
A FORTRAN-coded computer program and method to predict the reaction control fuel consumption statistics for a three-axis stabilized rocket vehicle upper stage is described. A Monte Carlo approach is used, made more efficient by using closed-form estimates of impulses. The effects of rocket motor thrust misalignment, static unbalance, aerodynamic disturbances, and deviations in trajectory, mass properties and control system characteristics are included. This routine can be applied to many types of on-off reaction controlled vehicles. The pseudorandom number generation and statistical analysis subroutines, including the output histograms, can be used for other Monte Carlo analysis problems.
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output
Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.
2013-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136
NASA Technical Reports Server (NTRS)
1973-01-01
The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
On analyzing ordinal data when responses and covariates are both missing at random.
Rana, Subrata; Roy, Surupa; Das, Kalyan
2016-08-01
On many occasions, particularly in biomedical studies, data are unavailable for some responses and covariates. This leads to biased inference in the analysis when a substantial proportion of responses, a covariate, or both are missing. Except in a few situations, methods for missing data have earlier been considered either for missing responses or for missing covariates, but comparatively little attention has been directed to accounting for both missing responses and missing covariates, which is partly attributable to the complexity of modeling and computation. This is important because the precise impact of substantial missing data also depends on the association between the two missing data processes. The real difficulty arises when the responses are ordinal by nature. We develop a joint model to take into account simultaneously the association between the ordinal response variable and the covariates and also that between the missing data indicators. Such a complex model has been analyzed here by using the Markov chain Monte Carlo approach and also by the Monte Carlo relative likelihood approach. Their performance in estimating the model parameters in finite samples has been examined. We illustrate the application of these two methods using data from an orthodontic study. Analysis of such data provides some interesting information on human habits.
Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations
Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth
2016-01-01
Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account and generalize to unseen data. Inference is performed with Markov chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be analytically computed, we use a Metropolis-Hastings-within-Gibbs framework, according to which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings steps from appropriate candidate-generating densities. We further show that the corresponding Markov chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173
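A minimal illustration of the Metropolis-Hastings-within-Gibbs pattern described above, on a toy Gaussian model rather than the dictionary-learning posterior: the conditional with a closed form (the mean) is sampled exactly, while the one without (the log standard deviation) gets a random-walk Metropolis step.

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(2.0, 1.5, size=200)            # synthetic data

def loglike(mu, log_sigma):
    """Gaussian log-likelihood for the toy target (flat priors assumed)."""
    s2 = np.exp(2 * log_sigma)
    return -0.5 * np.sum((y - mu)**2) / s2 - len(y) * log_sigma

mu, log_sigma = 0.0, 0.0
chain = []
for it in range(5000):
    # Gibbs step: mu | sigma is Gaussian in closed form -> sample exactly
    s2 = np.exp(2 * log_sigma)
    mu = rng.normal(y.mean(), np.sqrt(s2 / len(y)))
    # Metropolis-Hastings step: log_sigma has no closed-form conditional
    prop = log_sigma + rng.normal(0, 0.1)     # random-walk candidate density
    if np.log(rng.random()) < loglike(mu, prop) - loglike(mu, log_sigma):
        log_sigma = prop
    chain.append((mu, np.exp(log_sigma)))
print("posterior means (mu, sigma):", np.mean(chain, axis=0))
```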
Item Response Theory Equating Using Bayesian Informative Priors.
ERIC Educational Resources Information Center
de la Torre, Jimmy; Patz, Richard J.
This paper seeks to extend the application of Markov chain Monte Carlo (MCMC) methods in item response theory (IRT) to include the estimation of equating relationships along with the estimation of test item parameters. A method is proposed that incorporates estimation of the equating relationship in the item calibration phase. Item parameters from…
Evaluating Item Fit for Multidimensional Item Response Models
ERIC Educational Resources Information Center
Zhang, Bo; Stone, Clement A.
2008-01-01
This research examines the utility of the S-X^2 statistic proposed by Orlando and Thissen (2000) in evaluating item fit for multidimensional item response models. A Monte Carlo simulation was conducted to investigate both the Type I error and the statistical power of this fit statistic in analyzing two kinds of multidimensional test…
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature varies greatly, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system with temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations using constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation returns more accurate results for optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes.
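The feedback loop described above can be sketched in a few lines. The fragment below substitutes a 1-D Beer-Lambert attenuation for the full Monte Carlo photon transport and omits heat conduction; the temperature dependence of the absorption coefficient and all material numbers are invented for illustration, but the select-properties-from-current-temperature loop structure is the point:

```python
import numpy as np

def mu_a(T):
    """Hypothetical temperature-dependent absorption coefficient, 1/cm."""
    return 1.0 + 0.01 * (T - 300.0)

nz, dz = 100, 0.01                  # 1-D slab discretization, cm
T = np.full(nz, 300.0)              # temperature field, K
rho_c = 4.0                         # volumetric heat capacity, J/(cm^3 K)
fluence, dt = 10.0, 0.1             # source fluence rate (W/cm^2), time step (s)

for step in range(50):
    # "Photon transport" with the current, temperature-dependent properties;
    # a full Monte Carlo would trace photons with locally selected mu_a.
    mu = mu_a(T)
    transmitted = fluence * np.exp(-np.cumsum(mu) * dz)
    absorbed = mu * transmitted                 # W/cm^3 deposited locally
    # Heat update (no conduction in this sketch), then feed back into mu_a.
    T += absorbed * dt / rho_c
print(f"surface T = {T[0]:.1f} K, back T = {T[-1]:.1f} K")
```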
Shock Response and Phase Transitions of MgO at Planetary Impact Conditions.
Root, Seth; Shulenburger, Luke; Lemke, Raymond W; Dolan, Daniel H; Mattsson, Thomas R; Desjarlais, Michael P
2015-11-06
The moon-forming impact and the subsequent evolution of the proto-Earth are strongly dependent on the properties of materials at the extreme conditions generated by this violent collision. We examine the high pressure behavior of MgO, one of the dominant constituents in Earth's mantle, using high-precision, plate impact shock compression experiments performed on Sandia National Laboratories' Z Machine and extensive quantum calculations using density functional theory (DFT) and quantum Monte Carlo (QMC) methods. The combined data span from ambient conditions to 1.2 TPa and 42,000 K, showing solid-solid and solid-liquid phase boundaries. Furthermore, our results indicate that under impact the solid and liquid phases coexist for more than 100 GPa, pushing complete melting to pressures in excess of 600 GPa. The high pressure required for complete shock melting has implications for a broad range of planetary collision events.
Shock response and phase transitions of MgO at planetary impact conditions
Root, Seth; Shulenburger, Luke; Lemke, Raymond W.; ...
2015-11-04
The moon-forming impact and the subsequent evolution of the proto-Earth are strongly dependent on the properties of materials at the extreme conditions generated by this violent collision. We examine the high pressure behavior of MgO, one of the dominant constituents in Earth's mantle, using high-precision, plate impact shock compression experiments performed on Sandia National Laboratories' Z Machine and extensive quantum calculations using density functional theory (DFT) and quantum Monte Carlo (QMC) methods. The combined data span from ambient conditions to 1.2 TPa and 42,000 K, showing solid-solid and solid-liquid phase boundaries. Furthermore, our results indicate that under impact the solid and liquid phases coexist for more than 100 GPa, pushing complete melting to pressures in excess of 600 GPa. The high pressure required for complete shock melting has implications for a broad range of planetary collision events.
Correction of scatter in megavoltage cone-beam CT
NASA Astrophysics Data System (ADS)
Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.
2001-03-01
The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
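The superposition of scatter kernels reduces to a convolution when the kernel is assumed shift-invariant; the following hedged sketch illustrates that kind of iterative correction, with a made-up Gaussian kernel and scatter fraction standing in for the Monte Carlo generated, phantom-adapted kernels:

```python
# Hedged sketch of scatter correction by kernel superposition, assuming a
# shift-invariant kernel; kernel shape, scatter fraction, and iteration
# count are illustrative, not the authors' calibrated values.
import numpy as np
from scipy.signal import fftconvolve

def scatter_kernel(size=31, sigma=6.0):
    """Stand-in for a Monte Carlo generated scatter kernel (normalized)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return 0.15 * k / k.sum()       # 15% scatter fraction, assumed

def correct_scatter(measured, kernel, n_iter=5):
    """Iteratively remove scatter: primary = measured - primary * kernel."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = fftconvolve(primary, kernel, mode="same")
        primary = measured - scatter
    return primary

projection = np.random.default_rng(1).uniform(0.5, 1.0, size=(64, 64))
corrected = correct_scatter(projection, scatter_kernel())
```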
NASA Astrophysics Data System (ADS)
Khee Looe, Hui; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn
2018-04-01
This study aims at developing an optimization strategy for photon-beam dosimetry in magnetic fields using ionization chambers. Similar to the familiar case in the absence of a magnetic field, detectors should be selected under the criterion that their measured 2D signal profiles M(x,y) approximate the absorbed dose to water profiles D(x,y) as closely as possible. Since the conversion of D(x,y) into M(x,y) is known as the convolution with the 'lateral dose response function' K(x−ξ, y−η) of the detector, the ideal detector would be characterized by a vanishing magnetic field dependence of this convolution kernel (Looe et al 2017b Phys. Med. Biol. 62 5131-48). The idea of the present study is to find out, by Monte Carlo simulation of two commercial ionization chambers of different size, whether the smaller chamber dimensions would be instrumental to approach this aim. As typical examples, the lateral dose response functions in the presence and absence of a magnetic field have been Monte Carlo modeled for the new commercial ionization chambers PTW 31021 ('Semiflex 3D', internal radius 2.4 mm) and PTW 31022 ('PinPoint 3D', internal radius 1.45 mm), both of which are available with calibration factors. The Monte Carlo model of the ionization chambers has been adjusted to account for the presence of the non-collecting part of the air volume near the guard ring. The Monte Carlo results allow a comparison between the widths of the magnetic field dependent photon fluence response function K_M(x−ξ, y−η) and of the lateral dose response function K(x−ξ, y−η) of the two chambers with the width of the dose deposition kernel K_D(x−ξ, y−η). The simulated dose and chamber signal profiles show that in small photon fields and in the presence of a 1.5 T field the distortion of the chamber signal profile compared with the true dose profile is weakest for the smaller chamber. The dose responses of both chambers at large field size are shown to be altered by not more than 2% in magnetic fields up to 1.5 T for all three investigated chamber orientations.
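The central relation, that the measured profile is the true dose profile convolved with the lateral dose response function, can be illustrated in 1-D; the Gaussian kernel widths below merely echo the chamber radii and are assumptions, not the published Monte Carlo kernels:

```python
# Illustrative 1-D sketch of M = K * D with Gaussian stand-ins for the
# lateral dose response functions; widths are assumed, not PTW data.
import numpy as np

x = np.linspace(-30.0, 30.0, 601)                  # lateral position (mm)
dose = ((x > -10) & (x < 10)).astype(float)        # idealized field profile D(x)

def lateral_response(xi, sigma_mm):
    """Gaussian stand-in for the chamber's lateral dose response function."""
    k = np.exp(-xi**2 / (2.0 * sigma_mm**2))
    return k / k.sum()

# M = K * D: the chamber reads a blurred version of the true profile.
signal_semiflex = np.convolve(dose, lateral_response(x, 2.4), mode="same")
signal_pinpoint = np.convolve(dose, lateral_response(x, 1.45), mode="same")
# The narrower kernel (smaller chamber) distorts the penumbra less.
```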
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei
2011-10-01
High energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristic. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate the energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For validation, measurements were carefully performed in well-defined fields: (a) a primary M-100 X-ray calibration field, (b) a primary 60Co calibration beam, and (c) 6-MV and (d) 10-MV therapeutic beams in a hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode closely matched the other three codes, with differences within 5%. Compared to the measured currents, MCNP5 and MCNPX using ITS-mode showed excellent agreement for the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work provides better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications to mixed-field dosimetry such as BNCT, this work identifies MCNP with ITS-mode as the most suitable tool.
Four receptor-oriented source apportionment models were evaluated by applying them to simulated personal exposure data for select volatile organic compounds (VOCs) that were generated by Monte Carlo sampling from known source contributions and profiles. The exposure sources mo...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
This report describes the structure and operation of prototype computer programs developed for a Monte Carlo simulation model, GENESIS, and for two analytical models, OPCON and OPPLAN. It includes input data requirements and sample test cases.
Bootstrapping Confidence Intervals for Robust Measures of Association.
ERIC Educational Resources Information Center
King, Jason E.
A Monte Carlo simulation study was conducted to determine the bootstrap correction formula yielding the most accurate confidence intervals for robust measures of association. Confidence intervals were generated via the percentile, adjusted, BC, and BC(a) bootstrap procedures and applied to the Winsorized, percentage bend, and Pearson correlation…
Parental GCA testing: how many crosses per parent?
G.R. Johnson
1998-01-01
The impact of increasing the number of crosses per parent (k) on the efficiency of roguing seed orchards (backwards selection, i.e., reselection of parents) was examined by using Monte Carlo simulation. Efficiencies were examined in light of advanced-generation Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) tree improvement programs where...
Baryon production from cluster hadronisation
NASA Astrophysics Data System (ADS)
Gieseke, Stefan; Kirchgaeßer, Patrick; Plätzer, Simon
2018-02-01
We present an extension to the colour reconnection model in the Monte Carlo event generator Herwig to account for the production of baryons and compare it to a series of observables for soft physics. The new model is able to improve the description of charged-particle multiplicities and hadron flavour observables in pp collisions.
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
NASA Astrophysics Data System (ADS)
Särkimäki, K.; Hirvijoki, E.; Terävä, J.
2018-01-01
We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided for both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, the Dreicer generation of runaway electrons, and the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
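A hedged 1-D sketch of the pre-step adaptivity idea, with toy drift and diffusion coefficients in place of the Beliaev-Budker forms; choosing the step before sampling the Wiener increment means no increments are discarded, which is one way a careless adaptive scheme can bias results:

```python
# Sketch of adaptive time stepping for a Langevin-type collision operator
# dv = -nu(v) v dt + sqrt(2 D(v)) dW. The coefficients are toy stand-ins.
import numpy as np

rng = np.random.default_rng(2)

def nu(v):                      # toy collision frequency, assumed
    return 1.0 / (1.0 + abs(v))**3

def diff(v):                    # toy diffusion coefficient, assumed
    return 0.1 * nu(v)

def advance(v, t_end, eps=0.05):
    t = 0.0
    while t < t_end:
        dt = min(eps / nu(v), t_end - t)       # keep nu*dt <= eps
        dW = rng.normal(0.0, np.sqrt(dt))      # one increment per step, kept
        v += -nu(v) * v * dt + np.sqrt(2.0 * diff(v)) * dW
        t += dt
    return v

# Slowing-down of an ensemble of fast test particles:
samples = np.array([advance(5.0, t_end=50.0) for _ in range(200)])
print(samples.mean(), samples.std())
```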
Modeling radiation loads in the ILC main linac and a novel approach to treat dark current
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mokhov, Nikolai V.; Rakhno, Igor L.; Tropin, Igor S.
Electromagnetic and hadron showers generated by electrons of dark current (DC) can represent a significant radiation threat to the ILC linac equipment and personnel. In this study, a commissioning scenario is analysed which is considered as the worst-case scenario for the main linac regarding the DC contribution to the radiation environment in the tunnel. A normal operation scenario is analysed as well. Emphasis is placed on the radiation load to sensitive electronic equipment: cryogenic thermometers inside the cryomodules. Prompt and residual dose rates in the ILC main linac tunnels were also calculated in these new high-statistics runs. A novel approach was developed, as part of the general-purpose Monte Carlo code MARS15, to model generation, acceleration and transport of DC electrons in electromagnetic fields inside SRF cavities. Comparisons were made with a standard approach in which a set of pre-calculated DC electron trajectories is used, with a proper normalization, as a source for Monte Carlo modelling. Results of MARS15 Monte Carlo calculations, performed for the current main linac tunnel design, reveal that the peak absorbed dose in the cryogenic thermometers in the main tunnel for 20 years of operation is about 0.8 MGy. The calculated contact residual dose on cryomodules and tunnel walls in the main tunnel for typical irradiation and cooling conditions is 0.1 and 0.01 mSv/hr, respectively.
Pattern Recognition Control Design
NASA Technical Reports Server (NTRS)
Gambone, Elisabeth A.
2018-01-01
Spacecraft control algorithms must know the expected vehicle response to any command to the available control effectors, such as reaction thrusters or torque devices. Spacecraft control system design approaches have traditionally relied on the estimated vehicle mass properties to determine the desired force and moment, as well as knowledge of the effector performance, to efficiently control the spacecraft. A pattern recognition approach was used to investigate the relationship between the control effector commands and spacecraft responses. Instead of supplying the approximated vehicle properties and the thruster performance characteristics, a database of information relating the thruster ring commands and the desired vehicle response was used for closed-loop control. A Monte Carlo simulation data set of the spacecraft dynamic response to effector commands was analyzed to establish the influence a command has on the behavior of the spacecraft. A tool developed at NASA Johnson Space Center to analyze flight dynamics Monte Carlo data sets through pattern recognition methods was used to perform this analysis. Once a comprehensive data set relating spacecraft responses with commands was established, it was used in place of traditional control methods and gain sets. This pattern recognition approach was compared with traditional control algorithms to determine the potential benefits and uses.
Vegetative response to water availability on the San Carlos Apache Reservation
Petrakis, Roy; Wu, Zhuoting; McVay, Jason; Middleton, Barry R.; Dye, Dennis G.; Vogel, John M.
2016-01-01
On the San Carlos Apache Reservation in east-central Arizona, U.S.A., vegetation types such as ponderosa pine forests, pinyon-juniper woodlands, and grasslands have significant ecological, cultural, and economic value for the Tribe. This value extends beyond the tribal lands and across the Western United States. Vegetation across the Southwestern United States is susceptible to drought conditions and fluctuating water availability. Remotely sensed vegetation indices can be used to measure and monitor spatial and temporal vegetative response to fluctuating water availability conditions. We used the Moderate Resolution Imaging Spectroradiometer (MODIS)-derived Modified Soil Adjusted Vegetation Index II (MSAVI2) to measure the condition of three dominant vegetation types (ponderosa pine forest, woodland, and grassland) in response to two fluctuating environmental variables: precipitation and the Standardized Precipitation Evapotranspiration Index (SPEI). The study period covered 2002 through 2014 and focused on a region within the San Carlos Apache Reservation. We determined that grassland and woodland had a similar moderate to strong, year-round, positive relationship with precipitation as well as with summer SPEI. This suggests that these vegetation types respond negatively to drought conditions and are more susceptible to initial precipitation deficits. Ponderosa pine forest had a comparatively weaker relationship with monthly precipitation and summer SPEI, indicating that it is more buffered against short-term drought conditions. This research highlights the response of multiple, dominant vegetation types to seasonal and inter-annual water availability. This research demonstrates that multi-temporal remote sensing imagery can be an effective tool for the large scale detection of vegetation response to adverse impacts from climate change and support potential management practices such as increased monitoring and management of drought-affected areas. Different vegetation types displayed various responses to water availability, further highlighting the need for individual management plans for forest and woodland, especially considering the projected drier conditions in the Southwest U.S. and other arid or semi-arid regions around the world.
Monte-Carlo Event Generators for Jet Modification in d(p)-A and A-A Collisions
NASA Astrophysics Data System (ADS)
Kordell, Michael C., III
This work outlines methods to use jet simulations to study both initial and final state nuclear effects in heavy-ion collisions. To study the initial state of heavy-ion collisions, the production of jets, and of high momentum hadrons from jets, in deuteron (d)-Au collisions at the Relativistic Heavy-Ion Collider (RHIC) and proton (p)-Pb collisions at the Large Hadron Collider (LHC) is studied as a function of centrality, a measure of the impact parameter of the collision. A modified version of the event generator PYTHIA, widely used to simulate p-p collisions, is used in conjunction with a nuclear Monte Carlo event generator which simulates the locations of the nucleons within a large nucleus. It is demonstrated how events with a hard jet may be simulated in such a way that the parton distribution function of the projectile is frozen during its interaction with the extended nucleus. Using this approach, it is demonstrated that the puzzling enhancement seen in peripheral events at RHIC and the LHC, as well as the suppression seen in central events at the LHC, are mainly due to mis-binning of central and semi-central events containing a jet as peripheral events. This occurs due to the suppression of soft particle production away from the jet, caused by the depletion of energy available in a nucleon of the deuteron (in d-Au at RHIC) or in the proton (in p-Pb at LHC) after the production of a hard jet. Such partonic correlations are built out of simple energy conservation, though they are sampled at the hard scale of jet production and, as such, represent smaller states. To study final state nuclear effects, the modification of hard jets in the Quark Gluon Plasma (QGP) is simulated using the MATTER event generator. Based on the higher twist formalism of energy loss, the MATTER event generator simulates the evolution of highly virtual partons through a medium. These partons, sampled from an underlying PYTHIA kernel, undergo splitting through a combination of vacuum and medium induced emission. The momentum exchange with the medium is simulated via the jet transport coefficient q̂, which is assumed to scale with the entropy density at a given location in the medium. The entropy density is obtained from a relativistic viscous fluid dynamics simulation (VISH2+1D) in 2+1 space-time dimensions. Results for jet and hadron observables are presented using an independent fragmentation model.
Accelerating Pseudo-Random Number Generator for MCNP on GPU
NASA Astrophysics Data System (ADS)
Gong, Chunye; Liu, Jie; Chi, Lihua; Hu, Qingfeng; Deng, Li; Gong, Zhenghu
2010-09-01
Pseudo-random number generators (PRNGs) are intensively used in many stochastic algorithms in particle simulations, artificial neural networks, and other scientific computation. The PRNG in the Monte Carlo N-Particle Transport Code (MCNP) requires a long period, high quality, flexible jump-ahead, and high speed. In this paper, we implement such a PRNG for MCNP on NVIDIA's GTX200 Graphics Processing Units (GPU) using the CUDA programming model. Results show that speedups of 3.80 to 8.10 times are achieved compared with 4- to 6-core CPUs, and more than 679.18 million double-precision random numbers can be generated per second on the GPU.
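The 'flexible jump-ahead' requirement is typically met with an O(log k) jump of a linear congruential generator, so each history or GPU thread can start at an independent point of one shared stream; the constants below are generic demonstration values, not MCNP's actual parameters:

```python
# Sketch of O(log k) jump-ahead for an LCG x -> (a*x + c) mod m.
# Multiplier/increment/modulus are generic demonstration constants.
M = 1 << 63                      # power-of-two modulus

def lcg_next(x, a=2806196910506780709, c=1, m=M):
    return (a * x + c) % m

def lcg_jump(x, k, a=2806196910506780709, c=1, m=M):
    """Advance the LCG state by k steps in O(log k) time.

    Maintains the accumulated affine map (A, C) while repeatedly squaring
    the one-step map: x_{n+k} = A x_n + C (mod m).
    """
    A, C = 1, 0
    while k > 0:
        if k & 1:
            A, C = (A * a) % m, (C * a + c) % m
        c = (c * (a + 1)) % m    # increment of the squared map (uses old a)
        a = (a * a) % m
        k >>= 1
    return (A * x + C) % m

# Each simulated history h gets its own sub-stream of STRIDE numbers:
STRIDE = 1000                    # numbers reserved per history, assumed
seed = 1
state_h3 = lcg_jump(seed, 3 * STRIDE)

# Check against brute force:
s = seed
for _ in range(3 * STRIDE):
    s = lcg_next(s)
assert s == state_h3
```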
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, D; O’Connell, D; Lamb, J
Purpose: To demonstrate real-time dose calculation of free-breathing MRI-guided Co-60 treatments, using a motion model and Monte Carlo dose calculation to accurately account for the interplay between irregular breathing motion and an IMRT delivery. Methods: ViewRay Co-60 dose distributions were optimized on ITVs contoured from free-breathing CT images of lung cancer patients. Each treatment plan was separated into 0.25 s segments, accounting for the MLC positions and beam angles at each time point. A voxel-specific motion model derived from multiple fast-helical free-breathing CTs and deformable registration was calculated for each patient. 3D images for every 0.25 s of a simulated treatment were generated in real time, here using a bellows signal as a surrogate to accurately account for breathing irregularities. Monte Carlo dose calculation was performed every 0.25 s of the treatment, with the number of histories in each calculation scaled to give an overall 1% statistical uncertainty. Each dose calculation was deformed back to the reference image using the motion model and accumulated. The static and real-time dose calculations were compared. Results: Image generation was performed in real time at 4 frames per second (GPU). Monte Carlo dose calculation was performed at approximately 1 frame per second (CPU), giving a total calculation time of approximately 30 minutes per treatment. Results show both cold- and hot-spots in and around the ITV, and increased dose to the contralateral lung as the tumor moves in and out of the beam during treatment. Conclusion: An accurate motion model combined with a fast Monte Carlo dose calculation allows almost real-time dose calculation of a free-breathing treatment. When combined with sagittal 2D-cine-mode MRI during treatment to update the motion model in real time, this will allow the true delivered dose of a treatment to be calculated, providing a useful tool for adaptive planning and for assessing the effectiveness of gated treatments.
Extensions of the MCNP5 and TRIPOLI4 Monte Carlo Codes for Transient Reactor Analysis
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Sjenitzer, Bart L.
2014-06-01
To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general purpose Monte Carlo codes. Important new variance reduction techniques, like forced decay of precursors in each time interval and the branchless collision method, are included to obtain reasonable statistics for the power production per time interval. For simulation of practical reactor transients the feedback effect from the thermal-hydraulics must also be included. This requires coupling of the Monte Carlo code with a thermal-hydraulics (TH) code, providing the temperature distribution in the reactor, which affects the neutron transport via the cross section data. The TH code also provides the coolant density distribution in the reactor, directly influencing the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3x3 mini fuel assembly with a moving control rod is considered for MCNP5, and a mini core consisting of 3x3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises by about 10 decades and the reactor finally stabilises at a power level much higher than the initial one. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross section detail.
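The forced-decay idea can be sketched in a few lines: sample the precursor decay time conditionally inside the time interval and carry the interval probability as a weight factor; the decay constant below is illustrative only:

```python
# Hedged sketch of 'forced decay' of a delayed-neutron precursor within a
# time interval [t1, t2]: instead of sampling from the full exponential
# (which rarely lands in a short interval), sample conditionally inside
# the interval and adjust the statistical weight. Constants illustrative.
import numpy as np

rng = np.random.default_rng(8)

def forced_decay(t1, t2, lam, weight):
    """Return (decay_time, new_weight) with decay forced into [t1, t2]."""
    p_interval = np.exp(-lam * t1) - np.exp(-lam * t2)   # P(t1 < t < t2)
    u = rng.random()
    # invert the conditional CDF on [t1, t2]
    t = -np.log(np.exp(-lam * t1) - u * p_interval) / lam
    return t, weight * p_interval

t, w = forced_decay(t1=0.0, t2=0.1, lam=0.0767, weight=1.0)
print(t, w)   # decay time inside [0, 0.1] s, weight ~ P(decay in interval)
```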
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, J; Pelletier, C; Lee, C
Purpose: Organ doses for Hodgkin's lymphoma patients treated with cobalt-60 radiation were estimated using an anthropomorphic model and Monte Carlo modeling. Methods: A cobalt-60 treatment unit modeled in the BEAMnrc Monte Carlo code was used to produce phase space data. The Monte Carlo simulation was verified with percent depth dose measurements in water at various field sizes. Radiation transport through the lung blocks was modeled by adjusting the weights of the phase space data. We imported a precontoured adult female hybrid model and generated a treatment plan. The adjusted phase space data and the human model were imported into the XVMC Monte Carlo code for dose calculation. The organ mean doses were estimated and dose volume histograms were plotted. Results: The percent depth dose agreement between measurement and calculation in the water phantom was within 2% for all field sizes. The mean organ doses of the heart, left breast, right breast, and spleen for the selected case were 44.3, 24.1, 14.6 and 3.4 Gy, respectively, with a midline prescription dose of 40.0 Gy. Conclusion: Organ doses were estimated for a patient group whose three-dimensional images are not available. This development may open the door to more accurate dose reconstruction and estimates of uncertainties in secondary cancer risk for Hodgkin's lymphoma patients. This work was partially supported by the intramural research program of the National Institutes of Health, National Cancer Institute, Division of Cancer Epidemiology and Genetics.
Backscatter factors and mass energy-absorption coefficient ratios for diagnostic radiology dosimetry
NASA Astrophysics Data System (ADS)
Benmakhlouf, Hamza; Bouchard, Hugo; Fransson, Annette; Andreo, Pedro
2011-11-01
Backscatter factors, B, and mass energy-absorption coefficient ratios, (μ_en/ρ)_{w,air}, for the determination of the surface dose in diagnostic radiology were calculated using Monte Carlo simulations. The main purpose was to extend the range of available data to qualities used in modern x-ray techniques, particularly for interventional radiology. A comprehensive database for mono-energetic photons between 4 and 150 keV and different field sizes was created for a 15 cm thick water phantom. Backscattered spectra were calculated with the PENELOPE Monte Carlo system, scoring track-length fluence differential in energy with negligible statistical uncertainty; using the Monte Carlo computed spectra, B factors and (μ_en/ρ)_{w,air} were then calculated numerically for each energy. Weighted averaging procedures were subsequently used to convolve incident clinical spectra with the mono-energetic data. The method was benchmarked against full Monte Carlo calculations of incident clinical spectra, obtaining differences within 0.3-0.6%. The technique used enables the calculation of B and (μ_en/ρ)_{w,air} for any incident spectrum without further time-consuming Monte Carlo simulations. The adequacy of the extended dosimetry data for a broader range of clinical qualities than those currently available, while keeping consistency with existing data, was confirmed through detailed comparisons. Mono-energetic and spectrum-averaged values were compared with published data, including those in ICRU Report 74 and IAEA TRS-457, finding average differences of 0.6%. Results are provided in comprehensive tables appropriate for clinical use. Additional qualities can easily be calculated using a designed GUI interface in conjunction with software to generate incident photon spectra.
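A hedged sketch of the spectrum-averaging step; weighting the mono-energetic B(E) values by the air-kerma spectrum is one common convention (the paper's exact weighting may differ), and all tabulated numbers below are placeholders rather than the PENELOPE data:

```python
# Sketch of convolving mono-energetic backscatter factors B(E) with an
# incident clinical spectrum; values are made-up placeholders.
import numpy as np

E = np.array([20., 30., 40., 50., 60., 80., 100.])             # keV grid
B_mono = np.array([1.10, 1.25, 1.35, 1.40, 1.42, 1.40, 1.38])  # placeholder
muen_rho_air = np.array([0.78, 0.35, 0.25, 0.21, 0.19, 0.16, 0.15])  # placeholder

def spectrum_averaged_B(phi):
    """Air-kerma-weighted mean of B over the incident fluence spectrum phi(E)."""
    w = phi * E * muen_rho_air
    return np.sum(w * B_mono) / np.sum(w)

# A crude 80 kVp-like bremsstrahlung shape, purely for demonstration:
phi = np.clip(80.0 - E, 0.0, None) * E
print(spectrum_averaged_B(phi))
```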
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lessard, Francois; Archambault, Louis; Plamondon, Mathieu
Purpose: Photon dosimetry in the kilovolt (kV) energy range represents a major challenge for diagnostic and interventional radiology and superficial therapy. Plastic scintillation detectors (PSDs) are potentially good candidates for this task. This study proposes a simple way to obtain accurate correction factors to compensate for the response of PSDs to photon energies between 80 and 150 kVp. The performance of PSDs is also investigated to determine their potential usefulness in the diagnostic energy range. Methods: A 1-mm-diameter, 10-mm-long PSD was irradiated by a Therapax SXT 150 unit using five different beam qualities, with tube potentials ranging from 80 to 150 kVp and filtration ranging from 0.8 mm Al to 0.2 mm Al + 1.0 mm Cu. The light emitted by the detector was collected using an 8-m-long optical fiber and a polychromatic photodiode, which converted the scintillation photons to an electrical current. The PSD response was compared with the reference free air dose rate measured with a calibrated Farmer NE2571 ionization chamber. PSD measurements were corrected using spectrum-weighted corrections, accounting for mass energy-absorption coefficient differences between the sensitive volumes of the ionization chamber and the PSD, as suggested by large cavity theory (LCT). Beam spectra were obtained from x-ray simulation software and validated experimentally using a CdTe spectrometer. Correction factors were also obtained using Monte Carlo (MC) simulations. Percent depth dose (PDD) measurements were compensated for beam hardening using the LCT correction method. These PDD measurements were compared with uncorrected PSD data, PDD measurements obtained using Gafchromic films, Monte Carlo simulations, and previous data. Results: For each beam quality used, the authors observed an increase of the energy response with effective energy when no correction was applied to the PSD response. Using the LCT correction, the PSD response was almost energy independent, with a residual 2.1% coefficient of variation (COV) over the 80-150 kVp energy range. Monte Carlo corrections reduced the COV to 1.4% over this energy range. All PDD measurements were in good agreement with one another except for the uncorrected PSD data, in which an over-response was observed with depth (13% at 10 cm with a 100 kVp beam), showing that beam hardening had a non-negligible effect on the PSD response. A correction based on LCT compensated very well for this effect, reducing the over-response to 3%. Conclusion: In the diagnostic energy range, PSDs show high energy dependence, which can be corrected using spectrum-weighted mass energy-absorption coefficients, showing no considerable sign of quenching between these energies. Correction factors obtained by Monte Carlo simulations confirm that the approximations made by LCT corrections are valid. Thus, PSDs could be useful for real-time dosimetry in radiology applications.
Ogawara, R; Ishikawa, M
2016-07-01
The anode pulse of a photomultiplier tube (PMT) coupled with a scintillator is used for pulse shape discrimination (PSD) analysis. We have developed a novel emulation technique for the PMT anode pulse based on optical photon transport and a PMT response function. The photon transport was calculated using the Geant4 Monte Carlo code, and the response function was obtained with a BC408 organic scintillator. The obtained percentage RMS values of the difference between the measured and simulated pulses, using suitable scintillation properties for GSO:Ce (0.4, 1.0, and 1.5 mol%), LaBr3:Ce, and BGO scintillators, were 2.41%, 2.58%, 2.16%, 2.01%, and 3.32%, respectively. The proposed technique demonstrates high reproducibility of the measured pulse and can be applied to simulation studies of various radiation measurements.
NASA Technical Reports Server (NTRS)
Thanedar, B. D.
1972-01-01
A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method addresses the need for a numerical method, applicable to a bounded medium, to supplement analytical methods of solution, which are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created, from which an algorithm was developed for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.
Paulin, A; Schneider, M; Dron, F; Woehrle, F
2018-02-01
The population pharmacokinetics of marbofloxacin were investigated with 52 plasma concentration-time profiles obtained after intramuscular administration of Forcyl® in cattle. The animal's status (pre-ruminant, ruminant, or dairy cow) was retained as a relevant covariate for clearance. Monte Carlo simulations were performed using stratification by status, and 1000 virtual disposition curves were generated in each bovine subpopulation for the recommended dosage regimen of 10 mg/kg as a single injection. The probability of target attainment (PTA) of pharmacokinetic/pharmacodynamic (PK/PD) ratios associated with clinical efficacy and prevention of resistance was determined in each simulated subpopulation. The cumulative fraction of response (CFR), the fraction of animals achieving a PK/PD ratio predictive of positive clinical outcome, was then calculated for the simulated dosage regimen, taking into account the minimum inhibitory concentration (MIC) distributions of Pasteurella multocida, Mannheimia haemolytica, and Histophilus somni. When considering a ratio AUC_(0-24 hr)/MIC (area under the curve/minimum inhibitory concentration) greater than 125 hr, CFRs ranging from 85% to 100% against the three Pasteurellaceae were achieved in each bovine subpopulation. The PTA of the PK/PD threshold reflecting the prevention of resistance was greater than 90% up to MPC (mutant prevention concentration) values of 1 μg/ml in pre-ruminants and ruminants and 0.5 μg/ml in dairy cows.
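A minimal sketch of the PTA/CFR computation under strongly simplified assumptions (log-normal clearance, AUC = dose/CL, a made-up MIC distribution); none of the numbers are the marbofloxacin data:

```python
# Sketch: simulate AUC values for a virtual population, then compute
# (i) PTA = fraction attaining AUC/MIC >= 125 h at a fixed MIC, and
# (ii) CFR = PTA averaged over an observed MIC distribution.
import numpy as np

rng = np.random.default_rng(3)

DOSE = 10.0                                      # mg/kg, single injection
CL = rng.lognormal(mean=np.log(0.12), sigma=0.3, size=1000)  # L/h/kg, assumed
auc = DOSE / CL                                  # AUC per animal, assumed model

def pta(auc, mic, target=125.0):
    return np.mean(auc / mic >= target)

# Hypothetical MIC distribution (mg/L -> frequency) for one pathogen:
mic_dist = {0.008: 0.10, 0.016: 0.35, 0.03: 0.30, 0.06: 0.15, 0.12: 0.10}

cfr = sum(freq * pta(auc, mic) for mic, freq in mic_dist.items())
print(f"CFR = {cfr:.1%}")
```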
X-ray microanalysis of porous materials using Monte Carlo simulations.
Poirier, Dominique; Gauvin, Raynald
2011-01-01
Quantitative X-ray microanalysis models, such as ZAF or φ(ρz) methods, are normally based on solid, flat-polished specimens. This limits their use in various domains where porous materials are studied, such as powder metallurgy, catalysts, foams, etc. Previous experimental studies have shown that an increase in porosity leads to a deficit in X-ray emission for various materials, such as graphite, Cr2O3, CuO, ZnS (Ichinokawa et al., '69), Al2O3, and Ag (Lakis et al., '92). However, the mechanisms responsible for this decrease are unclear. The porosity by itself does not explain the loss in intensity; other mechanisms have therefore been proposed, such as extra energy loss through the diffusion of electrons by surface plasmons generated at the pore-solid interfaces, surface roughness, extra charging at the pore-solid interfaces, or carbon diffusion in the pores. However, the exact mechanism is still unclear. In order to better understand the effects of porosity on quantitative microanalysis, a new approach using Monte Carlo simulations was developed by Gauvin (2005) using a constant pore size. In this new study, the X-ray emission model was modified to include a random log-normal distribution of pore sizes in the simulated materials. This article presents, after a literature review of the previous work on X-ray microanalysis of porous materials, some of the results obtained with Gauvin's modified model. They are then compared with experimental results.
Self-optimizing Monte Carlo method for nuclear well logging simulation
NASA Astrophysics Data System (ADS)
Liu, Lianyan
1997-09-01
In order to increase the efficiency of Monte Carlo simulation for nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated in the regular Monte Carlo calculation as a by-product, and the importance map is later used to conduct the splitting and Russian roulette for particle population control. By adopting a spatial mesh system which is independent of the physical geometrical configuration, the method allows superior user-friendliness. This new method is incorporated into the general purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test the performance of this new method. The calculations are sped up over analog simulation by factors of 120 and 2600, for the neutron porosity tool and for the gamma-ray lithology density log, respectively. The new method outperforms MCNP's cell-based weight window by a factor of 4-6, as measured by the converged figures-of-merit. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes quite some time to generate a reasonable importance map from an analog run, a good initial map can create significant CPU time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for the neutron and gamma-ray cases, respectively. The ability to learn a correct importance map is also demonstrated. Although false learning may happen, physical judgement aided by contributon maps can help diagnose it. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Because a very good initial importance map is always available after the first point has been calculated, high computing efficiency is maintained. The availability of contributon maps provides an easy way of understanding the logging measurement and analyzing the depth of investigation.
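The population-control step driven by an importance map can be sketched as follows, with placeholder importance values: a particle crossing into a more important region is split, and one crossing into a less important region plays Russian roulette, so that weight times importance stays roughly constant:

```python
# Sketch of importance-driven splitting / Russian roulette; the map values
# are placeholders for the map accumulated as a by-product of the run.
import numpy as np

rng = np.random.default_rng(4)

def population_control(weight, imp_old, imp_new):
    """Return a list of (weight, count) offspring for one region crossing."""
    r = imp_new / imp_old
    if r >= 1.0:                         # splitting
        n = int(r)
        if rng.random() < r - n:         # sample the fractional part
            n += 1
        return [] if n == 0 else [(weight / r, n)]
    # Russian roulette: survive with probability r, weight scaled by 1/r
    if rng.random() < r:
        return [(weight / r, 1)]
    return []

print(population_control(1.0, imp_old=1.0, imp_new=3.5))   # likely split
print(population_control(1.0, imp_old=4.0, imp_new=1.0))   # roulette
```

Both branches preserve the expected total weight, which is what keeps the game unbiased.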
Probabilistic wind/tornado/missile analyses for hazard and fragility evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y.J.; Reich, M.
Detailed analysis procedures and examples are presented for the probabilistic evaluation of hazard and fragility against high wind, tornado, and tornado-generated missiles. In the tornado hazard analysis, existing risk models are modified to incorporate various uncertainties including modeling errors. A significant feature of this paper is the detailed description of the Monte-Carlo simulation analyses of tornado-generated missiles. A simulation procedure, which includes the wind field modeling, missile injection, solution of flight equations, and missile impact analysis, is described with application examples.
Analysis of BaBar data for three meson tau decay modes using the Tauola generator
Shekhovtsova, Olga
2014-11-24
The hadronic current for the τ⁻ → π⁻π⁺π⁻ν_τ decay, calculated in the framework of Resonance Chiral Theory with an additional modification to include the σ meson, is described. In addition, the implementation into the Monte Carlo generator Tauola and the fitting strategy to obtain the model parameters using one-dimensional distributions are discussed. The results of the fit to the one-dimensional invariant mass spectrum of the BaBar data are presented.
GPS Radiation Measurements: Instrument Modeling and Simulation (Project w14_gpsradiation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, John P.
The following topics are covered: electron response simulations and typical calculated responses. Monte Carlo calculations of the response of future charged particle instruments (dosimeters) intended to measure the flux of charged particles in space were performed. The electron channels are called E1-E11, each of which is intended to detect a different range of electron energies. These instruments are on current and future GPS satellites.
MC generator HARDPING: Nuclear effects in hard interactions of leptons and hadrons with nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berdnikov, Ya. A.; Ivanov, A. E.; Kim, V. T.
2016-01-22
Hadron and lepton production in hard interactions of high-energy particles with nuclei is considered in the context of the development of the Monte Carlo generator HARDPING (Hard Probe Interaction Generator). Effects such as energy loss and multiple re-scattering of initial and produced hadrons and their constituents are taken into account. These effects are implemented in the current version of HARDPING. Data from the HERMES experiment on hadron production in lepton-nucleus collisions, and from E866 on muon pair production in proton-nucleus collisions, are described by the current version of HARDPING. Predictions from the recent version HARDPING 3.0 for lepton pair production at a proton beam energy of 120 GeV are presented.
Blunt, Nick S.; Neuscamman, Eric
2017-11-16
We present a simple and efficient wave function ansatz for the treatment of excited charge-transfer states in real-space quantum Monte Carlo methods. Using the recently-introduced variation-after-response method, this ansatz allows a crucial orbital optimization step to be performed beyond a configuration interaction singles expansion, while only requiring calculation of two Slater determinant objects. As a result, we demonstrate this ansatz for the illustrative example of the stretched LiF molecule, for a range of excited states of formaldehyde, and finally for the more challenging ethylene-tetrafluoroethylene molecule.
The Consequences of Ignoring Item Parameter Drift in Longitudinal Item Response Models
ERIC Educational Resources Information Center
Lee, Wooyeol; Cho, Sun-Joo
2017-01-01
Utilizing a longitudinal item response model, this study investigated the effect of item parameter drift (IPD) on item parameters and person scores via a Monte Carlo study. Item parameter recovery was investigated for various IPD patterns in terms of bias and root mean-square error (RMSE), and percentage of time the 95% confidence interval covered…
ERIC Educational Resources Information Center
Dai, Yunyun
2013-01-01
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
ERIC Educational Resources Information Center
Wang, Wen-Chung
2004-01-01
The Pearson correlation is used to depict effect sizes in the context of item response theory. A multidimensional Rasch model is used to directly estimate the correlation between latent traits. Monte Carlo simulations were conducted to investigate whether the population correlation could be accurately estimated and whether the bootstrap method…
Sung, Wonmo; Park, Jong In; Kim, Jung-In; Carlson, Joel; Ye, Sung-Joon; Park, Jong Min
2017-01-01
This study investigated the potential of a newly proposed scattering foil free (SFF) electron beam scanning technique for the treatment of skin cancer on the irregular patient surfaces using Monte Carlo (MC) simulation. After benchmarking of the MC simulations, we removed the scattering foil to generate SFF electron beams. Cylindrical and spherical phantoms with 1 cm boluses were generated and the target volume was defined from the surface to 5 mm depth. The SFF scanning technique with 6 MeV electrons was simulated using those phantoms. For comparison, volumetric modulated arc therapy (VMAT) plans were also generated with two full arcs and 6 MV photon beams. When the scanning resolution resulted in a larger separation between beams than the field size, the plan qualities were worsened. In the cylindrical phantom with a radius of 10 cm, the conformity indices, homogeneity indices and body mean doses of the SFF plans (scanning resolution = 1°) vs. VMAT plans were 1.04 vs. 1.54, 1.10 vs. 1.12 and 5 Gy vs. 14 Gy, respectively. Those of the spherical phantom were 1.04 vs. 1.83, 1.08 vs. 1.09 and 7 Gy vs. 26 Gy, respectively. The proposed SFF plans showed superior dose distributions compared to the VMAT plans.
A Monte Carlo study of fluorescence generation probability in a two-layered tissue model
NASA Astrophysics Data System (ADS)
Milej, Daniel; Gerega, Anna; Wabnitz, Heidrun; Liebert, Adam
2014-03-01
It was recently reported that the time-resolved measurement of diffuse reflectance and/or fluorescence during injection of an optical contrast agent may constitute a basis for a technique to assess cerebral perfusion. In this paper, we present results of Monte Carlo simulations of the propagation of excitation photons and tracking of fluorescence photons in a two-layered tissue model mimicking intra- and extracerebral tissue compartments. Spatial 3D distributions of the probability that the photons were converted from excitation to emission wavelength in a defined voxel of the medium (generation probability) during their travel between source and detector were obtained for different optical properties in intra- and extracerebral tissue compartments. It was noted that the spatial distribution of the generation probability depends on the distribution of the fluorophore in the medium and is influenced by the absorption of the medium and of the fluorophore at excitation and emission wavelengths. Simulations were also carried out for realistic time courses of the dye concentration in both layers. The results of the study show that the knowledge of the absorption properties of the medium at excitation and emission wavelengths is essential for the interpretation of the time-resolved fluorescence signals measured on the surface of the head.
A new model for approximating RNA folding trajectories and population kinetics
NASA Astrophysics Data System (ADS)
Kirkpatrick, Bonnie; Hajiaghayi, Monir; Condon, Anne
2013-01-01
RNA participates both in functional aspects of the cell and in gene regulation. The interactions of these molecules are mediated by their secondary structure which can be viewed as a planar circle graph with arcs for all the chemical bonds between pairs of bases in the RNA sequence. The problem of predicting RNA secondary structure, specifically the chemically most probable structure, has many useful and efficient algorithms. This leaves RNA folding, the problem of predicting the dynamic behavior of RNA structure over time, as the main open problem. RNA folding is important for functional understanding because some RNA molecules change secondary structure in response to interactions with the environment. The full RNA folding model on at most O(3^n) secondary structures is the gold standard. We present a new subset approximation model for the full model, give methods to analyze its accuracy and discuss the relative merits of our model as compared with a pre-existing subset approximation. The main advantage of our model is that it generates Monte Carlo folding pathways with the same probabilities with which they are generated under the full model. The pre-existing subset approximation does not have this property.
Specific loss power in superparamagnetic hyperthermia: nanofluid versus composite
NASA Astrophysics Data System (ADS)
Osaci, M.; Cacciola, M.
2017-01-01
Currently, magnetic hyperthermia induced by nanoparticles is of great interest in biomedical applications. In the literature, many models exist for magnetic hyperthermia, but a significant detail is often neglected: the geometry of the nanoparticle positions in the system. Usually, a nanofluid is treated by considering random positions of the nanoparticles, a geometry that is actually characteristic of composite nanoparticles. To assess the error that is frequently made, in this paper we propose a comparative analysis between the specific loss power (SLP) of a nanofluid and the SLP of a composite with magnetic nanoparticles. We use a superparamagnetic hyperthermia model based on an improved model for calculating the Néel relaxation time in a magnetic field oblique to the nanoparticle magnetic anisotropy axes, and on an improved linear response theory (LRT) model for the SLP. To generate the nanoparticle geometry in the system, we apply a Monte Carlo method: for a nanofluid, by minimising the interaction potentials in the liquid medium, and for a composite medium, by generating random positions of the nanoparticles in a given volume.
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
On the modeling of the 2010 Gulf of Mexico Oil Spill
NASA Astrophysics Data System (ADS)
Mariano, A. J.; Kourafalou, V. H.; Srinivasan, A.; Kang, H.; Halliwell, G. R.; Ryan, E. H.; Roffer, M.
2011-09-01
Two oil particle trajectory forecasting systems were developed and applied to the 2010 Deepwater Horizon Oil Spill in the Gulf of Mexico. Both systems use ocean current fields from high-resolution numerical ocean circulation model simulations, Lagrangian stochastic models to represent unresolved sub-grid scale variability to advect oil particles, and Monte Carlo-based schemes for representing uncertain biochemical and physical processes. The first system assumes two-dimensional particle motion at the ocean surface, the oil is in one state, and particle removal is modeled as a Monte Carlo process parameterized by a single removal rate. Oil particles are seeded using both initial conditions based on observations and particles released at the location of the Macondo well. The initial conditions (ICs) of oil particle locations for the two-dimensional surface oil trajectory forecasts are based on a fusion of all available information, including satellite-based analyses. The resulting oil map is digitized into a shape file within which a polygon-filling software generates longitudes and latitudes with variable particle density depending on the amount of oil present in the observations for the IC. The more complex system assumes three states for the oil (light, medium, heavy), each state having a different removal rate in the Monte Carlo process, three-dimensional particle motion, and a particle size-dependent oil mixing model. Simulations from the two-dimensional forecast system produced results that qualitatively agreed with the uncertain "truth" fields. These simulations validated the use of our Monte Carlo scheme for representing oil removal by evaporation and other weathering processes. Eulerian velocity fields from data-assimilative models produced better particle trajectory distributions than a free-running model with no data assimilation. Monte Carlo simulations of the three-dimensional oil particle trajectories were performed, with ensembles generated by perturbing the size of the oil particles and the fraction in a given size range released at depth, the two largest unknowns in this problem. Thirty-six realizations of the model were run with only subsurface oil releases. An average of these results yields that, after three months, about 25% of the oil remains in the water column and most of the oil is below 800 m.
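The single-rate removal process of the two-dimensional system can be sketched as a per-step survival draw; the removal rate, release point, and random-walk advection below are placeholders for the circulation-model currents and are not values from the study:

```python
# Hedged sketch: Monte Carlo removal (weathering) of surface oil particles.
# Each particle survives a step of length dt with probability exp(-r*dt);
# advection is reduced to a toy random walk.
import numpy as np

rng = np.random.default_rng(5)

R_REMOVAL = 1.0 / 7.0          # 1/day e-folding removal rate, assumed
DT = 0.25                      # days

lon = rng.normal(-88.4, 0.05, size=10000)   # initial particle cloud,
lat = rng.normal(28.7, 0.05, size=10000)    # illustrative coordinates

for step in range(int(30 / DT)):            # 30-day simulation
    keep = rng.random(lon.size) < np.exp(-R_REMOVAL * DT)
    lon, lat = lon[keep], lat[keep]
    # placeholder for advection by model currents + stochastic sub-grid term
    lon += rng.normal(0.0, 0.01, lon.size)
    lat += rng.normal(0.0, 0.01, lat.size)

print(f"{lon.size} of 10000 particles remain after 30 days")
```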
Monte Carlo based, patient-specific RapidArc QA using Linac log files.
Teke, Tony; Bergman, Alanah M; Kwa, William; Gill, Bradford; Duzenli, Cheryl; Popescu, I Antoniu
2010-01-01
A Monte Carlo (MC) based QA process to validate the dynamic beam delivery accuracy for Varian RapidArc (Varian Medical Systems, Palo Alto, CA) using Linac delivery log files (DynaLog) is presented. Using DynaLog file analysis and MC simulations, the goal of this article is to (a) confirm that adequate sampling is used in the RapidArc optimization algorithm (177 static gantry angles) and (b) assess the physical machine performance [gantry angle and monitor unit (MU) delivery accuracy]. Ten clinically acceptable RapidArc treatment plans were generated for various tumor sites and delivered to a water-equivalent cylindrical phantom on the treatment unit. Three Monte Carlo simulations were performed to calculate dose to the CT phantom image set: (a) one using a series of static gantry angles defined by 177 control points with treatment planning system (TPS) MLC control files (planning files), (b) one using continuous gantry rotation with TPS generated MLC control files, and (c) one using continuous gantry rotation with actual Linac delivery log files. Monte Carlo simulated dose distributions are compared to both ionization chamber point measurements and RapidArc TPS calculated doses. The 3D dose distributions were compared using a 3D gamma-factor analysis, employing a 3%/3 mm distance-to-agreement criterion. The dose difference between MC simulations, TPS, and ionization chamber point measurements was less than 2.1%. For all plans, the MC calculated 3D dose distributions agreed well with the TPS calculated doses (gamma-factor values were less than 1 for more than 95% of the points considered). Machine performance QA was supplemented with an extensive DynaLog file analysis, which showed that leaf position errors were less than 1 mm for 94% of the time and that there were no leaf errors greater than 2.5 mm. The mean standard deviations in MU and gantry angle were 0.052 MU and 0.355 degrees, respectively, for the ten cases analyzed. The accuracy and flexibility of the Monte Carlo based RapidArc QA system were demonstrated. Good machine performance and accurate dose distribution delivery of RapidArc plans were observed. The sampling used in the TPS optimization algorithm was found to be adequate.
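A brute-force 1-D version of the 3%/3 mm gamma analysis used above; clinical tools use optimized 3-D implementations, and the profiles here are toys:

```python
# Minimal 1-D gamma-index sketch with a 3%/3 mm criterion: for each
# evaluated point, gamma is the minimum combined dose/distance metric
# over all reference points.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """Return gamma at each evaluated point (dd: dose fraction, dta: mm)."""
    d_norm = dd * d_ref.max()
    gam = np.empty_like(d_eval)
    for i, (xe, de) in enumerate(zip(x_eval, d_eval)):
        cap_gamma = np.sqrt(((x_ref - xe) / dta) ** 2
                            + ((d_ref - de) / d_norm) ** 2)
        gam[i] = cap_gamma.min()
    return gam

x = np.linspace(-20, 20, 401)              # mm
ref = np.exp(-x**2 / 50.0)                 # toy reference profile
ev = np.exp(-(x - 0.5)**2 / 50.0) * 1.01   # shifted/scaled evaluated profile
g = gamma_1d(x, ref, x, ev)
print(f"pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
```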
Hou, Xianlong; Hodges, Ben R; Feng, Dongyu; Liu, Qixiao
2017-03-15
As oil transport increases in the Texas bays, the growing risk of ship collisions will become a challenge, yielding oil spill accidents as a consequence. To minimize the ecological damage and optimize rapid response, emergency managers need to be informed of how fast and where oil will spread as soon as possible after a spill. The state-of-the-art operational oil spill forecast modeling system brings oil spill response to a new stage. However, uncertainty in the predicted data inputs often compromises the reliability of the forecast result, leading to misdirection in contingency planning. Thus, understanding the forecast uncertainty and reliability becomes significant. In this paper, Monte Carlo simulation is implemented to provide parameters to generate forecast probability maps. The oil spill forecast uncertainty is thus quantified by comparing the forecast probability map and the associated hindcast simulation. A HyosPy-based simple statistical model is developed to assess the reliability of an oil spill forecast in terms of belief degree. The technologies developed in this study create a prototype for uncertainty and reliability analysis in numerical oil spill forecast modeling systems, enabling emergency managers to improve the capability of real-time operational oil spill response and impact assessment.
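Turning a Monte Carlo ensemble into a forecast probability map amounts to counting, per grid cell, the fraction of ensemble members whose trajectories touch it; the toy trajectory generator below stands in for perturbed HyosPy model runs:

```python
# Sketch of a forecast probability map from an ensemble of trajectories;
# the drift/spread numbers and grid are illustrative only.
import numpy as np

rng = np.random.default_rng(6)

GRID = np.zeros((50, 50))
N_ENS = 200

for member in range(N_ENS):
    # toy drift + spread; real members come from perturbed forecast inputs
    x, y = 10.0, 25.0
    hit = np.zeros_like(GRID, dtype=bool)
    for _ in range(100):
        x += 0.3 + rng.normal(0, 0.4)
        y += rng.normal(0, 0.4)
        i, j = int(np.clip(y, 0, 49)), int(np.clip(x, 0, 49))
        hit[i, j] = True
    GRID += hit

prob_map = GRID / N_ENS      # P(cell touched by oil) under the ensemble
```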
Spectral response characterization of CdTe sensors of different pixel size with the IBEX ASIC
NASA Astrophysics Data System (ADS)
Zambon, P.; Radicci, V.; Trueb, P.; Disch, C.; Rissi, M.; Sakhelashvili, T.; Schneebeli, M.; Broennimann, C.
2018-06-01
We characterized the spectral response of CdTe sensors with different pixel sizes - namely 75, 150 and 300 μm - bonded to the latest generation IBEX single photon counting ASIC developed at DECTRIS, to detect monochromatic X-ray energy in the range 10-60 keV. We present a comparison of pulse height spectra recorded for several energies, showing the dependence on the pixel size of the non-trivial atomic fluorescence and charge sharing effects that affect the detector response. The extracted energy resolution, in terms of full width at half maximum (FWHM), ranges from 1.5 to 4 keV according to the pixel size and chip configuration. We devoted a careful analysis to the Quantum Efficiency and to the Spectral Efficiency - a newly-introduced measure that quantifies the impact of fluorescence and escape phenomena on the spectrum integrity in high-Z material based detectors. We then investigated the influence of the photon flux on the aforementioned quantities up to 180 × 10^6 cts/s/mm^2 and 50 × 10^6 cts/s/mm^2 for the 150 μm and 300 μm pixel cases, respectively. Finally, we complemented the experimental data with analytical and Monte Carlo simulations - taking into account the stochastic nature of atomic fluorescence - with excellent agreement.
NASA Technical Reports Server (NTRS)
Mei, Chuh; Moorthy, Jayashree
1995-01-01
A time-domain study of the random response of a laminated plate subjected to combined acoustic and thermal loads is carried out. The features of this problem also include given uniform static inplane forces. The formulation takes into consideration a possible initial imperfection in the flatness of the plate. High decibel sound pressure levels along with high thermal gradients across the thickness drive the plate response into nonlinear regimes. This calls for the analysis to use von Karman large deflection strain-displacement relationships. A finite element model that combines the von Karman strains with the first-order shear deformation plate theory is developed. The analytical model can accommodate an anisotropic composite laminate built up of uniformly thick layers of orthotropic, linearly elastic laminae. The global system of finite element equations is then reduced to a modal system of equations. Numerical simulation using a single-step algorithm in the time domain is then carried out to solve for the modal coordinates. Nonlinear algebraic equations within each time step are solved by the Newton-Raphson method. The random Gaussian filtered white noise load is generated using Monte Carlo simulation. The acoustic pressure distribution over the plate is capable of accounting for a grazing incidence wavefront. Numerical results are presented to study a variety of cases.
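A minimal sketch of the Monte Carlo load-generation step described above: band-limited Gaussian white noise is synthesized by filtering a white sequence in the frequency domain and scaling the realization to a target sound pressure level. The sampling rate, band edges, and 150 dB target are assumptions for the example.

import numpy as np

rng = np.random.default_rng(2)
fs, duration = 8192.0, 2.0                    # sample rate (Hz) and length (s), assumed
n = int(fs * duration)

x = rng.standard_normal(n)                    # white Gaussian noise
X = np.fft.rfft(x)
f = np.fft.rfftfreq(n, d=1.0 / fs)
X[(f < 50.0) | (f > 2000.0)] = 0.0            # ideal band-pass, 50-2000 Hz (assumed)
p = np.fft.irfft(X, n)                        # band-limited pressure time history

spl_target = 150.0                            # dB re 20 uPa, assumed target
p_rms_target = 20e-6 * 10.0 ** (spl_target / 20.0)
p *= p_rms_target / np.sqrt(np.mean(p**2))    # scale realization to the target SPL
print(f"rms pressure: {np.sqrt(np.mean(p**2)):.1f} Pa")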
NASA Astrophysics Data System (ADS)
Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; An, M.; Andrei, C.; Andrews, H. A.; Andronic, A.; Anguelov, V.; Anson, C.; Antičić, T.; Antinori, F.; Antonioli, P.; Anwar, R.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barioglio, L.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Beltran, L. G. E.; Belyaev, V.; Bencedi, G.; Beole, S.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Boldizsár, L.; Bombara, M.; Bonora, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buhler, P.; Buitron, S. A. I.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Caines, H.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Capon, A. A.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; del Valle, Z. Conesa; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crkovská, J.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; De Souza, R. D.; Degenhardt, H. F.; Deisting, A.; Deloff, A.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Di Ruzza, B.; Corchero, M. A. Diaz; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Duggal, A. K.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erhardt, F.; Espagnon, B.; Esumi, S.; Eulisse, G.; Eum, J.; Evans, D.; Evdokimov, S.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Francisco, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Girard, M. Fusco; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gajdosova, K.; Gallio, M.; Galvan, C. D.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Garg, K.; Garg, P.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Ducati, M. B. Gay; Germain, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Graham, K. L.; Greiner, L.; Grelli, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grion, N.; Gronefeld, J. M.; Grosa, F.; Grosse-Oetringhaus, J. F.; Grosso, R.; Gruber, L.; Grull, F. R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Guzman, I. B.; Haake, R.; Hadjidakis, C.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Herrmann, F.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Hladky, J.; Horak, D.; Hosokawa, R.; Hristov, P.; Hughes, C.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Ippolitov, M.; Irfan, M.; Isakov, V.; Islam, M. S.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacak, B.; Jacazio, N.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jercic, M.; Bustamante, R. T. Jimenez; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kang, J. H.; Kaplin, V.; Kar, S.; Uysal, A. Karasu; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Mohisin Khan, M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Khatun, A.; Khuntia, A.; Kielbowicz, M. M.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, H.; Kim, J. S.; Kim, J.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Meethaleveedu, G. Koyithatta; Králik, I.; Kravčáková, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kundu, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. 
L.; La Rocca, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lapidus, K.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lavicka, R.; Lazaridis, L.; Lea, R.; Leardini, L.; Lee, S.; Lehas, F.; Lehner, S.; Lehrbach, J.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Litichevskyi, V.; Ljunggren, H. M.; Llope, W. J.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Loncar, P.; Lopez, X.; Torres, E. López; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lupi, M.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Cervantes, I. Maldonado; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Mao, Y.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Pedreira, M. Martinez; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Mathis, A. M.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzilli, M.; Mazzoni, M. A.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Mhlanga, S.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Mishra, T.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montes, E.; De Godoy, D. A. Moreira; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Münning, K.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Myers, C. J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Negrao De Oliveira, R. A.; Nellen, L.; Nesbo, S. V.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Ohlson, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pacik, V.; Pagano, D.; Pagano, P.; Paić, G.; Pal, S. K.; Palni, P.; Pan, J.; Pandey, A. K.; Panebianco, S.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, J.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Peng, X.; Pereira, L. G.; Pereira Da Costa, H.; Peresunko, D.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Pezzi, R. P.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Poppenborg, H.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Pozdniakov, V.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Rana, D. B.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. 
T.; Rathee, D.; Ratza, V.; Ravasenga, I.; Read, K. F.; Redlich, K.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rodríguez Cahuantzi, M.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandoval, A.; Sarkar, D.; Sarkar, N.; Sarma, P.; Sas, M. H. P.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schmidt, M. O.; Schmidt, M.; Schukraft, J.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sett, P.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singhal, V.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Soramel, F.; Sorensen, S.; Sozzi, F.; Spiriti, E.; Sputowska, I.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Suzuki, K.; Swain, S.; Szabo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Muñoz, G. Tejeda; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thakur, D.; Thomas, D.; Tieulent, R.; Tikhonov, A.; Timmins, A. R.; Toia, A.; Tripathy, S.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Trzeciak, B. A.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Umaka, E. N.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vázquez Doce, O.; Vechernin, V.; Veen, A. M.; Velure, A.; Vercellin, E.; Limón, S. Vergara; Vernet, R.; Vértesi, R.; Vickovic, L.; Vigolo, S.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Virgili, T.; Vislavicius, V.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Voscek, D.; Vranic, D.; Vrláková, J.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Willems, G. A.; Williams, M. C. S.; Windelband, B.; Witt, W. E.; Yalcin, S.; Yang, P.; Yano, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. 
C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, X.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zimmermann, S.; Zinovjev, G.; Zmeskal, J.
2017-08-01
Two-particle angular correlations were measured in pp collisions at √s = 7 TeV for pions, kaons, protons, and lambdas, for all particle/anti-particle combinations in the pair. Data for mesons exhibit an expected peak dominated by effects associated with mini-jets and are well reproduced by general purpose Monte Carlo generators. However, for baryon-baryon and anti-baryon-anti-baryon pairs, where both particles have the same baryon number, a near-side anti-correlation structure is observed instead of a peak. This effect is interpreted in the context of baryon production mechanisms in the fragmentation process. It currently presents a challenge to Monte Carlo models and its origin remains an open question.
Kinetic Monte Carlo Simulations of Oxygen Diffusion in Environmental Barrier Coating Materials
NASA Technical Reports Server (NTRS)
Good, Brian S.
2017-01-01
Ceramic Matrix Composite (CMC) materials are of interest for use in next-generation turbine engine components, offering a number of significant advantages, including reduced weight and high operating temperatures. However, in the hot environment in which such components operate, the presence of water vapor can lead to corrosion and recession, limiting the useful life of the components. Such degradation can be reduced through the use of Environmental Barrier Coatings (EBCs) that limit the amount of oxygen and water vapor reaching the component. Candidate EBC materials include yttrium and ytterbium silicates. In this work we present results of kinetic Monte Carlo (kMC) simulations of oxygen diffusion, via the vacancy mechanism, in yttrium and ytterbium disilicates, along with a brief discussion of interstitial diffusion.
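A minimal residence-time (BKL) kinetic Monte Carlo sketch of vacancy-mediated hopping on a cubic lattice follows; the hop barriers, attempt frequency, temperature, and jump length are hypothetical values, not the yttrium/ytterbium disilicate parameters of the study.

import numpy as np

rng = np.random.default_rng(3)
kT = 8.617e-5 * 1500.0                     # eV, at an assumed 1500 K
nu0 = 1.0e13                               # attempt frequency (1/s), assumed
barriers = np.array([0.8, 1.0, 1.2, 1.0, 0.9, 1.1])    # eV per hop direction (toy)
moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]])

rates = nu0 * np.exp(-barriers / kT)       # Arrhenius rate for each hop
cum = np.cumsum(rates)

pos, t = np.zeros(3), 0.0
for _ in range(100_000):
    k = np.searchsorted(cum, rng.random() * cum[-1])   # pick a hop with prob ~ rate
    pos += moves[k]
    t += -np.log(rng.random()) / cum[-1]               # residence-time clock advance

a = 3.0e-10                                # jump length (m), assumed
D = (pos @ pos) * a**2 / (6.0 * t)         # single-trajectory estimate from <r^2> = 6Dt
print(f"estimated D = {D:.2e} m^2/s over t = {t:.1e} s")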
NASA Astrophysics Data System (ADS)
Burlon, Alejandro A.; Girola, Santiago; Valda, Alejandro A.; Minsky, Daniel M.; Kreiner, Andrés J.
2010-08-01
In the frame of the construction of a Tandem Electrostatic Quadrupole Accelerator facility devoted to Accelerator-Based Boron Neutron Capture Therapy, a Beam Shaping Assembly has been characterized by means of Monte-Carlo simulations and measurements. The neutrons were generated via the 7Li(p,n)7Be reaction by irradiating a thick LiF target with a 2.3 MeV proton beam delivered by the TANDAR accelerator at CNEA. The emerging neutron flux was measured by means of activation foils, while the beam quality and directionality were evaluated by means of Monte Carlo simulations. The parameters show compliance with those suggested by the IAEA. Finally, an improvement obtained by adding a beam collimator was evaluated.
Size and habit evolution of PETN crystals - a lattice Monte Carlo study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zepeda-Ruiz, L A; Maiti, A; Gee, R
2006-02-28
Starting from an accurate inter-atomic potential, we develop a simple scheme for generating an "on-lattice" molecular potential of short range, which is then incorporated into a lattice Monte Carlo code for simulating the size and shape evolution of nanocrystallites. As a specific example, we test this procedure on the morphological evolution of a molecular crystal of interest to us, Pentaerythritol Tetranitrate (PETN), and obtain realistic faceted structures in excellent agreement with experimental morphologies. We investigate several interesting effects, including the evolution of the initial shape of a "seed" to an equilibrium configuration and the variation of growth morphology as a function of the rate of particle addition relative to diffusion.
NASA Astrophysics Data System (ADS)
Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Ledoux, X.; Laurent, B.; Thomas, J.-C.; Clerc, T.; Desmezières, V.; Dupuis, M.; Madeline, A.; Dessay, E.; Grinyer, G. F.; Grinyer, J.; Menard, N.; Porée, F.; Achouri, L.; Delaunay, F.; Parlog, M.
2018-07-01
Double differential neutron spectra (energy, angle) originating from a thick natCu target bombarded by a 12 MeV/nucleon 36S16+ beam were measured by the activation method and the time-of-flight technique at the Grand Accélérateur National d'Ions Lourds (GANIL). A neutron spectrum unfolding algorithm combining the SAND-II iterative method and Monte-Carlo techniques was developed for the analysis of the activation results, which cover a wide range of neutron energies. It was implemented in a graphical user interface program called GanUnfold. The experimental neutron spectra are compared to Monte-Carlo simulations performed using the PHITS and FLUKA codes.
Self-learning Monte Carlo method and cumulative update in fermion systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Junwei; Shen, Huitao; Qi, Yang
2017-06-07
In this study, we develop the self-learning Monte Carlo (SLMC) method, a general-purpose numerical method recently introduced to simulate many-body systems, for studying interacting fermion systems. Our method uses a highly efficient update algorithm, which we design and dub “cumulative update”, to generate new candidate configurations in the Markov chain based on a self-learned bosonic effective model. From a general analysis and a numerical study of the double exchange model as an example, we find that the SLMC with cumulative update drastically reduces the computational cost of the simulation, while remaining statistically exact. Remarkably, its computational complexity is far less than that of the conventional algorithm with local updates.
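A minimal sketch of the SLMC acceptance step on a 1D toy weight follows: proposals are drawn from a cheap self-learned effective model (sampled directly here, rather than built from a long chain of effective-model local updates as in the cumulative update) and corrected by a Metropolis-Hastings ratio so that sampling of the true weight stays statistically exact. The double-well weight and Gaussian-mixture effective model are illustrative, not the double exchange model.

import numpy as np

rng = np.random.default_rng(4)
sigma = np.sqrt(0.3)

def log_w_true(x):
    # expensive "true" weight (toy double well), stand-in for the fermion weight
    return -(x**2 - 1.0) ** 2

def log_p_eff(x):
    # self-learned effective model: equal-weight Gaussian mixture at +/- 1
    return np.logaddexp(-0.5 * (x - 1.0)**2 / sigma**2,
                        -0.5 * (x + 1.0)**2 / sigma**2)

def propose(rng):
    # global move drawn from the effective model
    center = 1.0 if rng.random() < 0.5 else -1.0
    return rng.normal(center, sigma)

x, acc, n = 1.0, 0, 50_000
samples = np.empty(n)
for i in range(n):
    xp = propose(rng)
    # Metropolis-Hastings correction makes sampling of the true weight exact
    log_a = (log_w_true(xp) - log_w_true(x)) + (log_p_eff(x) - log_p_eff(xp))
    if np.log(rng.random()) < log_a:
        x, acc = xp, acc + 1
    samples[i] = x
print(f"acceptance: {acc / n:.2f}, <x^2> = {np.mean(samples**2):.3f}")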
Iterative Monte Carlo analysis of spin-dependent parton distributions
Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...
2016-04-05
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
SAXS study of ion tracks in San Carlos olivine and Durango apatite
NASA Astrophysics Data System (ADS)
Afra, B.; Rodriguez, M. D.; Lang, M.; Ewing, R. C.; Kirby, N.; Trautmann, C.; Kluth, P.
2012-09-01
Ion tracks were generated in crystalline San Carlos olivine (Mg,Fe)2SiO4 and Durango apatite Ca10(PO4)6F2 using different heavy ions (58Ni, 101Ru, 129Xe, 197Au, and 238U) with energies ranging between 185 MeV and 2.6 GeV. The tracks and their annealing behavior were studied by means of synchrotron based small angle X-ray scattering in combination with in situ annealing. Track radii vary as a function of electronic energy loss but are very similar in both minerals. Furthermore, the annealing behavior of the track radii has been investigated and preliminary results reveal a lower recovery rate of the damaged area in olivine compared with apatite.
Distance between configurations in Markov chain Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Fukuma, Masafumi; Matsumoto, Nobuyuki; Umeda, Naoya
2017-12-01
For a given Markov chain Monte Carlo algorithm we introduce a distance between two configurations that quantifies the difficulty of transition from one configuration to the other. We argue that the distance takes a universal form for the class of algorithms which generate local moves in the configuration space. We explicitly calculate the distance for the Langevin algorithm, and show that it indeed has the desired and expected properties of a distance. We further show that the distance for a multimodal distribution gets dramatically reduced from a large value by the introduction of a tempering method. We also argue that, when the original distribution is highly multimodal with a large number of degenerate vacua, an anti-de Sitter-like geometry naturally emerges in the extended configuration space.
NASA Technical Reports Server (NTRS)
Marshall, C. J.; Marshall, P. W.; Howe, C. L.; Reed, R. A.; Weller, R. A.; Mendenhall, M.; Waczynski, A.; Ladbury, R.; Jordan, T. M.
2007-01-01
This paper presents a combined Monte Carlo and analytic approach to the calculation of the pixel-to-pixel distribution of proton-induced damage in a HgCdTe sensor array and compares the results to measured dark current distributions after damage by 63 MeV protons. The moments of the Coulombic, nuclear elastic and nuclear inelastic damage distributions were extracted from Monte Carlo simulations and combined to form a damage distribution using the analytic techniques first described in [1]. The calculations show that the high energy recoils from the nuclear inelastic reactions (calculated using the Monte Carlo code MCNPX [2]) produce a pronounced skewing of the damage energy distribution. While the nuclear elastic component (also calculated using the MCNPX) contributes only a small fraction of the total nonionizing damage energy, its inclusion in the shape of the damage across the array is significant. The Coulombic contribution was calculated using MRED [3-5], a Geant4 [4,6] application. The comparison with the dark current distribution strongly suggests that mechanisms which are not linearly correlated with nonionizing damage produced according to collision kinematics are responsible for the observed dark current increases. This has important implications for the process of predicting the on-orbit dark current response of the HgCdTe sensor array.
Scenario generation for stochastic optimization problems via the sparse grid method
Chen, Michael; Mehrotra, Sanjay; Papp, David
2015-04-19
We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
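The scenario-count comparison above can be illustrated with a minimal sketch contrasting plain MC with scrambled Sobol QMC on a smooth product integrand whose exact value is 1; the sparse grid construction itself is not reproduced here, and the dimension and sample size are arbitrary choices.

import numpy as np
from scipy.stats import qmc

dim, n = 16, 4096                                   # n a power of two for Sobol
f = lambda u: np.prod(1.5 * u**2 + 0.5, axis=1)     # each factor integrates to 1

rng = np.random.default_rng(5)
mc_est = f(rng.random((n, dim))).mean()             # plain Monte Carlo estimate

sobol = qmc.Sobol(d=dim, scramble=True, seed=5)
qmc_est = f(sobol.random(n)).mean()                 # quasi-Monte Carlo estimate

print(f"MC  error: {abs(mc_est - 1.0):.2e}")
print(f"QMC error: {abs(qmc_est - 1.0):.2e}")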
Potentials of mean force for biomolecular simulations: Theory and test on alanine dipeptide
NASA Astrophysics Data System (ADS)
Pellegrini, Matteo; Grønbech-Jensen, Niels; Doniach, Sebastian
1996-06-01
We describe a technique for generating potentials of mean force (PMF) between solutes in an aqueous solution. We first generate solute-solvent correlation functions (CF) using Monte Carlo (MC) simulations in which we place a single atom solute in a periodic boundary box containing a few hundred water molecules. We then make use of the Kirkwood superposition approximation, where the 3-body correlation function is approximated as the product of 2-body CFs, to describe the mean water density around two solutes. Computing the force generated on the solutes by this average water density allows us to compute potentials of mean force between the two solutes. For charged solutes an additional approximation involving dielectric screening is made, by setting the dielectric constant of water to ɛ=80. These potentials account, in an approximate manner, for the average effect of water on the atoms. Following the work of Pettitt and Karplus [Chem. Phys. Lett. 121, 194 (1985)], we approximate the n-body potential of mean force as a sum of the pairwise potentials of mean force. This allows us to run simulations of biomolecules without introducing explicit water, hence gaining several orders of magnitude in efficiency with respect to standard molecular dynamics techniques. We demonstrate the validity of this technique by first comparing the PMFs for methane-methane and sodium-chloride generated with this procedure, with those calculated with a standard Monte Carlo simulation with explicit water. We then compare the results of the free energy profiles between the equilibria of alanine dipeptide generated by the two methods.
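For reference, the Kirkwood superposition step and the pairwise PMF approximation invoked above can be written schematically, with g the pair correlation function and w_2 the pair potential of mean force (the notation here is ours, inferred from the abstract):

g^{(3)}(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3) \approx g(r_{12})\, g(r_{13})\, g(r_{23}),
\qquad w_2(r) = -k_B T \ln g(r),
\qquad W_n \approx \sum_{i<j} w_2(r_{ij}).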
NASA Astrophysics Data System (ADS)
Wallace, Jon Michael
2003-10-01
Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
NASA Astrophysics Data System (ADS)
Rose, Michael Benjamin
A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical formulations that are discussed are applicable to ascent on Earth or other planets as well as other rocket-powered systems such as sounding rockets and ballistic missiles.
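A minimal sketch of the Monte Carlo versus linear covariance cross-check, on a toy two-state linear system rather than the 6-DOF ascent model: the dispersion covariance is propagated analytically as P <- F P F^T + Q and compared with the sample covariance of an ensemble pushed through the same dynamics.

import numpy as np

rng = np.random.default_rng(6)
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])                 # position/velocity transition, dt = 0.1 (toy)
q_vel = 1e-4                               # velocity process noise variance, assumed
P = np.diag([1e-2, 1e-2])                  # initial dispersion covariance, assumed

X = rng.multivariate_normal([0.0, 0.0], P, size=20_000)   # Monte Carlo ensemble
for _ in range(100):
    P = F @ P @ F.T + np.diag([0.0, q_vel])               # linear covariance step
    X = X @ F.T
    X[:, 1] += rng.normal(0.0, np.sqrt(q_vel), len(X))    # matching noise injection

P_mc = np.cov(X.T)
print("lincov position sigma:", np.sqrt(P[0, 0]))
print("MC     position sigma:", np.sqrt(P_mc[0, 0]))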
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
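A minimal sketch of the spectrum-derivation step: the measured PDD is modeled as a weighted sum of mono-energetic depth-dose kernels and the energy-bin weights are recovered with a Levenberg-Marquardt least-squares fit. The exponential kernels, attenuation coefficients, and noise level are toy stand-ins for Monte Carlo-computed kernels and clinical measurements.

import numpy as np
from scipy.optimize import least_squares

depth = np.linspace(1.0, 20.0, 40)                # depth in water, cm
mu = np.array([0.25, 0.20, 0.15, 0.10])           # 1/cm, one per energy bin (toy)
kernels = np.exp(-np.outer(depth, mu))            # mono-energetic PDD kernels

w_true = np.array([0.1, 0.3, 0.4, 0.2])           # "unknown" spectrum weights
rng = np.random.default_rng(7)
measured = kernels @ w_true * (1.0 + 0.01 * rng.standard_normal(depth.size))

fit = least_squares(lambda w: kernels @ w - measured,
                    x0=np.full(4, 0.25), method="lm")   # Levenberg-Marquardt
print("fitted spectrum weights:", np.round(fit.x, 3))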
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond
2016-04-15
Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
Paganetti, H; Jiang, H; Lee, S Y; Kooy, H M
2004-07-01
Monte Carlo dosimetry calculations are essential methods in radiation therapy. To take full advantage of this tool, the beam delivery system has to be simulated in detail and the initial beam parameters have to be known accurately. The modeling of the beam delivery system itself opens various areas where Monte Carlo calculations prove extremely helpful, such as for design and commissioning of a therapy facility as well as for quality assurance verification. The gantry treatment nozzles at the Northeast Proton Therapy Center (NPTC) at Massachusetts General Hospital (MGH) were modeled in detail using the GEANT4.5.2 Monte Carlo code. For this purpose, various novel solutions for simulating irregular shaped objects in the beam path, like contoured scatterers, patient apertures or patient compensators, were found. The four-dimensional, in time and space, simulation of moving parts, such as the modulator wheel, was implemented. Further, the appropriate physics models and cross sections for proton therapy applications were defined. We present comparisons between measured data and simulations. These show that by modeling the treatment nozzle with millimeter accuracy, it is possible to reproduce measured dose distributions with an accuracy in range and modulation width, in the case of a spread-out Bragg peak (SOBP), of better than 1 mm. The excellent agreement demonstrates that the simulations can even be used to generate beam data for commissioning treatment planning systems. The Monte Carlo nozzle model was used to study mechanical optimization in terms of scattered radiation and secondary radiation in the design of the nozzles. We present simulations on the neutron background. Further, the Monte Carlo calculations supported commissioning efforts in understanding the sensitivity of beam characteristics and how these influence the dose delivered. We present the sensitivity of dose distributions in water with respect to various beam parameters and geometrical misalignments. This allows the definition of tolerances for quality assurance and the design of quality assurance procedures.
NASA Astrophysics Data System (ADS)
Holmes, Jesse Curtis
Nuclear data libraries provide fundamental reaction information required by nuclear system simulation codes. The inclusion of data covariances in these libraries allows the user to assess uncertainties in system response parameters as a function of uncertainties in the nuclear data. Formats and procedures are currently established for representing covariances for various types of reaction data in ENDF libraries. This covariance data is typically generated utilizing experimental measurements and empirical models, consistent with the method of parent data production. However, ENDF File 7 thermal neutron scattering library data is, by convention, produced theoretically through fundamental scattering physics model calculations. Currently, there is no published covariance data for ENDF File 7 thermal libraries. Furthermore, no accepted methodology exists for quantifying or representing uncertainty information associated with this thermal library data. The quality of thermal neutron inelastic scattering cross section data can be of high importance in reactor analysis and criticality safety applications. These cross sections depend on the material's structure and dynamics. The double-differential scattering law, S(alpha, beta), tabulated in ENDF File 7 libraries contains this information. For crystalline solids, S(alpha, beta) is primarily a function of the material's phonon density of states (DOS). Published ENDF File 7 libraries are commonly produced by calculation and processing codes, such as the LEAPR module of NJOY, which utilize the phonon DOS as the fundamental input for inelastic scattering calculations to directly output an S(alpha, beta) matrix. To determine covariances for the S(alpha, beta) data generated by this process, information about uncertainties in the DOS is required. The phonon DOS may be viewed as a probability density function of atomic vibrational energy states that exist in a material. Probable variation in the shape of this spectrum may be established that depends on uncertainties in the physics models and methodology employed to produce the DOS. Through Monte Carlo sampling of perturbations from the reference phonon spectrum, an S(alpha, beta) covariance matrix may be generated. In this work, density functional theory and lattice dynamics in the harmonic approximation are used to calculate the phonon DOS for hexagonal crystalline graphite. This form of graphite is used as an example material for the purpose of demonstrating procedures for analyzing, calculating and processing thermal neutron inelastic scattering uncertainty information. Several sources of uncertainty in thermal neutron inelastic scattering calculations are examined, including sources which cannot be directly characterized through a description of the phonon DOS uncertainty, and their impacts are evaluated. Covariances for hexagonal crystalline graphite S(alpha, beta) data are quantified by coupling the standard methodology of LEAPR with a Monte Carlo sampling process. The mechanics of efficiently representing and processing this covariance information is also examined. Finally, with appropriate sensitivity information, it is shown that an S(alpha, beta) covariance matrix can be propagated to generate covariance data for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions. This approach enables a complete description of thermal neutron inelastic scattering cross section uncertainties which may be employed to improve the simulation of nuclear systems.
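A minimal sketch of the sampling loop described above: perturb a reference phonon DOS within an assumed bin-wise uncertainty, recompute a derived output for each sample, and accumulate an output covariance matrix. The Gaussian DOS and the moment "observable" below are toy stand-ins for the LEAPR S(alpha, beta) calculation.

import numpy as np

rng = np.random.default_rng(8)
e = np.linspace(0.01, 0.2, 64)                        # eV energy grid
de = e[1] - e[0]
dos_ref = np.exp(-0.5 * ((e - 0.08) / 0.03) ** 2)     # toy phonon DOS
dos_ref /= dos_ref.sum() * de                         # normalize to unit area

def observable(dos):
    # toy stand-in for the S(alpha, beta) output: first three energy moments
    return np.array([(dos * e**k).sum() * de for k in (1, 2, 3)])

samples = []
for _ in range(2000):
    dos = dos_ref * (1.0 + 0.05 * rng.standard_normal(e.size))  # assumed 5% perturbation
    dos = np.clip(dos, 0.0, None)
    dos /= dos.sum() * de                                       # renormalize each sample
    samples.append(observable(dos))

cov = np.cov(np.array(samples).T)      # output covariance induced by DOS uncertainty
print("output covariance matrix:\n", cov)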
Simulated Performance of the Orbiting Wide-angle Light Collectors (OWL) Experiment
NASA Technical Reports Server (NTRS)
Krizmanic, J. F.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
The Orbiting Wide-angle Light collectors (OWL) experiment is in NASA's mid-term strategic plan and will stereoscopically image, from equatorial orbit, the air fluorescence signal generated by air showers induced by the ultrahigh energy (E > a few × 10^19 eV) component of the cosmic radiation. The use of a space-based platform enables an extremely large event acceptance aperture and thus will allow a high statistics measurement of these rare events. Detailed Monte Carlo simulations are required to quantify the physics potential of the mission as well as optimize the instrumental parameters. This paper reports on the results of the GSFC Monte Carlo simulation for two different OWL instrument baseline designs. These results indicate that, assuming a continuation of the cosmic ray spectrum (∝ E^-2.75), OWL could have an event rate of 4000 events/year with E ≥ 10^20 eV. Preliminary results, based upon these Monte Carlo simulations, indicate that events can be accurately reconstructed in the detector focal plane arrays for the OWL instrument baseline designs under consideration.
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This allows the correlations among resistances at the defining fixed points to be taken into account. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty in the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
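A minimal sketch of the propagation of distributions with correlated inputs: correlated SPRT resistances are drawn from a multivariate Gaussian and pushed through the resistance ratio W = R(t)/R(TPW). All means, uncertainties, and the correlation coefficient below are illustrative numbers, not calibration data.

import numpy as np

rng = np.random.default_rng(9)
mean = np.array([25.000, 35.000])          # ohm: R(TPW) and R(t), assumed values
u = np.array([0.25e-3, 0.35e-3])           # standard uncertainties (ohm), assumed
r12 = 0.6                                  # assumed correlation between the two
cov = np.array([[u[0]**2,           r12 * u[0] * u[1]],
                [r12 * u[0] * u[1], u[1]**2]])

R = rng.multivariate_normal(mean, cov, size=200_000)
W = R[:, 1] / R[:, 0]                      # resistance ratio, the measurand here
lo, hi = np.percentile(W, [2.5, 97.5])
print(f"W = {W.mean():.6f}, u(W) = {W.std(ddof=1):.2e}")
print(f"95% coverage interval: [{lo:.6f}, {hi:.6f}]")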
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification.
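A minimal sketch of a design-based Monte Carlo randomization test: the observed treatment difference is referred to the distribution obtained by regenerating the actual randomization procedure (permuted blocks of four here) many times. The outcomes and the 0.5 effect are simulated for illustration; with model residuals in place of raw outcomes the same loop applies.

import numpy as np

rng = np.random.default_rng(10)
n, block = 48, 4

def permuted_block_assignment(rng, n, block):
    # each block contains exactly block/2 patients per arm, in random order
    blocks = [rng.permutation([0] * (block // 2) + [1] * (block // 2))
              for _ in range(n // block)]
    return np.concatenate(blocks)

assign = permuted_block_assignment(rng, n, block)
y = rng.normal(0.0, 1.0, n) + 0.5 * assign           # simulated outcomes
t_obs = y[assign == 1].mean() - y[assign == 0].mean()

null = []
for _ in range(10_000):                              # Monte Carlo re-randomization
    a = permuted_block_assignment(rng, n, block)
    null.append(y[a == 1].mean() - y[a == 0].mean())

p = np.mean(np.abs(null) >= abs(t_obs))
print(f"observed difference {t_obs:.3f}, randomization p-value {p:.4f}")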
Bootstrap Estimation of Sample Statistic Bias in Structural Equation Modeling.
ERIC Educational Resources Information Center
Thompson, Bruce; Fan, Xitao
This study empirically investigated bootstrap bias estimation in the area of structural equation modeling (SEM). Three correctly specified SEM models were used under four different sample size conditions. Monte Carlo experiments were carried out to generate the criteria against which bootstrap bias estimation should be judged. For SEM fit indices,…
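As a minimal sketch of the idea studied above, applied here to a simple correlation rather than an SEM fit index: the bootstrap bias estimate is the mean of the statistic over resamples minus the original-sample statistic. The data are simulated for illustration.

import numpy as np

rng = np.random.default_rng(11)
x = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=60)

def stat(sample):
    return np.corrcoef(sample.T)[0, 1]        # the statistic of interest

theta_hat = stat(x)
boot = [stat(x[rng.integers(0, len(x), len(x))]) for _ in range(5000)]
bias = np.mean(boot) - theta_hat              # bootstrap bias estimate
print(f"estimate {theta_hat:.3f}, bias {bias:+.4f}, corrected {theta_hat - bias:.3f}")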
ERIC Educational Resources Information Center
Fan, Xitao
This paper empirically and systematically assessed the performance of bootstrap resampling procedure as it was applied to a regression model. Parameter estimates from Monte Carlo experiments (repeated sampling from population) and bootstrap experiments (repeated resampling from one original bootstrap sample) were generated and compared. Sample…
Interactive Web-Based Pointillist Visualization of Hydrogenic Orbitals Using Jmol
ERIC Educational Resources Information Center
Tully, Shane P.; Stitt, Thomas M.; Caldwell, Robert D.; Hardock, Brian J.; Hanson, Robert M.; Maslak, Przemyslaw
2013-01-01
A Monte Carlo method is used to generate interactive pointillist displays of electron density in hydrogenic orbitals. The Web applet incorporating the Jmol viewer allows for clear and accurate presentation of three-dimensional shapes and sizes of orbitals up to "n" = 5, where "n" is the principal quantum number. The obtained radial…
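A minimal sketch of the pointillist idea: rejection-sample 3D points from |psi|^2 for the hydrogen 2p_z orbital (in Bohr-radius units, |psi|^2 proportional to z^2 e^-r) and keep the accepted points as the dots of the display. The box size and trial count are arbitrary choices.

import numpy as np

rng = np.random.default_rng(12)

def density_2pz(x, y, z):
    r = np.sqrt(x*x + y*y + z*z)
    return z*z * np.exp(-r)          # |psi|^2 up to normalization

L, n_try = 15.0, 2_000_000
bound = 4.0 * np.exp(-2.0)           # max of z^2 e^-r, attained on the z axis at r = 2
pts = rng.uniform(-L, L, (n_try, 3))
d = density_2pz(*pts.T)
keep = rng.uniform(0.0, bound, n_try) < d    # accept with probability d / bound
cloud = pts[keep]
print(f"accepted {len(cloud)} of {n_try} trial points for the dot display")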
Examining Factor Score Distributions to Determine the Nature of Latent Spaces
ERIC Educational Resources Information Center
Steinley, Douglas; McDonald, Roderick P.
2007-01-01
Similarities between latent class models with K classes and linear factor models with K-1 factors are investigated. Specifically, the mathematical equivalence between the covariance structure of the two models is discussed, and a Monte Carlo simulation is performed using generated data that represents both latent factors and latent classes with…
NASA Astrophysics Data System (ADS)
Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis
2017-01-01
A new Monte Carlo Library Least-Squares (MCLLS) approach for treating the non-linear radiation analysis problem in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the 3H(2H,n)4He reaction. The prompt gamma ray spectra from bulk samples of seven different materials were measured by a Bismuth Germanate (BGO) gamma detection system. Polyethylene was used as neutron moderator, along with iron and lead as neutron and gamma ray shielding, respectively. The gamma detection system was equipped with a list-mode data acquisition system which streams spectroscopy data directly to the computer, event by event. A GEANT4 simulation toolkit was used for generating the single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least-Squares (LLLS) approach to fit an unknown experimental sample spectrum with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.
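A minimal sketch of the library least-squares fitting step: the unknown spectrum is modeled as a non-negative linear combination of single-element library spectra and the amplitudes are obtained by non-negative least squares. The Gaussian "libraries" and Poisson noise below are toy stand-ins for GEANT4-generated libraries and measured data.

import numpy as np
from scipy.optimize import nnls

channels = np.arange(512)

def peak(center, width=12.0):
    # toy single-element library: one Gaussian photopeak per element
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

libs = np.column_stack([peak(100), peak(220), peak(350)])    # three "elements"
a_true = np.array([5.0, 2.0, 3.5])                           # "unknown" amplitudes
rng = np.random.default_rng(13)
spectrum = rng.poisson(libs @ a_true * 50) / 50.0            # counting statistics

a_fit, _ = nnls(libs, spectrum)                              # library least-squares fit
print("fitted library amplitudes:", np.round(a_fit, 2))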
NASA Technical Reports Server (NTRS)
Wilson, Thomas L.; Pinsky, Lawrence; Andersen, Victor; Empl, Anton; Lee, Kerry; Smirmov, Georgi; Zapp, Neal; Ferrari, Alfredo; Tsoulou, Katerina; Roesler, Stefan;
2005-01-01
Simulating the Space Radiation environment with Monte Carlo Codes, such as FLUKA, requires the ability to model the interactions of heavy ions as they penetrate spacecraft and crew member's bodies. Monte-Carlo-type transport codes use total interaction cross sections to determine probabilistically when a particular type of interaction has occurred. Then, at that point, a distinct event generator is employed to determine separately the results of that interaction. The space radiation environment contains a full spectrum of radiation types, including relativistic nuclei, which are the most important component for the evaluation of crew doses. Interactions between incident protons with target nuclei in the spacecraft materials and crew member's bodies are well understood. However, the situation is substantially less comfortable for incident heavier nuclei (heavy ions). We have been engaged in developing several related heavy ion interaction models based on a Quantum Molecular Dynamics-type approach for energies up through about 5 GeV per nucleon (GeV/A) as part of a NASA Consortium that includes a parallel program of cross section measurements to guide and verify this code development.
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
NASA Astrophysics Data System (ADS)
Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray
2017-09-01
The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries has been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
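To make the sampling step concrete, the sketch below draws Latin hypercube samples and turns them into correlated perturbation factors via a Cholesky factor of an assumed covariance matrix; the dimensions and covariance values are invented for illustration and do not correspond to the JEFF/ENDF data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def latin_hypercube(n_samples, n_params):
    # one stratified draw per parameter per sample, strata independently permuted
    strata = np.tile(np.arange(n_samples), (n_params, 1))
    u = rng.permuted(strata, axis=1).T + rng.uniform(size=(n_samples, n_params))
    return u / n_samples                     # uniform samples in (0, 1)

n_samples = 100                              # perturbed nuclear-data libraries
cov = np.array([[0.040, 0.010, 0.000],       # assumed relative covariance of
                [0.010, 0.020, 0.000],       # three cross-section parameters
                [0.000, 0.000, 0.030]])
z = norm.ppf(latin_hypercube(n_samples, cov.shape[0]))   # stratified normals
factors = 1.0 + z @ np.linalg.cholesky(cov).T  # correlated multipliers per library
```

Each row of `factors` would scale the evaluated cross sections to produce one sampled library for a MONK10/WIMS10 run; the spread of the resulting k-effective values then estimates the nuclear-data uncertainty.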
Simulation of metal additive manufacturing microstructures using kinetic Monte Carlo
Rodgers, Theron M.; Madison, Jonathan D.; Tikare, Veena
2017-04-19
Additive manufacturing (AM) is of tremendous interest given its ability to realize complex, non-traditional geometries in engineered structural materials. However, microstructures generated by AM processes can be equally, if not more, complex than their conventionally processed counterparts. While some microstructural features observed in AM may also occur in more traditional solidification processes, the introduction of spatially and temporally mobile heat sources can result in significant microstructural heterogeneity. While grain size and shape in metal AM structures are understood to be highly dependent on both local and global temperature profiles, the exact form of this relation is not well understood. We implement an idealized molten zone and temperature-dependent grain boundary mobility in a kinetic Monte Carlo model to predict three-dimensional grain structure in additively manufactured metals. To demonstrate the flexibility of the model, synthetic microstructures are generated under conditions mimicking relatively diverse experimental results present in the literature. Simulated microstructures are then qualitatively and quantitatively compared to their experimental complements and are shown to be in good agreement.
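A zero-temperature Potts model captures the grain-coarsening core of such kinetic Monte Carlo simulations; the 2-D sketch below is our simplification, omitting the molten zone and mobility gradients that the paper adds.

```python
import numpy as np

# Minimal 2-D Potts-model grain growth: flip a site's grain ID when doing so
# does not raise the boundary energy (zero-temperature acceptance rule).
rng = np.random.default_rng(1)
N, q, steps = 64, 50, 200_000
spins = rng.integers(q, size=(N, N))       # random initial grain IDs

def boundary_energy(i, j, s):
    nbrs = [((i + 1) % N, j), ((i - 1) % N, j), (i, (j + 1) % N), (i, (j - 1) % N)]
    return sum(s != spins[n] for n in nbrs)   # count unlike neighbors

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    candidate = rng.integers(q)
    dE = boundary_energy(i, j, candidate) - boundary_energy(i, j, spins[i, j])
    if dE <= 0:
        spins[i, j] = candidate
# imaging `spins` now shows an equiaxed synthetic microstructure
```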
Risk assessment predictions of open dumping area after closure using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Pauzi, Nur Irfah Mohd; Radhi, Mohd Shahril Mat; Omar, Husaini
2017-10-01
Currently, there are many abandoned open dumping areas that were left without any proper mitigation measures. These open dumping areas could pose serious hazards to humans and pollute the environment. The objective of this paper is to determine the risk at an open dumping area after it has been closed, using the Monte Carlo simulation method. The risk assessment exercise is conducted at the Kuala Lumpur dumping area. The rapid urbanisation of Kuala Lumpur, coupled with its growing population, has led to increased waste generation and hence to more dumping/landfill areas in Kuala Lumpur. The first stage of this study involved assessment of the dumping area and sample collection, followed by measurement of the settlement of the dumping area using an oedometer. The risk of the settlement is predicted using the Monte Carlo simulation method, which calculates both the risk and the long-term settlement. The model simulation results show that the risk level of the Kuala Lumpur open dumping area ranges from Level III to Level IV, i.e. from medium to high risk. The predicted settlement (ΔH) is between 3 and 7 meters. Since the risk is medium to high, mitigation measures are required, such as replacing the top waste soil with new sandy gravel soil; this will increase the strength of the soil and reduce the settlement.
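A settlement risk study of this kind reduces, in outline, to propagating parameter distributions through a consolidation formula; the sketch below uses the classical 1-D settlement relation ΔH = H·Cc/(1+e0)·log10(σ'f/σ'0) with invented distributions and an invented risk threshold.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
H = rng.uniform(10, 20, n)             # waste layer thickness (m), assumed
Cc = rng.normal(0.8, 0.2, n)           # compression index, e.g. from oedometer
e0 = rng.normal(2.5, 0.4, n)           # initial void ratio, assumed
stress_ratio = rng.uniform(2, 6, n)    # final/initial effective stress, assumed

dH = H * Cc / (1 + e0) * np.log10(stress_ratio)   # settlement per realization
print(f"settlement 5th-95th percentile: "
      f"{np.percentile(dH, 5):.1f} - {np.percentile(dH, 95):.1f} m")
print(f"P(high risk, dH > 5 m): {np.mean(dH > 5):.2f}")
```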
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain function. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits in both the time and frequency domains. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, in which both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
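The fine-grained/coarse-grained distinction the abstract draws can be seen in a toy two-state channel population: an exact continuous-time Markov chain simulation next to its stochastic-differential-equation (chemical Langevin) counterpart. Rates and channel count below are arbitrary, and this is our own illustration rather than the authors' framework.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, N, T, dt = 5.0, 3.0, 200, 2.0, 1e-3   # open/close rates, channels

# Fine-grained: exact Gillespie simulation of the open-channel count.
t, n_open, ssa = 0.0, 0, []
while t < T:
    a_open, a_close = alpha * (N - n_open), beta * n_open
    a0 = a_open + a_close
    t += rng.exponential(1.0 / a0)                 # time to next transition
    n_open += 1 if rng.uniform() * a0 < a_open else -1
    ssa.append((t, n_open))                        # trajectory for plotting

# Coarse-grained: Euler-Maruyama on the Langevin approximation.
x, path = 0.0, []
for _ in range(int(T / dt)):
    drift = alpha * (N - x) - beta * x
    noise = np.sqrt(max(alpha * (N - x) + beta * x, 0.0))
    x += drift * dt + noise * np.sqrt(dt) * rng.normal()
    path.append(x)
# both trajectories fluctuate around N * alpha / (alpha + beta) = 125 open channels
```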
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple-energy-window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction; the quantification error relative to a dose-calibrator-derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo-based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
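For context, the TEW method estimates the scatter inside the photopeak window trapezoidally from two narrow flanking windows; a minimal sketch with invented counts and window widths:

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Trapezoidal scatter estimate under the photopeak (counts)."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# illustrative numbers only: 60 keV peak window flanked by two 6 keV windows
peak_counts = 10_000
scatter = tew_scatter(c_lower=900, c_upper=400,
                      w_lower=6.0, w_upper=6.0, w_peak=60.0)
primary = peak_counts - scatter        # scatter-corrected photopeak counts
```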
Electromagnetic and neutral-weak response functions of 4He and 12C
NASA Astrophysics Data System (ADS)
Lovato, A.; Gandolfi, S.; Carlson, J.; Pieper, Steven C.; Schiavilla, R.
2015-06-01
Background: A major goal of nuclear theory is to understand the strong interaction in nuclei as it manifests itself in terms of two- and many-body forces among the nuclear constituents, the protons and neutrons, and the interactions of these constituents with external electroweak probes via one- and many-body currents. Purpose: The objective of the present work is to calculate the quasielastic electroweak response functions in light nuclei within the realistic dynamical framework outlined above. These response functions determine the inclusive cross section as a function of the lepton momentum and energy transfers. Methods: Their ab initio calculation is a very challenging quantum many-body problem, since it requires summation over the entire excitation spectrum of the nucleus and inclusion in the electroweak currents of one- and many-body terms. Green's function Monte Carlo methods allow one to circumvent both difficulties by computing the response in imaginary time (the so-called Euclidean response) and hence summing implicitly over the bound and continuum states of the nucleus, and by implementing specific algorithms designed to deal with the complicated spin-isospin structure of nuclear many-body operators. Results: Theoretical predictions for 4He and 12C, confirmed by experiment in the electromagnetic case, show that two-body currents generate excess transverse strength from threshold to the quasielastic to the dip region and beyond. Conclusions: These results challenge the conventional picture of quasielastic inclusive scattering as being largely dominated by single-nucleon knockout processes.
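For reference, the Euclidean response invoked above is the imaginary-time Laplace transform of the response function, schematically

```latex
E_\alpha(\mathbf{q},\tau) \,=\, \int_{\omega_{\mathrm{th}}}^{\infty} d\omega \; e^{-\omega\tau}\, R_\alpha(\mathbf{q},\omega) \,,
```

so the sum over final states is carried out implicitly and the full ω dependence never has to be resolved.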
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogawara, R.; Ishikawa, M., E-mail: masayori@med.hokudai.ac.jp
The anode pulse of a photomultiplier tube (PMT) coupled with a scintillator is used for pulse shape discrimination (PSD) analysis. We have developed a novel emulation technique for the PMT anode pulse based on optical photon transport and a PMT response function. The photon transport was calculated using the Geant4 Monte Carlo code, and the response function was obtained with a BC408 organic scintillator. The obtained percentage RMS values of the difference between the measured and simulated pulses, using suitable scintillation properties for GSO:Ce (0.4, 1.0 and 1.5 mol%), LaBr3:Ce and BGO scintillators, were 2.41%, 2.58%, 2.16%, 2.01% and 3.32%, respectively. The proposed technique demonstrates high reproducibility of the measured pulse and can be applied to simulation studies of various radiation measurements.
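A simple way to picture the emulation is as a superposition of single-photoelectron responses at the simulated photon arrival times; the sketch below uses an exponential scintillation decay and a two-exponential pulse shape with invented parameters (the paper's Geant4 transport is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0.0, 200.0, 0.1)                    # time grid (ns)
arrivals = rng.exponential(scale=30.0, size=500)  # photon arrivals, ~30 ns decay

def spe(tau, tau_rise=1.0, tau_fall=5.0):
    """Single-photoelectron response: difference of exponentials."""
    return np.exp(-tau / tau_fall) - np.exp(-tau / tau_rise)

pulse = np.zeros_like(t)
for ta in arrivals:
    mask = t >= ta
    pulse[mask] += spe(t[mask] - ta)              # superpose one SPE per photon
# `pulse` approximates an anode waveform for pulse-shape discrimination studies
```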
Statistical complexity measure of pseudorandom bit generators
NASA Astrophysics Data System (ADS)
González, C. M.; Larrondo, H. A.; Rosso, O. A.
2005-08-01
Pseudorandom number generators (PRNGs) are extensively used in Monte Carlo simulations, gambling machines and cryptography as substitutes for ideal random number generators (RNGs). Each application imposes different statistical requirements on PRNGs. As L'Ecuyer clearly states, "the main goal for Monte Carlo methods is to reproduce the statistical properties on which these methods are based, whereas for gambling machines and cryptology, observing the sequence of output values for some time should provide no practical advantage for predicting the forthcoming numbers better than by just guessing at random". In accordance with these different applications, several statistical test suites have been developed to analyze the sequences generated by PRNGs. In a recent paper a new statistical complexity measure [Phys. Lett. A 311 (2003) 126] was defined. Here we propose this measure as a randomness quantifier for PRNGs. The test is applied to three very well known and widely tested PRNGs available in the literature, all of them based on mathematical algorithms. A further PRNG, based on the 3D Lorenz chaotic dynamical system, is also analyzed; PRNGs based on chaos may be considered as models for physical noise sources, and important new results have recently been reported. All the design steps of this PRNG are described, and each stage increases the PRNG's randomness using different strategies. It is shown that the MPR statistical complexity measure is capable of quantifying this randomness improvement. The PRNG based on the chaotic 3D Lorenz dynamical system is also evaluated using traditional digital signal processing tools for comparison.
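As a flavor of such quantifiers, the sketch below computes a Bandt-Pompe-style normalized permutation entropy, which underlies the MPR complexity measure (the full measure also requires a disequilibrium term, omitted here); an ideal PRNG should push the value toward 1.

```python
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x, d=4):
    """Normalized Shannon entropy of ordinal patterns of length d."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(np.argsort(x[i:i + d]))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(d)))

x = np.random.default_rng(0).uniform(size=10_000)   # sequence under test
print(permutation_entropy(x))                       # close to 1 for good PRNGs
```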
Monte Carlo simulation of energy-dispersive x-ray fluorescence and applications
NASA Astrophysics Data System (ADS)
Li, Fusheng
Four key components with regard to Monte Carlo Library Least-Squares (MCLLS) have been developed by the author. These include: a comprehensive and accurate Monte Carlo simulation code - CEARXRF5 with Differential Operators (DO) and coincidence sampling; a Detector Response Function (DRF); an integrated Monte Carlo - Library Least-Squares (MCLLS) Graphical User Interface (GUI) visualization system (MCLLSPro); and a new reproducible and flexible benchmark experiment setup. All these developments and upgrades make the MCLLS approach a useful and powerful tool for a tremendous variety of elemental analysis applications. CEARXRF, a comprehensive and accurate Monte Carlo code for simulating the total and individual library spectral responses of all elements, has recently been upgraded to version 5 by the author. The new version has several key improvements: an input file format fully compatible with MCNP5, a new efficient general geometry tracking code, versatile source definitions, various variance reduction techniques (e.g. weight window mesh and splitting, stratified sampling, etc.), a new cross-section data storage and access method which improves the simulation speed by a factor of four, new cross-section data, upgraded Differential Operator (DO) calculation capability, and an updated coincidence sampling scheme which includes K-L and L-L coincidence X-rays, while keeping all the capabilities of the previous version. The new Differential Operators method is powerful for measurement sensitivity studies and system optimization. For our Monte Carlo EDXRF elemental analysis system, it becomes an important technique for quantifying the matrix effect in near real time when combined with the MCLLS approach. An integrated visualization GUI system has been developed by the author to perform elemental analysis using the iterated Library Least-Squares method for various samples when an initial guess is provided; a sketch of this iteration appears below. The software was built on the Borland C++ Builder platform and has a user-friendly interface that accomplishes all qualitative and quantitative tasks easily: users can run the forward Monte Carlo simulation (if necessary) or use previously calculated Monte Carlo library spectra to obtain the sample elemental composition estimate within a minute, making it a powerful tool for EDXRF analysts. A reproducible experiment setup has been built and experiments have been performed to benchmark the system. Two types of Standard Reference Materials (SRM), stainless steel samples from the National Institute of Standards and Technology (NIST) and aluminum alloy samples from Alcoa Inc., with certified elemental compositions, were tested with this reproducible prototype system using a 109Cd radioisotope source (20 mCi) and a liquid-nitrogen-cooled Si(Li) detector. The results show excellent agreement between the calculated sample compositions and their reference values, and the approach is very fast.
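The iteration itself is compact; this hedged outline (placeholder function names, not the CEARXRF5/MCLLSPro API) shows the loop the GUI automates:

```python
import numpy as np

def mclls(measured, composition, simulate_libraries, solve_lls,
          tol=1e-3, max_iter=20):
    """Iterate forward Monte Carlo library generation and linear LLS fitting
    until the estimated composition stops changing."""
    for _ in range(max_iter):
        libraries = simulate_libraries(composition)   # forward Monte Carlo step
        new_composition = solve_lls(libraries, measured)
        if np.max(np.abs(new_composition - composition)) < tol:
            return new_composition
        composition = new_composition
    return composition
```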
Robert J. Luxmoore; William W. Hargrove; M. Lynn Tharp; Wilfred M. Post; Michael W. Berry; Karen S. Minser; Wendell P. Cropper; Dale W. Johnson; Boris Zeide; Ralph L. Amateis; Harold E. Burkhart; V. Clark Baldwin; Kelly D. Peterson
2000-01-01
Stochastic transfer of information in a hierarchy of simulators is offered as a conceptual approach for assessing forest responses to changing climate and air quality across 13 southeastern states of the USA. This assessment approach combines geographic information system and Monte Carlo capabilities with several scales of computer modeling for southern pine species...
Parallel CARLOS-3D code development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putnam, J.M.; Kotulski, J.D.
1996-02-01
CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method-of-moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body-of-revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.
A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses
Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria
2013-01-01
Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning-based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost than the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and general enough to predict synapse behavior under experimental conditions different from those it was trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of large numbers of synapses and is therefore an excellent tool for multi-scale simulations. PMID:23894367
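In outline, the surrogate approach amounts to regression on precomputed simulation outputs; the sketch below trains a random forest on synthetic stand-in data (the feature set, target function and model choice are ours, not the paper's five-stage pipeline).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 3))   # e.g. [cleft width, receptor density, time]
y = (np.exp(-3.0 * X[:, 2]) * X[:, 1] / (1.0 + X[:, 0])
     + 0.01 * rng.normal(size=2000))          # stand-in for MC open fractions

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([[0.2, 0.8, 0.5]]))       # cheap prediction, no new MC run
```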
Dose response evaluation of a low-density normoxic polymer gel dosimeter using MRI
NASA Astrophysics Data System (ADS)
Haraldsson, P.; Karlsson, A.; Wieslander, E.; Gustavsson, H.; Bäck, S. Å. J.
2006-02-01
A low-density (~0.6 g cm-3) normoxic polymer gel, containing the antioxidant tetrakis(hydroxymethyl)phosphonium (THP), has been investigated with respect to basic absorbed dose response characteristics. The low density was obtained by mixing the gel with expanded polystyrene spheres. The depth dose data for 6 and 18 MV photons were compared with Monte Carlo calculations. A large volume phantom was irradiated in order to study the 3D dose distribution from a 6 MV field. Evaluation of the gel was carried out using magnetic resonance imaging. An approximately linear response was obtained for 1/T2 versus dose in the dose range of 2 to 8 Gy. A small decrease in the dose response was observed for increasing concentrations of THP. Good agreement between measured and Monte Carlo calculated data was obtained, both for test tubes and the larger 3D phantom. It was shown that a normoxic polymer gel with a reduced density could be obtained by adding expanded polystyrene spheres. In order to get reliable results, it is very important to have a uniform distribution of the gel and expanded polystyrene spheres in the phantom volume.
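The calibration behind such gel dosimetry is a linear fit of the relaxation rate against dose; a minimal sketch with invented readings over the reported 2-8 Gy range:

```python
import numpy as np

dose = np.array([2.0, 4.0, 6.0, 8.0])      # Gy
r2 = np.array([1.10, 1.42, 1.75, 2.05])    # 1/T2 in s^-1 (illustrative values)
slope, intercept = np.polyfit(dose, r2, 1)
print(f"sensitivity {slope:.3f} s^-1 Gy^-1, offset {intercept:.3f} s^-1")
# an unknown voxel's dose then follows as (1/T2 - intercept) / slope
```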
Dose response of alanine detectors irradiated with carbon ion beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrmann, Rochus; Jaekel, Oliver; Palmans, Hugo
Purpose: The dose response of the alanine detector shows a dependence on particle energy and type when irradiated with ion beams. The purpose of this study is to investigate the response behavior of the alanine detector in clinical carbon ion beams and compare the results to model predictions. Methods: Alanine detectors have been irradiated with carbon ions with an energy range of 89-400 MeV/u. The relative effectiveness of alanine has been measured in this regime. Pristine and spread-out Bragg peak depth-dose curves have been measured with alanine dosimeters. The track-structure-based alanine response model developed by Hansen and Olsen has been implemented in the Monte Carlo code FLUKA and calculations were compared to experimental results. Results: Calculations of the relative effectiveness deviate less than 5% from the measured values for monoenergetic beams. Measured depth-dose curves deviate from predictions in the peak region, most pronounced at the distal edge of the peak. Conclusions: The used model and its implementation show a good overall agreement for quasi-monoenergetic measurements. Deviations in depth-dose measurements are mainly attributed to uncertainties of the detector geometry implemented in the Monte Carlo simulations.
Bayoumi, T A; Reda, S M; Saleh, H M
2012-01-01
Radioactive waste generated from nuclear applications should be properly isolated by a suitable containment system, such as a multi-barrier container. The present study aims to evaluate the isolation capacity of a new multi-barrier container made from cement and clay and including borate waste materials. These wastes were spiked with 137Cs and 60Co radionuclides to simulate the waste generated from the primary cooling circuit of pressurized water reactors. Leaching of both radionuclides into ground water was followed and calculated over ten years. Monte Carlo (MCNP5) simulations computed the photon flux distribution of the multi-barrier container, including radioactive borate waste of specific activity 11.22 kBq/g and 4.18 kBq/g for 137Cs and 60Co, respectively, at different periods of 0, 15.1, 30.2 and 302 years. The average total flux over a spherical cell of 100 cm radius was 0.192 photons/cm2 at the initial time and 2.73×10-4 photons/cm2 after 302 years. The maximum waste activity keeping the surface radiation dose within the permissible level was calculated and found to be 56 kBq/g, with attenuation factors of 0.73 cm-1 and 0.6 cm-1 for cement and clay, respectively; in this case the average total flux was 1.37×10-3 photons/cm2 after 302 years. Monte Carlo simulations revealed that the proposed multi-barrier container is safe enough during transportation, evacuation or rearrangement in the disposal site for more than 300 years.
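As a back-of-envelope check on the 302-year horizon, the source activities fall off as A(t) = A0·exp(-ln2·t/T½); with standard half-lives (30.17 y for 137Cs, 5.27 y for 60Co), the initial activities quoted above decay as follows:

```python
import numpy as np

half_life = {"Cs-137": 30.17, "Co-60": 5.27}   # years (standard values)
activity0 = {"Cs-137": 11.22, "Co-60": 4.18}   # kBq/g, from the abstract
for nuclide, t_half in half_life.items():
    a = activity0[nuclide] * np.exp(-np.log(2) * 302.0 / t_half)
    print(f"{nuclide}: {a:.2e} kBq/g after 302 years")
# 302 years is about ten 137Cs half-lives, so the late-time flux the simulations
# report is dominated by the decay of the longer-lived 137Cs
```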