Sample records for Monte Carlo sensitivity analysis

  1. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.

  2. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  3. Hamiltonian Markov Chain Monte Carlo Methods for the CUORE Neutrinoless Double Beta Decay Sensitivity

    NASA Astrophysics Data System (ADS)

    Graham, Eleanor; Cuore Collaboration

    2017-09-01

    The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow for an understanding of the half-life ranges that the detector can probe and for an evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
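
    For readers unfamiliar with the samplers being compared, the following is a minimal random-walk Metropolis-Hastings sketch on a toy one-dimensional target. It illustrates the kind of MCMC kernel BAT employs, but it is not the CUORE sensitivity analysis itself; the target density, step size, and chain length are placeholders.

```python
import numpy as np

def log_posterior(theta):
    # Toy target: standard normal log-density (a stand-in for a real
    # half-life sensitivity posterior, which this sketch does not model).
    return -0.5 * theta ** 2

def random_walk_metropolis(log_post, n_steps=10_000, step=0.5, seed=0):
    """Minimal 1-D random-walk Metropolis-Hastings sampler."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_steps)
    theta = 0.0
    lp = log_post(theta)
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal()
        lp_prop = log_post(proposal)
        # Accept with probability min(1, pi(proposal) / pi(theta)).
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        samples[i] = theta
    return samples

draws = random_walk_metropolis(log_posterior)
print(draws.mean(), draws.std())
```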

  4. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
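
    As a minimal illustration of the likelihood ratio (score-function) estimator mentioned above, the sketch below differentiates the expectation of an observable of a single exponential waiting time with respect to its rate constant. The model and observable are stand-ins, not the KMC chemistry of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 2.0                       # rate constant of a single stand-in process
n = 200_000
t = rng.exponential(1.0 / k, size=n)   # waiting times T ~ Exp(k)

f = np.exp(-t)                # any observable f(T); here f(T) = exp(-T)

# Likelihood-ratio (score-function) estimator:
#   d/dk E[f(T)] = E[ f(T) * d/dk log p(T; k) ],  with d/dk log p = 1/k - T
score = 1.0 / k - t
lr_sensitivity = np.mean(f * score)

# Analytic check for this toy case: E[f(T)] = k/(k+1), so the derivative is 1/(k+1)^2.
print(lr_sensitivity, 1.0 / (k + 1.0) ** 2)
```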

  5. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and could help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  6. Parameter sensitivity analysis for pesticide impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations of hive population trajectories are performed, taking into account queen strength, foraging success, weather, colo...

  7. Sobol’ sensitivity analysis for stressor impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...
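
    A generic sketch of how first-order and total Sobol' indices can be estimated by Monte Carlo is shown below, using the Saltelli sampling scheme with Saltelli/Jansen estimators on a stand-in analytic model (not VarroaPop); the model, sample size, and input ranges are placeholders.

```python
import numpy as np

def model(x):
    # Stand-in nonlinear model with three inputs on [0, 1] (not VarroaPop).
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

def sobol_indices(model, d=3, n=100_000, seed=0):
    """First-order (S1) and total (ST) Sobol' indices via the Saltelli
    sampling scheme with Saltelli/Jansen estimators."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))        # two independent input samples on [0, 1]^d
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]       # replace column i of A with column i of B
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var       # first-order effect
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total effect (Jansen)
    return S1, ST

S1, ST = sobol_indices(model)
print("first-order:", S1.round(3), "total:", ST.round(3))
```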

  8. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients—such as flux responses or reaction rate ratios—in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  9. MUSiC—An Automated Scan for Deviations between Data and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Meyer, Arnd

    2010-02-01

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  10. MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Arnd

    2010-02-10

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  11. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratama, Cecep, E-mail: great.pratama@gmail.com; Meilano, Irwan; Nugraha, Andri Dian

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the crustal fault has the greatest influence on the hazard estimate. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation (COV) of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.

  12. Applying Monte-Carlo simulations to optimize an inelastic neutron scattering system for soil carbon analysis

    USDA-ARS?s Scientific Manuscript database

    Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...

  13. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (“coupled”) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
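
    The baseline the authors compare against, finite differences coupled through Common Random Numbers, can be sketched as follows for a simple stand-in stochastic model (not a lattice KMC, and not the goal-oriented coupling of the paper). It only illustrates how sharing a random stream between the perturbed and unperturbed runs reduces the variance of the finite-difference estimator.

```python
import numpy as np

def simulate(rate, rng):
    """Stand-in stochastic model: the average of 50 exponential waiting
    times with the given rate."""
    return np.mean(rng.exponential(1.0 / rate, size=50))

def fd_sensitivity(rate, h, n, coupled, seed=0):
    """Finite-difference estimate of d<f>/d(rate), either with independent
    samples or coupled through common random numbers (CRN)."""
    seeds = np.random.SeedSequence(seed).spawn(n)
    est = np.empty(n)
    for j, ss in enumerate(seeds):
        rng_pert = np.random.default_rng(ss)
        # CRN: the perturbed and unperturbed runs share one random stream.
        rng_base = np.random.default_rng(ss) if coupled else np.random.default_rng()
        est[j] = (simulate(rate + h, rng_pert) - simulate(rate, rng_base)) / h
    return est.mean(), est.std(ddof=1) / np.sqrt(n)

for coupled in (False, True):
    mean, err = fd_sensitivity(rate=5.0, h=0.05, n=2000, coupled=coupled)
    print("coupled" if coupled else "independent", round(mean, 4), "+/-", round(err, 4))
```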

  14. Sensitivity analysis as an aid in modelling and control of (poorly-defined) ecological systems. [closed ecological systems

    NASA Technical Reports Server (NTRS)

    Hornberger, G. M.; Rastetter, E. B.

    1982-01-01

    A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte-Carlo methods), sensitivity ranking of parameters, and extension to control system design.

  15. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.

  16. Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models

    DTIC Science & Technology

    2015-03-16

    sensitivity value was the maximum uncertainty in that value estimated by the Sobol method. 2.4. Global Sensitivity Analysis of the Reduced Order Coagulation...sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled the performance of the reduced order model [69]. We...Environment. Comput. Sci. Eng. 2007, 9, 90–95. 69. Sobol, I. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates

  17. Monte Carlo sensitivity analysis of unknown parameters in hazardous materials transportation risk assessment.

    PubMed

    Pet-Armacost, J J; Sepulveda, J; Sakude, M

    1999-12-01

    The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is both highly toxic and flammable, as well as corrosive. Consequently, there was a conflict as to whether a relief device should be used or not. Data were not available on the impact of relief devices on release probabilities or the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether or not relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport of Hydrazine. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and analyzed statistically through an analysis of variance. The analysis allowed the determination of which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known.

  18. Radiative transfer modelling inside thermal protection system using hybrid homogenization method for a backward Monte Carlo method coupled with Mie theory

    NASA Astrophysics Data System (ADS)

    Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.

    2012-06-01

    A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed by a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) specifically developed for inverse analysis of experimental data. This model agrees well with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.

  19. Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.

  20. Monte Carlo sensitivity analysis of land surface parameters using the Variable Infiltration Capacity model

    NASA Astrophysics Data System (ADS)

    Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten

    2007-06-01

    Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.

  1. Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Pohlmann, K.

    2016-12-01

    Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.

  2. Sensitivity analyses for simulating pesticide impacts on honey bee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop + Pesticide model. Simulations are performed of hive population trajectories with and without pesti...

  3. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and of its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.

  4. UNCERTAINTY ANALYSIS IN WATER QUALITY MODELING USING QUAL2E

    EPA Science Inventory

    A strategy for incorporating uncertainty analysis techniques (sensitivity analysis, first order error analysis, and Monte Carlo simulation) into the mathematical water quality model QUAL2E is described. The model, named QUAL2E-UNCAS, automatically selects the input variables or p...

  5. Sensitivity analyses for simulating pesticide impacts on honey bee colonies

    USDA-ARS?s Scientific Manuscript database

    We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop+Pesticide model. Simulations are performed of hive population trajectories with and without pesticide exposure to determine the eff...

  6. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, David; Hershey, Ronald L.

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent’s coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore, little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. In addition, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.

  7. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  8. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
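
    A minimal sketch of a bootstrap-based probabilistic sensitivity analysis is shown below, using entirely hypothetical patient-level costs and cure indicators (not the H. pylori data) and an incremental net benefit as the decision metric; resampling patients with replacement propagates sampling uncertainty through the decision model without assuming theoretical parameter distributions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical patient-level data (not the H. pylori study): treatment costs
# and cure indicators for two eradication strategies, A and B.
cost_a = rng.gamma(shape=4.0, scale=150.0, size=300)
cure_a = rng.binomial(1, 0.80, size=300)
cost_b = rng.gamma(shape=4.0, scale=110.0, size=300)
cure_b = rng.binomial(1, 0.72, size=300)

def incremental_net_benefit(ca, ea, cb, eb, wtp=1000.0):
    """Incremental net benefit of A versus B at a willingness-to-pay per cure."""
    return wtp * (ea.mean() - eb.mean()) - (ca.mean() - cb.mean())

# Probabilistic sensitivity analysis via the bootstrap: resample patients with
# replacement and recompute the decision metric for each replicate.
replicates = []
for _ in range(5000):
    ia = rng.integers(0, len(cost_a), size=len(cost_a))
    ib = rng.integers(0, len(cost_b), size=len(cost_b))
    replicates.append(incremental_net_benefit(cost_a[ia], cure_a[ia],
                                              cost_b[ib], cure_b[ib]))
replicates = np.array(replicates)

print("point estimate:", incremental_net_benefit(cost_a, cure_a, cost_b, cure_b))
print("95% bootstrap interval:", np.percentile(replicates, [2.5, 97.5]))
print("P(A is cost-effective):", (replicates > 0).mean())
```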

  9. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
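
    The variance-reduction idea, using cheap sensitivity derivatives as a control variate whose linear term has known zero mean, can be sketched as follows for a stand-in smooth response (not a finite-element model); the function, input distribution, and sample size are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in response: a smooth function of two uncertain inputs, with an
# analytically available gradient (playing the role of cheap sensitivity
# derivatives from an analysis code).
def f(x):
    return np.exp(0.3 * x[:, 0]) + np.sin(x[:, 1])

def grad_f_at(mu):
    return np.array([0.3 * np.exp(0.3 * mu[0]), np.cos(mu[1])])

mu = np.array([1.0, 0.5])
sigma = np.array([0.2, 0.3])
n = 20_000
x = mu + sigma * rng.standard_normal((n, 2))

plain = f(x)
# Control variate built from the sensitivity derivatives: the linear term
# g.(x - mu) has known zero mean, so subtracting it leaves the estimator of
# E[f(X)] unbiased while removing most of the first-order variance.
g = grad_f_at(mu)
controlled = plain - (x - mu) @ g

print("plain MC       :", plain.mean(), "+/-", plain.std(ddof=1) / np.sqrt(n))
print("with derivative:", controlled.mean(), "+/-", controlled.std(ddof=1) / np.sqrt(n))
```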

  10. The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.

    ERIC Educational Resources Information Center

    Rich, Joseph R.; Boudreau, John W.

    1987-01-01

    Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…

  11. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series, and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability analysis in finite element modeling engineering practice.

  12. The sensitivity of EGRET to gamma ray polarization

    NASA Astrophysics Data System (ADS)

    Mattox, John R.

    1990-05-01

    A Monte Carlo simulation shows that EGRET (Energetic Gamma Ray Experiment Telescope) does not have sufficient sensitivity to detect even 100 percent polarized gamma-rays. This is confirmed by analysis of calibration data. A Monte Carlo study shows that the sensitivity of EGRET to polarization peaks around 100 MeV. However, more than 10⁵ gamma-ray events with 100 percent polarization would be required for a 3 sigma significance detection - more than available from calibration, and probably more than will result from a single source during flight. A drift chamber gamma ray telescope under development (Hunter and Cuddapah 1989) will offer better sensitivity to polarization. The lateral position uncertainty will be improved by an order of magnitude. Also, if pair production occurs in the drift chamber gas (xenon at 2 bar) instead of in tantalum foils, the effects of multiple Coulomb scattering will be reduced.

  13. A Monte Carlo Sensitivity Analysis of CF2 and CF Radical Densities in a c-C4F8 Plasma

    NASA Technical Reports Server (NTRS)

    Bose, Deepak; Rauf, Shahid; Hash, D. B.; Govindan, T. R.; Meyyappan, M.

    2004-01-01

    A Monte Carlo sensitivity analysis is used to build a plasma chemistry model for octafluorocyclobutane (c-C4F8), which is commonly used in dielectric etch. Experimental data are used both qualitatively and quantitatively to analyze the gas-phase and gas-surface reactions for neutral radical chemistry. The sensitivity data of the resulting model identify a few critical gas-phase and surface-aided reactions that account for most of the uncertainty in the CF2 and CF radical densities. Electron impact dissociation of small radicals (CF2 and CF) and their surface recombination reactions are found to be the rate-limiting steps in the neutral radical chemistry. The relative rates for these electron impact dissociation and surface recombination reactions are also suggested. The resulting mechanism is able to explain the measurements of CF2 and CF densities available in the literature and also their hollow spatial density profiles.
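
    A generic Monte Carlo sensitivity sketch in this spirit is shown below: uncertain rate constants are sampled over assumed ranges, a stand-in response is evaluated, and Spearman rank correlation is used as a simple sensitivity measure. The response surface, sampling ranges, and the choice of rank correlation are illustrative assumptions, not the plasma chemistry model or the sensitivity measure used in the paper.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)

def radical_density(k):
    # Stand-in response surface (not the actual plasma chemistry model):
    # a fictitious steady-state density depending on three rate constants.
    return k[:, 0] / (k[:, 1] + 0.2 * k[:, 2])

n = 10_000
# Sample each uncertain rate constant log-uniformly over one decade.
k = 10.0 ** rng.uniform(-0.5, 0.5, size=(n, 3))
y = radical_density(k)

# Rank correlation between each sampled rate constant and the output serves
# as a simple Monte Carlo sensitivity measure.
for i in range(3):
    rho, _ = spearmanr(k[:, i], y)
    print(f"rate k{i + 1}: Spearman rho = {rho:+.2f}")
```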

  14. Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations.

    PubMed

    Kamran, Faisal; Andersen, Peter E

    2015-08-10

    Oblique incidence reflectometry has developed into an effective, noncontact, and noninvasive measurement technology for the quantification of both the reduced scattering and absorption coefficients of a sample. The optical properties are deduced by analyzing only the shape of the reflectance profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical properties in which system demands vary to be able to detect subtle changes in the structure of the medium, translated as measured optical properties. Effects of variation in anisotropy are discussed and results presented. Finally, experimental data of milk products with different fat content are considered as examples for comparison.

  15. Probabilistic structural analysis using a general purpose finite element program

    NASA Astrophysics Data System (ADS)

    Riha, D. S.; Millwater, H. R.; Thacker, B. H.

    1992-07-01

    This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.

  16. Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.

  17. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    DOE PAGES

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-31

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher Information Matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nanoscale design of heterogeneous catalysts.

  18. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis.

    PubMed

    Hoffmann, Max J; Engelmann, Felix; Matera, Sebastian

    2017-01-28

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  19. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher Information Matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nanoscale design of heterogeneous catalysts.

  20. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  1. Monte Carlo analysis of uncertainty propagation in a stratospheric model. 2: Uncertainties due to reaction rates

    NASA Technical Reports Server (NTRS)

    Stolarski, R. S.; Butler, D. M.; Rundel, R. D.

    1977-01-01

    A concise stratospheric model was used in a Monte-Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte-Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1 sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2 sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
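
    A sketch of this style of uncertainty propagation, sampling lognormal rate-constant multipliers and reporting multiplicative 1-sigma and 2-sigma factors on the output, is given below; the three uncertainty factors and the response function are placeholders, not the 55-reaction stratospheric model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 1-sigma multiplicative uncertainty factors for three stand-in
# reaction rates (not the stratospheric model's rate data).
unc_factors = np.array([1.3, 1.5, 2.0])
sigmas = np.log(unc_factors)

def ozone_perturbation(k):
    # Stand-in response: some nonlinear combination of the sampled rates.
    return 0.02 * k[:, 0] * k[:, 1] / np.sqrt(k[:, 2])

n = 50_000
# Lognormal multipliers applied to nominal rates of 1.0.
k = np.exp(sigmas * rng.standard_normal((n, 3)))
out = ozone_perturbation(k)

log_out = np.log(out)
f1 = np.exp(log_out.std())         # multiplicative 1-sigma factor
f2 = np.exp(2.0 * log_out.std())   # multiplicative 2-sigma factor
print("median:", np.exp(np.median(log_out)), "1-sigma factor:", f1, "2-sigma factor:", f2)
```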

  2. Assessment of bioethanol yield by S. cerevisiae grown on oil palm residues: Monte Carlo simulation and sensitivity analysis.

    PubMed

    Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah

    2015-01-01

    Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with the addition of palm oil mill effluent (POME) as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in their properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was therefore assessed using Monte Carlo simulation (stochastic variables) to determine the probability distributions arising from fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. Sensitivity analysis was also done to evaluate the impact of each kinetic parameter on fermentation performance. It was found that bioethanol fermentation depends strongly on the growth of the tested yeast.

  3. Monte Carlo Modeling-Based Digital Loop-Mediated Isothermal Amplification on a Spiral Chip for Absolute Quantification of Nucleic Acids.

    PubMed

    Xia, Yun; Yan, Shuangqian; Zhang, Xian; Ma, Peng; Du, Wei; Feng, Xiaojun; Liu, Bi-Feng

    2017-03-21

    Digital loop-mediated isothermal amplification (dLAMP) is an attractive approach for absolute quantification of nucleic acids with high sensitivity and selectivity. Theoretical and numerical analysis of dLAMP provides necessary guidance for the design and analysis of dLAMP devices. In this work, a mathematical model was proposed on the basis of the Monte Carlo method and the theories of Poisson statistics and chemometrics. To examine the established model, we fabricated a spiral chip with 1200 uniform and discrete reaction chambers (9.6 nL) for absolute quantification of pathogenic DNA samples by dLAMP. Under the optimized conditions, dLAMP analysis on the spiral chip realized quantification of nucleic acids spanning over 4 orders of magnitude in concentration with sensitivity as low as 8.7 × 10⁻² copies/μL in 40 min. The experimental results were consistent with the proposed mathematical model, which could provide useful guidance for future development of dLAMP devices.
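
    The Poisson statistics underlying absolute quantification in a digital assay can be written down directly: if a fraction p of chambers is positive, the expected number of copies per chamber is lambda = -ln(1 - p), and dividing by the chamber volume gives the concentration. The sketch below uses the chip geometry stated above (1200 chambers of 9.6 nL); the positive-chamber count is an illustrative number only.

```python
import math

def dlamp_concentration(n_positive, n_total=1200, chamber_nl=9.6):
    """Absolute quantification from a digital assay via Poisson statistics:
    lambda = -ln(1 - p) is the mean copies per chamber, where p is the
    fraction of positive chambers."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)        # mean copies per chamber
    chamber_ul = chamber_nl * 1e-3  # 9.6 nL = 0.0096 uL
    return lam / chamber_ul         # copies per microliter

# Example: 300 of 1200 chambers turn positive (illustrative numbers only).
print(dlamp_concentration(300))     # roughly 30 copies/uL
```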

  4. Approach of technical decision-making by element flow analysis and Monte-Carlo simulation of municipal solid waste stream.

    PubMed

    Tian, Bao-Guo; Si, Ji-Tao; Zhao, Yan; Wang, Hong-Tao; Hao, Ji-Ming

    2007-01-01

    This paper deals with the procedure and methodology that can be used to select the optimal treatment and disposal technology for municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of a study of solid waste management status and development trends in China and abroad. Focusing on various treatment and disposal technologies and processes for MSW, this study established a Monte-Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream (such as C, H, O, N, S) analysis in combination with economic stream analysis of MSW was developed. By following the streams of different treatment processes consisting of various techniques from generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte-Carlo method was then used for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation from landfill gas was economically the optimal technology at the present stage, under the condition that more than 58% of the C, H, O, N, and S goes to landfill. Whether or not to generate electricity was the most sensitive factor. If landfilling costs increase, MSW separation treatment was recommended: screening first, followed by partial incineration and partial composting, with landfilling of the residue. The likelihood of incineration being selected as the optimal technology was affected by city scale. For big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle-sized and small cities the effectiveness of incinerating waste decreases.

  5. Sensitivity analysis in practice: providing an uncertainty budget when applying supplement 1 to the GUM

    NASA Astrophysics Data System (ADS)

    Allard, Alexandre; Fischer, Nicolas

    2018-06-01

    Sensitivity analysis associated with the evaluation of measurement uncertainty is a very important tool for the metrologist, enabling them to provide an uncertainty budget and to gain a better understanding of the measurand and the underlying measurement process. Using the GUM uncertainty framework, the contribution of an input quantity to the variance of the output quantity is obtained through so-called ‘sensitivity coefficients’. In contrast, such coefficients are no longer computed in cases where a Monte-Carlo method is used. In such a case, supplement 1 to the GUM suggests varying the input quantities one at a time, which is not an efficient method and may provide incorrect contributions to the variance in cases where significant interactions arise. This paper proposes different methods for the elaboration of the uncertainty budget associated with a Monte Carlo method. An application to the mass calibration example described in supplement 1 to the GUM is performed with the corresponding R code for implementation. Finally, guidance is given for choosing a method, including suggestions for a future revision of supplement 1 to the GUM.
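
    The point about one-at-a-time variance contributions can be illustrated with a short Monte Carlo sketch: for a stand-in measurement model with an interaction term (not the mass calibration example of supplement 1), the supplement 1 style of varying one input while holding the others at their best estimates under-accounts for the total variance.

```python
import numpy as np

rng = np.random.default_rng(11)

def measurand(x1, x2):
    # Stand-in measurement model with an interaction term.
    return x1 + x2 + 0.8 * x1 * x2

n = 200_000
mu1, u1 = 0.0, 1.0   # best estimate and standard uncertainty of input 1
mu2, u2 = 0.0, 0.5   # best estimate and standard uncertainty of input 2

x1 = rng.normal(mu1, u1, n)
x2 = rng.normal(mu2, u2, n)
total_var = np.var(measurand(x1, x2))

# One-at-a-time contributions: propagate each input alone while holding the
# other input at its best estimate.
var_x1 = np.var(measurand(x1, np.full(n, mu2)))
var_x2 = np.var(measurand(np.full(n, mu1), x2))

print("total variance          :", total_var)
print("sum of OAT contributions:", var_x1 + var_x2)  # misses the interaction
```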

  6. Accurate evaluation of sensitivity for calibration between a LiDAR and a panoramic camera used for remote sensing

    NASA Astrophysics Data System (ADS)

    García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier

    2016-04-01

Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor to the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved in the calibration was computed by the Sobol method, which is based on the analysis of the variance breakdown, and the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
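
    As a concrete illustration of the two sampling schemes compared in the simulated tests, the sketch below draws plain Monte Carlo and Latin hypercube samples over the same box of calibration parameters. It assumes SciPy >= 1.7 for the scipy.stats.qmc module; the parameter names and bounds are placeholders, not the paper's.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Hypothetical calibration parameters and bounds (placeholders, not the paper's):
    # focal length [px], yaw offset [rad], camera-LiDAR baseline [m]
    lo = np.array([800.0, -0.05, 0.10])
    hi = np.array([900.0,  0.05, 0.30])
    n = 256

    # Plain Monte Carlo: independent uniform draws over the box
    rng = np.random.default_rng(1)
    mc = lo + (hi - lo) * rng.random((n, 3))

    # Latin hypercube: exactly one sample in each of n strata per dimension
    lhs = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(n), lo, hi)

    # Stratification reduces the variance of per-dimension sample statistics
    print("MC  per-dim means:", mc.mean(axis=0).round(4))
    print("LHS per-dim means:", lhs.mean(axis=0).round(4))
    ```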

  7. MUSiC - A general search for deviations from monte carlo predictions in CMS

    NASA Astrophysics Data System (ADS)

    Biallass, Philipp A.; CMS Collaboration

    2009-06-01

    A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.

  8. MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS

    NASA Astrophysics Data System (ADS)

    Hof, Carsten

    2009-05-01

    We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.

  9. Sensitivity analyses of stopping distance for connected vehicles at active highway-rail grade crossings.

    PubMed

    Hsu, Chung-Jen; Jones, Elizabeth G

    2017-02-01

This paper performs sensitivity analyses of stopping distance for connected vehicles (CVs) at active highway-rail grade crossings (HRGCs). Stopping distance is the major safety factor at active HRGCs. A sensitivity analysis is performed for each variable in the stopping-distance function. The formulation of stopping distance treats each variable as a probability density function for implementing Monte Carlo simulations. The results of the sensitivity analysis show that stopping distances of CVs and non-CVs are most sensitive to the initial speed. The safety of CVs can be further improved by the early provision of onboard train information and warnings to reduce initial speeds. Copyright © 2016 Elsevier Ltd. All rights reserved.
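
    A minimal sketch of the kind of simulation described, using the textbook stopping-distance relation d = v*t_r + v^2/(2a) rather than the authors' exact formulation, with assumed input distributions and a crude correlation-based sensitivity ranking:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Assumed input distributions (illustrative, not the paper's values)
    v   = rng.normal(25.0, 3.0, n)                      # initial speed [m/s]
    t_r = rng.lognormal(np.log(1.0), 0.3, n)            # perception-reaction time [s]
    a   = np.clip(rng.normal(3.4, 0.5, n), 0.5, None)   # deceleration [m/s^2]

    # Textbook stopping distance: reaction distance + braking distance
    d = v * t_r + v**2 / (2.0 * a)

    print("95% interval of stopping distance [m]:", np.percentile(d, [2.5, 97.5]).round(1))
    # Crude correlation-based sensitivity ranking of the inputs
    for name, x in [("initial speed", v), ("reaction time", t_r), ("deceleration", a)]:
        print(f"corr(d, {name:13s}) = {np.corrcoef(x, d)[0, 1]:+.2f}")
    ```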

  10. Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.

    PubMed

    Munir, Mohammad

    2018-06-01

    Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that the measurements of insulin, 62 min after the administration of the glucose bolus into the experimental subject's body, possess no information about the parameter estimates. The glucose measurements possess the information about the parameter estimates up to three hours. These observations have been verified by the parameter estimation of the minimal model. The standard errors of the estimates and crude Monte Carlo process also confirm this observation. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
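
    As a minimal illustration of how first-order sensitivities feed an uncertainty analysis, the sketch below applies the standard first-order ("sandwich") propagation Var(k)/k^2 ≈ s^T C s to correlated density parameters. The sensitivity and covariance values are invented, not taken from the paper.

    ```python
    import numpy as np

    # Relative first-order sensitivities (dk/k per dN/N) for three densities
    # and their relative covariance matrix -- all values invented for illustration
    s = np.array([0.35, -0.05, 0.12])
    C = np.array([[4.0e-4, 1.0e-4, 0.0   ],
                  [1.0e-4, 2.5e-4, 0.0   ],
                  [0.0,    0.0,    9.0e-4]])

    rel_var_k = s @ C @ s          # first-order "sandwich" propagation
    print(f"relative standard deviation of k ~ {np.sqrt(rel_var_k):.4f}")
    ```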

  12. Coupled reactors analysis: New needs and advances using Monte Carlo methodology

    DOE PAGES

    Aufiero, M.; Palmiotti, G.; Salvatores, M.; ...

    2016-08-20

Coupled reactors and the coupling features of large or heterogeneous core reactors can be investigated with the Avery theory, which allows a physics understanding of the main features of these systems. However, the complex geometries that are often encountered in association with coupled reactors require a detailed geometry description that can be easily provided by modern Monte Carlo (MC) codes. This implies an MC calculation of the coupling parameters defined by Avery and of the sensitivity coefficients that allow further detailed physics analysis. The results presented in this paper show that the MC code SERPENT has been successfully modified to meet the required capabilities.

  13. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g. touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
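
    One of the sensitivity measures mentioned, estimating success probability as a function of a dispersed input, can be sketched as follows on synthetic data standing in for Monte Carlo runs; the requirement threshold and variable names are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 20_000

    # Synthetic stand-ins for dispersed inputs and one performance metric
    thrust_disp = rng.normal(0.0, 1.0, n)                      # e.g. thrust dispersion
    touchdown_miss = 2.0 + 0.8 * thrust_disp + rng.normal(0.0, 1.0, n)
    success = touchdown_miss < 3.5                             # invented requirement

    # Success probability estimated in deciles of the candidate driving input
    edges = np.quantile(thrust_disp, np.linspace(0.0, 1.0, 11))
    bins = np.clip(np.digitize(thrust_disp, edges) - 1, 0, 9)
    for b in range(10):
        in_bin = bins == b
        print(f"decile {b}: P(success) ~ {success[in_bin].mean():.2f} (n={in_bin.sum()})")
    ```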

  14. Statistical methods for launch vehicle guidance, navigation, and control (GN&C) system design and analysis

    NASA Astrophysics Data System (ADS)

    Rose, Michael Benjamin

    A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical formulations that are discussed are applicable to ascent on Earth or other planets as well as other rocket-powered systems such as sounding rockets and ballistic missiles.
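
    A toy illustration of why linear covariance is so much cheaper than Monte Carlo: for a linear system, a single deterministic recursion P <- F P F^T + Q replaces thousands of random trajectories. The two-state system below is a placeholder, not the paper's 6-DOF GN&C models.

    ```python
    import numpy as np

    # Toy two-state linear system x_{k+1} = F x_k + w_k,  w_k ~ N(0, Q)
    F = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    Q = np.diag([1e-4, 1e-3])
    P0 = np.diag([1e-2, 1e-2])
    steps, n = 50, 5_000

    # Linear covariance: a single deterministic recursion P <- F P F^T + Q
    P = P0.copy()
    for _ in range(steps):
        P = F @ P @ F.T + Q

    # Monte Carlo: propagate thousands of random trajectories, then take sample covariance
    rng = np.random.default_rng(0)
    X = rng.multivariate_normal(np.zeros(2), P0, size=n)
    for _ in range(steps):
        X = X @ F.T + rng.multivariate_normal(np.zeros(2), Q, size=n)

    print("LinCov      diag(P):", np.diag(P).round(4))
    print("Monte Carlo diag(P):", np.diag(np.cov(X.T)).round(4))
    ```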

  15. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giuseppe Palmiotti

In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.

  16. Proof-of-Concept Study for Uncertainty Quantification and Sensitivity Analysis using the BRL Shaped-Charge Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Justin Matthew

These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL Shaped-Charge Geometry in PAGOSA, mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) Parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
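
    The uncertainty metric described, the 95% data range about the median from brute-force Monte Carlo sampling, can be sketched as follows; the stand-in surrogate below is invented and does not reproduce the RBFN surrogate or the JWL parameter ranges:

    ```python
    import numpy as np

    # Stand-in surrogate for jet tip velocity as a function of four sampled inputs
    # (a real study would evaluate the RBFN surrogate or the hydrocode itself)
    def surrogate(det_vel, rho0, c1, b1):
        return 7.5 + 0.4 * det_vel + 0.2 * rho0 + 0.05 * c1 - 0.03 * b1

    rng = np.random.default_rng(3)
    n = 100_000
    y = surrogate(*(rng.normal(0.0, 1.0, (4, n))))   # brute-force random sampling

    median = np.median(y)
    lo, hi = np.percentile(y, [2.5, 97.5])           # 95% data range about the median
    print(f"median = {median:.3f}, 95% range = [{lo:.3f}, {hi:.3f}]")
    ```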

  17. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g. touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.

  18. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
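
    A minimal sketch (not the authors' tool) of the core idea: recognize "value +/- tol" tokens in an existing input file and emit dispersed copies for a Monte Carlo sweep. A uniform distribution over the tolerance band is assumed here purely for illustration; the field names are invented.

    ```python
    import re
    import numpy as np

    TOL = re.compile(r"(-?\d+\.?\d*)\s*\+/-\s*(\d+\.?\d*)")

    def disperse(text, rng):
        """Replace every 'value +/- tol' token with a draw from [v - tol, v + tol].
        (A uniform distribution is assumed here purely for illustration.)"""
        def draw(m):
            v, tol = float(m.group(1)), float(m.group(2))
            return f"{rng.uniform(v - tol, v + tol):.6g}"
        return TOL.sub(draw, text)

    template = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.89 +/- 0.02\n"
    rng = np.random.default_rng(0)
    for case in range(3):            # three dispersed input decks for the sweep
        print(f"--- case {case} ---\n{disperse(template, rng)}")
    ```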

  19. UNCERTAINTY AND SENSITIVITY ANALYSIS OF RUNOFF AND SEDIMENT YIELD IN A SMALL AGRICULTURAL WATERSHED WITH KINEROS2

    EPA Science Inventory

    Using the Monte Carlo (MC) method, this paper derives arithmetic and geometric means and associated variances of the net capillary drive parameter, G, that appears in the Parlange infiltration model, as a function of soil texture and antecedent soil moisture content. App...

  20. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. Sensitive volume is estimated, by using Monte-Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
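
    A minimal sketch of the weighted Monte-Carlo estimator described, V ≈ V_astro * (1/N) Σ_found p_pop(x)/p_inj(x), reduced to a single made-up parameter dimension and a toy detection model; the distributions, volume and detection rule are placeholders, not those of an actual search:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    N = 200_000
    V_astro = 50.0                  # volume in which injections are placed (made-up units)

    # Generic injection distribution over a single made-up parameter (e.g. chirp mass)
    p_inj = stats.uniform(5.0, 75.0)                 # uniform on [5, 80]
    m = p_inj.rvs(N, random_state=rng)

    # Toy detection model: heavier systems cross the SNR threshold more often
    found = rng.random(N) < np.clip(m / 80.0, 0.0, 1.0)

    def sensitive_volume(p_pop):
        """Importance-weighted estimate reusing the SAME injection set."""
        w = p_pop.pdf(m) / p_inj.pdf(m)              # population / injection probability ratio
        return V_astro * np.sum(w * found) / N

    print("population model A:", round(sensitive_volume(stats.truncnorm(-1.5, 3.0, 20.0, 10.0)), 2))
    print("population model B:", round(sensitive_volume(stats.truncnorm(-2.0, 2.0, 50.0, 10.0)), 2))
    ```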

  1. keV-Scale sterile neutrino sensitivity estimation with time-of-flight spectroscopy in KATRIN using self-consistent approximate Monte Carlo

    NASA Astrophysics Data System (ADS)

    Steinbrink, Nicholas M. N.; Behrens, Jan D.; Mertens, Susanne; Ranitzsch, Philipp C.-O.; Weinheimer, Christian

    2018-03-01

We investigate the sensitivity of the Karlsruhe Tritium Neutrino Experiment (KATRIN) to keV-scale sterile neutrinos, which are promising dark matter candidates. Since the active-sterile mixing would lead to a second component in the tritium β-spectrum with a weak relative intensity of order sin^2θ ≲ 10^{-6}, additional experimental strategies are required to extract this small signature and to eliminate systematics. A possible strategy is to run the experiment in an alternative time-of-flight (TOF) mode, yielding differential TOF spectra in contrast to the integrating standard mode. In order to estimate the sensitivity from a reduced sample size, a new analysis method, called self-consistent approximate Monte Carlo (SCAMC), has been developed. The simulations show that an ideal TOF mode would be able to achieve a statistical sensitivity of sin^2θ ~ 5 × 10^{-9} at one σ, improving the standard mode by approximately a factor two. This relative benefit grows significantly if additional exemplary systematics are considered. A possible implementation of the TOF mode with existing hardware, called gated filtering, is investigated, which, however, comes at the price of a reduced average signal rate.

  2. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    NASA Astrophysics Data System (ADS)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated in order to achieve a more accurate distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
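
    A short sketch of the plain Monte Carlo versus quasi-Monte Carlo comparison on a smooth six-dimensional test integrand (not the air-pollution model), assuming SciPy >= 1.7 for scipy.stats.qmc:

    ```python
    import numpy as np
    from scipy.stats import qmc

    d = 6                                        # six inputs, echoing the six reaction rates
    f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)   # smooth test integrand, exact mean 1.0
    n = 2**12

    rng = np.random.default_rng(0)
    mc_est = f(rng.random((n, d))).mean()        # plain Monte Carlo

    sobol = qmc.Sobol(d=d, scramble=True, seed=0)
    qmc_est = f(sobol.random(n)).mean()          # scrambled Sobol sequence

    print(f"plain MC error : {abs(mc_est - 1.0):.2e}")
    print(f"Sobol QMC error: {abs(qmc_est - 1.0):.2e}")
    ```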

  3. Sensitivity Analysis of the Sheet Metal Stamping Processes Based on Inverse Finite Element Modeling and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Yu, Maolin; Du, R.

    2005-08-01

Sheet metal stamping is one of the most commonly used manufacturing processes and, hence, much research has been carried out for economic gain. Searching through the literature, however, it is found that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, product quality may vary owing to a number of factors, such as inhomogeneity of the workpiece material, loading error, lubrication, etc. Presently, few methods seem able to predict the quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With an acceptable accuracy, the inverse FEM (also called one-step FEM) requires a much smaller computational load than the usual incremental FEM and, hence, can be used to predict quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented, including drawing a rectangular box and drawing a two-step rectangular box.

  4. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol’, to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more

  5. New infrastructure for studies of transmutation and fast systems concepts

    NASA Astrophysics Data System (ADS)

    Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria

    2017-09-01

    In this work we report initial studies on a low power Accelerator-Driven System as a possible experimental facility for the measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.

  6. A low power ADS for transmutation studies in fast systems

    NASA Astrophysics Data System (ADS)

    Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria

    2017-12-01

    In this work, we report studies on a fast low power accelerator driven system model as a possible experimental facility, focusing on its capabilities in terms of measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.

  7. Uncertainty and sensitivity analysis of control strategies using the benchmark simulation model No1 (BSM1).

    PubMed

    Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V

    2009-01-01

The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1, when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predictions, considering the ASM1 bio-kinetic parameters and influent fractions as input uncertainties, while the Effluent Quality Index (EQI) and the Operating Cost Index (OCI) are focused on as model outputs. The resulting Monte Carlo simulations are presented using descriptive statistics indicating the degree of uncertainty in the predicted EQI and OCI. Next, the Standardized Regression Coefficients (SRC) method is used for sensitivity analysis to identify which input parameters influence the uncertainty in the EQI predictions the most. The results show that control strategies including an ammonium (S(NH)) controller reduce uncertainty in both overall pollution removal and effluent total Kjeldahl nitrogen. Also, control strategies with an external carbon source reduce the effluent nitrate (S(NO)) uncertainty, increasing both their economic cost and variability as a trade-off. Finally, the maximum specific autotrophic growth rate (micro(A)) causes most of the variance in the effluent for all the evaluated control strategies. The influence of denitrification related parameters, e.g. eta(g) (anoxic growth rate correction factor) and eta(h) (anoxic hydrolysis rate correction factor), becomes less important when a S(NO) controller manipulating an external carbon source addition is implemented.
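
    The Standardized Regression Coefficients step can be sketched as follows: standardize the sampled inputs and the output, fit a linear model, and read the coefficients as SRCs. The parameter names echo ASM1, but the data below are synthetic placeholders, not BSM1 output.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 5_000

    # Synthetic stand-ins for sampled bio-kinetic parameters (not ASM1 calibration values)
    mu_A  = rng.normal(0.80, 0.08, n)     # autotrophic max. growth rate
    eta_g = rng.normal(0.80, 0.10, n)     # anoxic growth correction factor
    b_H   = rng.normal(0.30, 0.03, n)     # heterotrophic decay rate
    X = np.column_stack([mu_A, eta_g, b_H])

    # Synthetic effluent-quality response with noise (placeholder for BSM1 output)
    y = 5.0 - 4.0 * mu_A + 1.5 * eta_g + 0.5 * b_H + rng.normal(0.0, 0.2, n)

    # Standardize inputs and output, fit a linear model, read coefficients as SRCs
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    ys = (y - y.mean()) / y.std(ddof=1)
    A = np.column_stack([np.ones(n), Xs])
    beta, *_ = np.linalg.lstsq(A, ys, rcond=None)
    resid = ys - A @ beta
    print("R^2 of linear fit:", round(1.0 - np.var(resid) / np.var(ys), 3))
    for name, b in zip(["mu_A", "eta_g", "b_H"], beta[1:]):
        print(f"SRC({name}) = {b:+.3f}")
    ```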

  8. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.

  9. Calibration of the Top-Quark Monte Carlo Mass.

    PubMed

    Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf

    2016-04-22

    We present a method to establish, experimentally, the relation between the top-quark mass m_{t}^{MC} as implemented in Monte Carlo generators and the Lagrangian mass parameter m_{t} in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_{t}^{MC} and an observable sensitive to m_{t}, which does not rely on any prior assumptions about the relation between m_{t} and m_{t}^{MC}. The measured observable is independent of m_{t}^{MC} and can be used subsequently for a determination of m_{t}. The analysis strategy is illustrated with examples for the extraction of m_{t} from inclusive and differential cross sections for hadroproduction of top quarks.

  10. Societal costs in displaced transverse olecranon fractures: using decision analysis tools to find the most cost-effective strategy between tension band wiring and locked plating.

    PubMed

    Francis, Tittu; Washington, Travis; Srivastava, Karan; Moutzouros, Vasilios; Makhni, Eric C; Hakeos, William

    2017-11-01

Tension band wiring (TBW) and locked plating are common treatment options for Mayo IIA olecranon fractures. Clinical trials have shown excellent functional outcomes with both techniques. Although TBW implants are significantly less expensive than a locked olecranon plate, TBW often requires an additional operation for implant removal. To choose the most cost-effective treatment strategy, surgeons must understand how implant costs and returns to the operating room determine which strategy is most cost-effective. This cost-effectiveness analysis explored the optimal treatment strategy using decision analysis tools. An expected-value decision tree was constructed to estimate costs based on the 2 implant choices. Values for critical variables, such as implant removal rate, were obtained from the literature. A Monte Carlo simulation consisting of 100,000 trials was used to incorporate variability in medical costs and implant removal rates. Sensitivity analysis and strategy tables were used to show how different variables influence the most cost-effective strategy. TBW was the most cost-effective strategy, with a cost savings of approximately $1300. TBW was also the dominant strategy by being the most cost-effective solution in 63% of the Monte Carlo trials. Sensitivity analysis identified implant costs for plate fixation and surgical costs for implant removal as the most sensitive parameters influencing the cost-effective strategy. Strategy tables showed the most cost-effective solution as 2 parameters vary simultaneously. TBW is the most cost-effective strategy in treating Mayo IIA olecranon fractures despite a higher rate of return to the operating room. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
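
    A minimal sketch of an expected-value decision tree with Monte Carlo sampling of costs and implant-removal probabilities; all distributions below are illustrative placeholders, not the study's inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(2024)
    n = 100_000

    # Illustrative cost and probability distributions (not the study's inputs)
    cost_index_surgery = rng.normal(8000.0, 1000.0, n)   # common to both strategies
    cost_tbw_implant   = rng.normal(200.0, 50.0, n)      # tension band wiring hardware
    cost_plate_implant = rng.normal(1200.0, 200.0, n)    # locked plate hardware
    cost_removal       = rng.normal(4000.0, 800.0, n)    # return to OR for implant removal
    p_removal_tbw      = rng.beta(60, 40, n)             # higher removal rate after TBW
    p_removal_plate    = rng.beta(20, 80, n)

    # Expected cost of each arm of the decision tree, per simulation trial
    cost_tbw   = cost_index_surgery + cost_tbw_implant   + p_removal_tbw   * cost_removal
    cost_plate = cost_index_surgery + cost_plate_implant + p_removal_plate * cost_removal

    savings = cost_plate - cost_tbw
    print(f"mean savings with TBW: ${savings.mean():,.0f}")
    print(f"TBW is the cheaper strategy in {100 * (savings > 0).mean():.1f}% of trials")
    ```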

  11. Performance evaluation for pinhole collimators of small gamma camera by MTF and NNPS analysis: Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Jeon, Hosang; Kim, Hyunduk; Cha, Bo Kyung; Kim, Jong Yul; Cho, Gyuseong; Chung, Yong Hyun; Yun, Jong-Il

    2009-06-01

Presently, the gamma camera system is widely used in various medical diagnostic, industrial and environmental fields. Hence, quantitative and effective evaluation of its imaging performance is essential for design and quality assurance. The National Electrical Manufacturers Association (NEMA) standards for gamma camera evaluation are insufficient for sensitive evaluation. In this study, the modulation transfer function (MTF) and normalized noise power spectrum (NNPS) are suggested for evaluating the performance of a small gamma camera with changeable pinhole collimators using Monte Carlo simulation. We simulated the system with a cylinder and a disk source, and seven different lead pinhole collimators with pinhole diameters from 1 to 4 mm. The MTF and NNPS data were obtained from output images and were compared with full-width at half-maximum (FWHM), sensitivity and differential uniformity. As a result, we found that MTF and NNPS are effective, novel standards for evaluating the imaging performance of gamma cameras in place of the conventional NEMA standards.

  12. SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dormody, M.; Johnson, R. P.; Atwood, W. B.

    2011-12-01

We quantitatively establish the sensitivity to the detection of young to middle-aged, isolated, gamma-ray pulsars through blind searches of Fermi Large Area Telescope (LAT) data using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect of positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.

  13. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  14. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger

    2008-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A inflight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  15. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael

    2007-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  16. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimate. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
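
    Reliability as the probability that a limit-state function is negative can be sketched with the simple Monte Carlo estimator below, using a made-up strength-minus-stress limit state; the distributions are not tied to any particular laminate or failure criterion.

    ```python
    import numpy as np

    rng = np.random.default_rng(17)
    n = 1_000_000

    # Illustrative random variables, not tied to a specific laminate
    strength = rng.lognormal(np.log(900.0), 0.08, n)   # lamina strength [MPa]
    stress   = rng.normal(650.0, 60.0, n)              # applied stress [MPa]

    g = strength - stress                       # limit state: failure when g < 0
    p_fail = np.mean(g < 0.0)
    se = np.sqrt(p_fail * (1.0 - p_fail) / n)   # standard error of the simple MC estimate
    print(f"P_f ~ {p_fail:.2e} +/- {se:.1e},  reliability ~ {1.0 - p_fail:.6f}")
    ```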

  17. Cost-effectiveness of digital subtraction angiography in the setting of computed tomographic angiography negative subarachnoid hemorrhage.

    PubMed

    Jethwa, Pinakin R; Punia, Vineet; Patel, Tapan D; Duffis, E Jesus; Gandhi, Chirag D; Prestigiacomo, Charles J

    2013-04-01

Recent studies have documented the high sensitivity of computed tomography angiography (CTA) in detecting a ruptured aneurysm in the presence of acute subarachnoid hemorrhage (SAH). The practice of digital subtraction angiography (DSA) when CTA does not reveal an aneurysm has thus been called into question. We examined this dilemma from a cost-effectiveness perspective by using current decision analysis techniques. A decision tree was created with the use of TreeAge Pro Suite 2012; in 1 arm, a CTA-negative SAH was followed up with DSA; in the other arm, patients were observed without further imaging. Based on literature review, costs and utilities were assigned to each potential outcome. Base-case and sensitivity analyses were performed to determine the cost-effectiveness of each strategy. A Monte Carlo simulation was then conducted by sampling each variable over a plausible distribution to evaluate the robustness of the model. With the use of a negative predictive value of 95.7% for CTA, observation was found to be the most cost-effective strategy ($6737/Quality Adjusted Life Year [QALY] vs $8460/QALY) in the base-case analysis. One-way sensitivity analysis demonstrated that DSA became the more cost-effective option if the negative predictive value of CTA fell below 93.72%. The Monte Carlo simulation produced an incremental cost-effectiveness ratio of $83,083/QALY. At the conventional willingness-to-pay threshold of $50,000/QALY, observation was the more cost-effective strategy in 83.6% of simulations. The decision to perform a DSA in CTA-negative SAH depends strongly on the sensitivity of CTA, and therefore must be evaluated at each center treating these types of patients. Given the high sensitivity of CTA reported in the current literature, performing DSA on all patients with CTA negative SAH may not be cost-effective at every institution.

  18. Influences of geological parameters to probabilistic assessment of slope stability of embankment

    NASA Astrophysics Data System (ADS)

    Nguyen, Qui T.; Le, Tuan D.; Konečný, Petr

    2018-04-01

This article considers the influence of geological parameters on the slope stability of an embankment in a probabilistic analysis using the SLOPE/W computational system. The stability of a simple slope is evaluated with and without pore-water pressure on the basis of variation in the soil properties. Normal distributions of unit weight, cohesion and internal friction angle are assumed. The Monte Carlo simulation technique is employed to analyze the critical slip surface. A sensitivity analysis is performed to observe the variation of the geological parameters and their effects on the factor of safety of the slope.
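
    A minimal sketch of the probabilistic slope-stability idea using the simplified infinite-slope factor of safety (SLOPE/W searches critical slip surfaces; this closed form is only an illustration), with normally distributed unit weight, cohesion and friction angle as in the abstract and invented geometry:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 100_000

    # Normally distributed soil properties, as in the abstract (values invented)
    gamma = rng.normal(18.0, 1.0, n)                       # unit weight [kN/m^3]
    c     = np.clip(rng.normal(10.0, 2.5, n), 0.0, None)   # cohesion [kPa]
    phi   = np.radians(rng.normal(28.0, 3.0, n))           # friction angle [rad]

    beta = np.radians(30.0)    # slope angle (fixed, invented geometry)
    z = 3.0                    # depth of the slip plane [m]

    # Infinite-slope factor of safety without pore-water pressure
    fs = (c + gamma * z * np.cos(beta)**2 * np.tan(phi)) / (gamma * z * np.sin(beta) * np.cos(beta))

    print(f"mean FS = {fs.mean():.2f},  P(FS < 1) = {np.mean(fs < 1.0):.3%}")
    for name, x in [("unit weight", gamma), ("cohesion", c), ("friction angle", phi)]:
        print(f"corr(FS, {name:14s}) = {np.corrcoef(x, fs)[0, 1]:+.2f}")
    ```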

  19. MAGIC-f Gel in Nuclear Medicine Dosimetry: study in an external beam of Iodine-131

    NASA Astrophysics Data System (ADS)

    Schwarcke, M.; Marques, T.; Garrido, C.; Nicolucci, P.; Baffa, O.

    2010-11-01

The applicability of MAGIC-f gel to Nuclear Medicine dosimetry was investigated by exposure to a 131I source. A calibration was made to provide known absorbed doses at different positions around the source. The absorbed dose in the gel was compared with a Monte Carlo simulation using the PENELOPE code and with thermoluminescent dosimetry (TLD). Using MRI analysis of the gel, an R2-dose sensitivity of 0.23 s^{-1} Gy^{-1} was obtained. The agreement between dose-distance curves obtained with the Monte Carlo simulation and TLD was better than 97%, and between MAGIC-f and TLD was better than 98%. The results show the potential of the polymer gel for applications in nuclear medicine where three-dimensional dose distributions are demanded.

  20. The probability of quantal secretion near a single calcium channel of an active zone.

    PubMed Central

    Bennett, M R; Farnell, L; Gibson, W G

    2000-01-01

    A Monte Carlo analysis has been made of calcium dynamics and quantal secretion at microdomains in which the calcium reaches very high concentrations over distances of <50 nm from a channel and for which calcium dynamics are dominated by diffusion. The kinetics of calcium ions in microdomains due to either the spontaneous or evoked opening of a calcium channel, both of which are stochastic events, are described in the presence of endogenous fixed and mobile buffers. Fluctuations in the number of calcium ions within 50 nm of a channel are considerable, with the standard deviation about half the mean. Within 10 nm of a channel these numbers of ions can give rise to calcium concentrations of the order of 100 microM. The temporal changes in free calcium and calcium bound to different affinity indicators in the volume of an entire varicosity or bouton following the opening of a single channel are also determined. A Monte Carlo analysis is also presented of how the dynamics of calcium ions at active zones, after the arrival of an action potential and the stochastic opening of a calcium channel, determine the probability of exocytosis from docked vesicles near the channel. The synaptic vesicles in active zones are found docked in a complex with their calcium-sensor associated proteins and a voltage-sensitive calcium channel, forming a secretory unit. The probability of quantal secretion from an isolated secretory unit has been determined for different distances of an open calcium channel from the calcium sensor within an individual unit: a threefold decrease in the probability of secretion of a quantum occurs with a doubling of the distance from 25 to 50 nm. The Monte Carlo analysis also shows that the probability of secretion of a quantum is most sensitive to the size of the single-channel current compared with its sensitivity to either the binding rates of the sites on the calcium-sensor protein or to the number of these sites that must bind a calcium ion to trigger exocytosis of a vesicle. PMID:10777721

  1. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
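
    Monte Carlo Filtering itself can be sketched as below: split the runs into "behavioral" and "non-behavioral" by an output criterion and rank the inputs by how differently they are distributed in the two groups, here with a two-sample Kolmogorov-Smirnov statistic on synthetic data; the PCA-derived combined directions proposed in the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(13)
    n = 10_000

    # Synthetic Monte Carlo inputs and a single output of interest
    X = rng.normal(size=(n, 4))
    y = 1.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0.0, 0.5, n)

    # Monte Carlo Filtering: split the runs by whether the output meets a criterion
    behavioral = y < 1.0

    # Rank inputs by how differently they are distributed in the two groups
    for i in range(X.shape[1]):
        stat = ks_2samp(X[behavioral, i], X[~behavioral, i]).statistic
        print(f"input {i}: two-sample KS statistic = {stat:.3f}")
    ```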

  2. Neuraxial blockade for external cephalic version: Cost analysis.

    PubMed

    Yamasato, Kelly; Kaneshiro, Bliss; Salcedo, Jennifer

    2015-07-01

    Neuraxial blockade (epidural or spinal anesthesia/analgesia) with external cephalic version increases the external cephalic version success rate. Hospitals and insurers may affect access to neuraxial blockade for external cephalic version, but the costs to these institutions remain largely unstudied. The objective of this study was to perform a cost analysis of neuraxial blockade use during external cephalic version from hospital and insurance payer perspectives. Secondarily, we estimated the effect of neuraxial blockade on cesarean delivery rates. A decision-analysis model was developed using costs and probabilities occurring prenatally through the delivery hospital admission. Model inputs were derived from the literature, national databases, and local supply costs. Univariate and bivariate sensitivity analyses and Monte Carlo simulations were performed to assess model robustness. Neuraxial blockade was cost saving to both hospitals ($30 per delivery) and insurers ($539 per delivery) using baseline estimates. From both perspectives, however, the model was sensitive to multiple variables. Monte Carlo simulation indicated neuraxial blockade to be more costly in approximately 50% of scenarios. The model demonstrated that routine use of neuraxial blockade during external cephalic version, compared to no neuraxial blockade, prevented 17 cesarean deliveries for every 100 external cephalic versions attempted. Neuraxial blockade is associated with minimal hospital and insurer cost changes in the setting of external cephalic version, while reducing the cesarean delivery rate. © 2015 The Authors. Journal of Obstetrics and Gynaecology Research © 2015 Japan Society of Obstetrics and Gynecology.

  3. Neuraxial blockade for external cephalic version: Cost analysis

    PubMed Central

    Yamasato, Kelly; Kaneshiro, Bliss; Salcedo, Jennifer

    2017-01-01

    Aim Neuraxial blockade (epidural or spinal anesthesia/analgesia) with external cephalic version increases the external cephalic version success rate. Hospitals and insurers may affect access to neuraxial blockade for external cephalic version, but the costs to these institutions remain largely unstudied. The objective of this study was to perform a cost analysis of neuraxial blockade use during external cephalic version from hospital and insurance payer perspectives. Secondarily, we estimated the effect of neuraxial blockade on cesarean delivery rates. Methods A decision–analysis model was developed using costs and probabilities occurring prenatally through the delivery hospital admission. Model inputs were derived from the literature, national databases, and local supply costs. Univariate and bivariate sensitivity analyses and Monte Carlo simulations were performed to assess model robustness. Results Neuraxial blockade was cost saving to both hospitals ($30 per delivery) and insurers ($539 per delivery) using baseline estimates. From both perspectives, however, the model was sensitive to multiple variables. Monte Carlo simulation indicated neuraxial blockade to be more costly in approximately 50% of scenarios. The model demonstrated that routine use of neuraxial blockade during external cephalic version, compared to no neuraxial blockade, prevented 17 cesarean deliveries for every 100 external cephalic versions attempted. Conclusions Neuraxial blockade is associated with minimal hospital and insurer cost changes in the setting of external cephalic version, while reducing the cesarean delivery rate. PMID:25771920

  4. Monte Carlo Analysis of the Battery-Type High Temperature Gas Cooled Reactor

    NASA Astrophysics Data System (ADS)

    Grodzki, Marcin; Darnowski, Piotr; Niewiński, Grzegorz

    2017-12-01

The paper presents a neutronic analysis of a battery-type 20 MWth high-temperature gas-cooled reactor. The reactor model is based on publicly available data for an 'early design' variant of the U-battery. The investigated core is a battery-type small modular reactor: a graphite-moderated, uranium-fueled, prismatic, helium-cooled high-temperature gas-cooled reactor with a graphite reflector. Two alternative core designs were investigated: the first has a central reflector and 30×4 prismatic fuel blocks, and the second has no central reflector and 37×4 blocks. The SERPENT Monte Carlo reactor physics code, with ENDF and JEFF nuclear data libraries, was applied. Several nuclear design static criticality calculations were performed and compared with available reference results. The analysis covered single-assembly models and full-core simulations for two geometry models: homogeneous and heterogeneous (explicit). A sensitivity analysis of the reflector graphite density was performed. Acceptable agreement between the calculations and the reference design was obtained. All calculations were performed for the fresh core state.

  5. Investigation of uncertainty in CO2 reservoir models: A sensitivity analysis of relative permeability parameter values

    DOE PAGES

    Yoshida, Nozomu; Levine, Jonathan S.; Stauffer, Philip H.

    2016-03-22

Numerical reservoir models of CO2 injection in saline formations rely on parameterization of laboratory-measured pore-scale processes. Here, we have performed a parameter sensitivity study and Monte Carlo simulations to determine the normalized change in total CO2 injected using the finite element heat and mass-transfer code (FEHM) numerical reservoir simulator. Experimentally measured relative permeability parameter values were used to generate distribution functions for parameter sampling. The parameter sensitivity study analyzed five different levels for each of the relative permeability model parameters. All but one of the parameters changed the CO2 injectivity by <10%, less than the geostatistical uncertainty that applies to all large subsurface systems due to natural geophysical variability and inherently small sample sizes. The exception was the end-point CO2 relative permeability, k^0_{r,CO2}, the maximum attainable effective CO2 permeability during CO2 invasion, which changed CO2 injectivity by as much as 80%. Similarly, Monte Carlo simulation using 1000 realizations of relative permeability parameters showed no relationship between CO2 injectivity and any of the parameters but k^0_{r,CO2}, which had a very strong (R^2 = 0.9685) power-law relationship with total CO2 injected. Model sensitivity to k^0_{r,CO2} points to the importance of accurate core flood and wettability measurements.

  6. Investigation of uncertainty in CO2 reservoir models: A sensitivity analysis of relative permeability parameter values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Nozomu; Levine, Jonathan S.; Stauffer, Philip H.

    Numerical reservoir models of CO2 injection in saline formations rely on parameterization of laboratory-measured pore-scale processes. Here, we have performed a parameter sensitivity study and Monte Carlo simulations to determine the normalized change in total CO2 injected using the finite element heat and mass-transfer code (FEHM) numerical reservoir simulator. Experimentally measured relative permeability parameter values were used to generate distribution functions for parameter sampling. The parameter sensitivity study analyzed five different levels for each of the relative permeability model parameters. All but one of the parameters changed the CO2 injectivity by <10%, less than the geostatistical uncertainty that applies to all large subsurface systems due to natural geophysical variability and inherently small sample sizes. The exception was the end-point CO2 relative permeability, $k^0_{r,CO2}$, the maximum attainable effective CO2 permeability during CO2 invasion, which changed CO2 injectivity by as much as 80%. Similarly, Monte Carlo simulation using 1000 realizations of relative permeability parameters showed no relationship between CO2 injectivity and any of the parameters but $k^0_{r,CO2}$, which had a very strong (R² = 0.9685) power-law relationship with total CO2 injected. Model sensitivity to $k^0_{r,CO2}$ points to the importance of accurate core flood and wettability measurements.

  7. Dynamic Stability of Uncertain Laminated Beams Under Subtangential Loads

    NASA Technical Reports Server (NTRS)

    Goyal, Vijay K.; Kapania, Rakesh K.; Adelman, Howard (Technical Monitor); Horta, Lucas (Technical Monitor)

    2002-01-01

    Because of the inherent complexity of fiber-reinforced laminated composites, it can be challenging to manufacture composite structures according to their exact design specifications, resulting in unwanted material and geometric uncertainties. In this research, we focus on the deterministic and probabilistic stability analysis of laminated structures subject to subtangential loading, a combination of conservative and nonconservative tangential loads, using the dynamic criterion. Thus a shear-deformable laminated beam element, including warping effects, is derived to study the deterministic and probabilistic response of laminated beams. This twenty-one-degree-of-freedom element can be used for solving both static and dynamic problems. In the first-order shear-deformable model used here we have employed a more accurate method to obtain the transverse shear correction factor. The dynamic version of the principle of virtual work for laminated composites is expressed in its nondimensional form, and the element tangent stiffness and mass matrices are obtained using analytical integration. The stability is studied by giving the structure a small disturbance about an equilibrium configuration and observing whether the resulting response remains small. In order to study the dynamic behavior with uncertainties included in the problem, three models were developed: Exact Monte Carlo Simulation, Sensitivity-Based Monte Carlo Simulation, and Probabilistic FEA. These methods were integrated into the developed finite element analysis. Perturbation and sensitivity analysis have also been used to study nonconservative problems, as well as to perform the stability analysis using the dynamic criterion.

  8. Monte-Carlo-based phase retardation estimator for polarization sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki

    2011-08-01

    A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.

  9. Monte Carlo calculation of the sensitivity of a commercial dose calibrator to gamma and beta radiation.

    PubMed

    Laedermann, Jean-Pascal; Valley, Jean-François; Bulling, Shelley; Bochud, François O

    2004-06-01

    The detection process used in a commercial dose calibrator was modeled using the GEANT 3 Monte Carlo code. Dose calibrator efficiency for gamma and beta emitters, and the response to monoenergetic photons and electrons was calculated. The model shows that beta emitters below 2.5 MeV deposit energy indirectly in the detector through bremsstrahlung produced in the chamber wall or in the source itself. Higher energy beta emitters (E > 2.5 MeV) deposit energy directly in the chamber sensitive volume, and dose calibrator sensitivity increases abruptly for these radionuclides. The Monte Carlo calculations were compared with gamma and beta emitter measurements. The calculations show that the variation in dose calibrator efficiency with measuring conditions (source volume, container diameter, container wall thickness and material, position of the source within the calibrator) is relatively small and can be considered insignificant for routine measurement applications. However, dose calibrator efficiency depends strongly on the inner-wall thickness of the detector.

  10. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the number of parameters to be modeled probabilistically to be reduced from 16 to 5. This reduced the computational complexity of the Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
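    Two of the screening steps named above, local sensitivity and rank correlation, can be sketched as follows. The model, parameter names, and ranges are hypothetical stand-ins for PCHEPM; the point is only the mechanics of scoring parameters by normalized local derivatives and by Spearman rank correlation of Monte Carlo samples.

```python
# Sketch of two screening steps (local sensitivity, rank correlation) applied
# to a generic stand-in model rather than the PCHEPM fate-and-transport model.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

def model(p):
    # Hypothetical response standing in for a predicted PCB concentration.
    return p[0] ** 2 + 0.5 * p[1] + 0.01 * p[2] * p[1]

nominal = np.array([1.0, 2.0, 3.0])
names = ["k_sorption", "k_volatilization", "settling_rate"]  # illustrative names

# 1) Local sensitivity: normalized finite-difference derivatives at the nominal point
local = []
for i in range(len(nominal)):
    dp = nominal.copy()
    dp[i] *= 1.01  # 1% perturbation
    local.append((model(dp) - model(nominal)) / (0.01 * model(nominal)))

# 2) Rank correlation: sample +/-50% around nominal values and correlate ranks
samples = nominal * rng.uniform(0.5, 1.5, size=(2000, 3))
outputs = np.apply_along_axis(model, 1, samples)
ranks = []
for i in range(3):
    rho, _ = spearmanr(samples[:, i], outputs)
    ranks.append(rho)

for name, l, r in zip(names, local, ranks):
    print(f"{name:18s} local={l:+.3f}  spearman={r:+.3f}")
```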

  11. Global emission projections of particulate matter (PM): II. Uncertainty analyses of on-road vehicle exhaust emissions

    NASA Astrophysics Data System (ADS)

    Yan, Fang; Winijkul, Ekbordin; Bond, Tami C.; Streets, David G.

    2014-04-01

    Estimates of future emissions are necessary for understanding the future health of the atmosphere, designing national and international strategies for air quality control, and evaluating mitigation policies. Emission inventories are uncertain and future projections even more so; it is therefore important to quantify the uncertainty inherent in emission projections. This paper is the second in a series that seeks to establish a more mechanistic understanding of future air pollutant emissions based on changes in technology. The first paper in this series (Yan et al., 2011) described a model that projects emissions based on dynamic changes of the vehicle fleet, Speciated Pollutant Emission Wizard-Trend, or SPEW-Trend. In this paper, we explore the underlying uncertainties of global and regional exhaust PM emission projections from on-road vehicles in the coming decades using sensitivity analysis and Monte Carlo simulation. This work examines the emission sensitivities due to uncertainties in retirement rate, timing of emission standards, transition rate of high-emitting vehicles called “superemitters”, and emission factor degradation rate. It is concluded that global emissions are most sensitive to parameters in the retirement rate function. Monte Carlo simulations show that emission uncertainty caused by lack of knowledge about technology composition is comparable to the uncertainty demonstrated by alternative economic scenarios, especially during the period 2010-2030.

  12. Study of the IMRT interplay effect using a 4DCT Monte Carlo dose calculation.

    PubMed

    Jensen, Michael D; Abdellatif, Ady; Chen, Jeff; Wong, Eugene

    2012-04-21

    Respiratory motion may lead to dose errors when treating thoracic and abdominal tumours with radiotherapy. The interplay between complex multileaf collimator patterns and patient respiratory motion could result in unintuitive dose changes. We have developed a treatment reconstruction simulation computer code that accounts for interplay effects by combining multileaf collimator controller log files, respiratory trace log files, 4DCT images and a Monte Carlo dose calculator. Two three-dimensional (3D) IMRT step-and-shoot plans, a concave target and an integrated boost, were delivered to a 1D rigid motion phantom. Three sets of experiments were performed with 100%, 50% and 25% duty cycle gating. The log files were collected, and five simulation types were performed on each data set: continuous isocentre shift, discrete isocentre shift, 4DCT, 4DCT delivery average and 4DCT plan average. Analysis was performed using 3D gamma analysis with passing criteria of 2%, 2 mm. The simulation framework was able to demonstrate that a single fraction of the integrated boost plan was more sensitive to interplay effects than the concave target plan. Gating was shown to reduce the interplay effects. We have developed a 4DCT Monte Carlo simulation method that accounts for IMRT interplay effects with respiratory motion by utilizing delivery log files.
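    The dose comparison metric used above can be illustrated with a simplified one-dimensional gamma index (the study itself used full 3D gamma analysis with 2%, 2 mm criteria). The profiles below are arbitrary test curves, not simulated treatment doses.

```python
# Simplified 1D gamma-index sketch; the published analysis used 3D gamma.
# Criteria: 2% dose difference (global normalization), 2 mm distance-to-agreement.
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dose_crit=0.02, dist_crit=2.0):
    """Return the gamma index at each evaluation point."""
    d_norm = dose_crit * dose_ref.max()
    gammas = np.empty_like(dose_eval)
    for i, (xi, de) in enumerate(zip(x, dose_eval)):
        dose_term = (de - dose_ref) / d_norm
        dist_term = (xi - x) / dist_crit
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

# Toy example: reference profile vs. a slightly shifted/scaled "delivered" profile
x = np.linspace(0, 100, 501)                 # position in mm
ref = np.exp(-((x - 50) / 15) ** 2)          # arbitrary reference dose
ev  = np.exp(-((x - 51) / 15) ** 2) * 1.01   # 1 mm shift, 1% scaling

g = gamma_1d(x, ref, ev)
print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1):.3f}")
```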

  13. EVALUATING THE SENSITIVITY OF RADIONUCLIDE DETECTORS FOR CONDUCTING A MARITIME ON-BOARD SEARCH USING MONTE CARLO SIMULATION IMPLEMENTED IN AVERT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S; Dave Dunn, D

    The sensitivity of two specific types of radionuclide detectors for conducting an on-board search in the maritime environment was evaluated using Monte Carlo simulation implemented in AVERT®. AVERT®, short for the Automated Vulnerability Evaluation for Risk of Terrorism, is personal-computer-based vulnerability assessment software developed by the ARES Corporation. The detectors, a RadPack and a Personal Radiation Detector (PRD), were chosen from the class of Human Portable Radiation Detection Systems (HPRDS). HPRDS serve multiple purposes. In the maritime environment, there is a need to detect, localize, characterize, and identify radiological/nuclear (RN) material or weapons. The RadPack is a commercially available broad-area search device used for both gamma and neutron detection. The PRD is chiefly used as a personal radiation protection device; it is also used to detect contraband radionuclides and to localize radionuclide sources. Neither device has the capacity to characterize or identify radionuclides. The principal aim of this study was to investigate the sensitivity of both the RadPack and the PRD while being used under controlled conditions in a simulated maritime environment for detecting hidden RN contraband. The detection distance varies with the source strength and the shielding present. The characterization parameters of the source are not indicated in this report, so the results summarized are relative. The Monte Carlo simulation results indicate the probability of detection of the RN source at given distances from the detector, which is a function of transverse speed and instrument sensitivity for the specified RN source.

  14. Monte Carlo analysis of the Titan III/Transfer Orbit Stage guidance system for the Mars Observer mission

    NASA Astrophysics Data System (ADS)

    Bell, Stephen C.; Ginsburg, Marc A.; Rao, Prabhakara P.

    An important part of space launch vehicle mission planning for a planetary mission is the integrated analysis of guidance and performance dispersions for both booster and upper stage vehicles. For the Mars Observer mission, an integrated trajectory analysis was used to maximize the scientific payload and to minimize injection errors by optimizing the energy management of both vehicles. This was accomplished by designing the Titan III booster vehicle to inject into a hyperbolic departure plane, and the Transfer Orbit Stage (TOS) to correct any booster dispersions. An integrated Monte Carlo analysis of the performance and guidance dispersions of both vehicles provided sensitivities, an evaluation of their guidance schemes, and an injection error covariance matrix. The polynomial guidance schemes used for the Titan III variable flight azimuth computations and the TOS solid rocket motor ignition time and burn direction derivations accounted for a wide variation of launch times, performance dispersions, and target conditions. The Mars Observer spacecraft was launched on 25 September 1992 on the Titan III/TOS vehicle. The post-flight analysis indicated that a near-perfect park orbit injection was achieved, followed by a trans-Mars injection with less than 2σ errors.

  15. A novel regenerative shock absorber with a speed doubling mechanism and its Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Ran; Wang, Xu; Liu, Zhenwei

    2018-03-01

    A novel regenerative shock absorber has been designed and fabricated. The novelty of the presented work is the application of a double-speed regenerative shock absorber that utilizes a rack-and-pinion mechanism to increase the magnet speed with respect to the coils for higher power output. Simulation models, with parameters identified from finite element analysis and experiments, are developed. The proposed regenerative shock absorber is compared with a regenerative shock absorber without the rack-and-pinion mechanism when both are integrated into the same quarter-vehicle suspension system. Sinusoidal and random road profile displacement excitations with a peak amplitude of 0.035 m are applied as the inputs in the frequency range of 0-25 Hz. It is found that, with both the sinusoidal and the random road profile displacement inputs, the proposed design can increase the output power by a factor of four compared to the baseline design. The proposed double-speed regenerative shock absorber is also shown by the Monte Carlo simulation to be more sensitive to road profile irregularity than the single-speed regenerative shock absorber. Lastly, the coil mass and amplification factor are studied for sensitivity analysis and performance optimization, which provides a general design method for regenerative shock absorbers. It is shown that, for the system power output, the proposed design becomes more sensitive to either the coil mass or the amplification factor depending on the amount of coil mass. With a specifically selected combination of coil mass and amplification factor, optimized energy-harvesting performance can be achieved.

  16. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of New Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6, along with covariance files for the nuclear data, to determine a baseline upper subcritical limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.
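    At the core of such sensitivity-uncertainty methods is the "sandwich rule": the nuclear-data-induced variance of a response is S^T C S, where S is the sensitivity profile and C the covariance matrix of the data. The sketch below uses invented four-group numbers purely to show the arithmetic; Whisper and MCNP6 obtain S from adjoint-weighted tallies and C from evaluated covariance libraries.

```python
# Sandwich-rule sketch: variance of k-eff induced by nuclear-data uncertainty.
# The sensitivity profile S and relative covariance matrix C are made up.
import numpy as np

# Hypothetical sensitivity profile (dk/k per dsigma/sigma) over 4 energy groups
S = np.array([0.12, 0.30, 0.25, 0.08])

# Hypothetical relative covariance matrix for the same reaction and groups
C = np.array([
    [4.0e-4, 1.0e-4, 0.0,    0.0],
    [1.0e-4, 9.0e-4, 2.0e-4, 0.0],
    [0.0,    2.0e-4, 6.0e-4, 1.0e-4],
    [0.0,    0.0,    1.0e-4, 3.0e-4],
])

var_keff = S @ C @ S                     # relative variance of k-eff
print(f"nuclear-data uncertainty in k-eff: {np.sqrt(var_keff) * 1e5:.0f} pcm")
```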

  17. Validation of the Monte Carlo simulator GATE for indium-111 imaging.

    PubMed

    Assié, K; Gardin, I; Véra, P; Buvat, I

    2005-07-07

    Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.

  18. Gray: a ray tracing-based Monte Carlo simulator for PET

    NASA Astrophysics Data System (ADS)

    Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.

    2018-05-01

    Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.

  19. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form; this is the case in FEM updating. In such cases sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and large time steps; this is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared with the application of the HMC algorithm to the same structures.
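    A minimal sketch of plain Hybrid (Hamiltonian) Monte Carlo, the method that SHMC modifies, is given below for a correlated two-dimensional Gaussian target standing in for a FEM-updating posterior. It is not the shadow-Hamiltonian variant and is not tied to any structural model; the step size, trajectory length, and target are illustrative choices.

```python
# Basic HMC sketch with a leapfrog integrator; target is a 2D Gaussian posterior.
import numpy as np

rng = np.random.default_rng(3)

cov = np.array([[1.0, 0.8], [0.8, 1.0]])   # stand-in posterior covariance
prec = np.linalg.inv(cov)

def logp(q):
    return -0.5 * q @ prec @ q

def grad_logp(q):
    return -prec @ q

def hmc_step(q, eps=0.15, n_leap=20):
    p = rng.standard_normal(q.shape)               # sample auxiliary momentum
    q_new, p_new = q.copy(), p.copy()
    p_new = p_new + 0.5 * eps * grad_logp(q_new)   # initial half step
    for _ in range(n_leap):
        q_new = q_new + eps * p_new                # full position step
        p_new = p_new + eps * grad_logp(q_new)     # full momentum step
    p_new = p_new - 0.5 * eps * grad_logp(q_new)   # trim back to a half step
    # Metropolis accept/reject on the total energy H = -logp + kinetic
    h_old = -logp(q) + 0.5 * p @ p
    h_new = -logp(q_new) + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < h_old - h_new:
        return q_new, True
    return q, False

q = np.zeros(2)
samples, accepts = [], 0
for _ in range(5000):
    q, ok = hmc_step(q)
    accepts += ok
    samples.append(q)
samples = np.array(samples)
print("acceptance rate:", accepts / 5000)
print("sample covariance:\n", np.cov(samples[1000:].T))
```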

  20. Monte Carlo simulation of non-invasive glucose measurement based on FMCW LIDAR

    NASA Astrophysics Data System (ADS)

    Xiong, Bing; Wei, Wenxiong; Liu, Nan; He, Jian-Jun

    2010-11-01

    Continuous non-invasive glucose monitoring is a powerful tool for the treatment and management of diabetes. A glucose measurement method based on frequency modulated continuous wave (FMCW) LIDAR technology, with the potential advantage of miniaturizability with no moving parts, is proposed and investigated. The system mainly consists of an integrated near-infrared tunable semiconductor laser and a detector, using heterodyne technology to convert the signal from the time domain to the frequency domain. To investigate the feasibility of the method, Monte Carlo simulations have been performed on tissue phantoms with optical parameters similar to those of human interstitial fluid. The simulations showed that the sensitivity of the FMCW LIDAR system to glucose concentration can reach 0.2 mM. Our analysis suggests that the FMCW LIDAR technique has good potential for noninvasive blood glucose monitoring.

  1. A Monte Carlo Study of Lambda Hyperon Polarization at BM@N

    NASA Astrophysics Data System (ADS)

    Suvarieva, D.; Gudima, K.; Zinchenko, A.

    2018-03-01

    Heavy strange objects (hyperons) can provide essential signatures of excited and compressed baryonic matter. At NICA, it is planned to study hyperons both in the collider mode (MPD detector) and in the fixed-target mode (BM@N setup). Measurements of strange hyperon polarization can give additional information on the strong interaction mechanisms. In heavy-ion collisions, such measurements are even more valuable since the polarization is expected to be sensitive to characteristics of the QCD medium (vorticity, hydrodynamic helicity) and to QCD anomalous transport. In this analysis, the possibility of measuring at BM@N the polarization of the lightest strange hyperon, Λ, is studied in Monte Carlo event samples of Au + Au collisions produced with the DCM-QGSM generator. It is shown that the detector will allow the polarization to be measured with the precision required to check the model predictions.

  2. Study the sensitivity of dose calculation in prism treatment planning system using Monte Carlo simulation of 6 MeV electron beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardiansyah, D.; Haryanto, F.; Male, S.

    2014-09-30

    Prism is a non-commercial radiotherapy treatment planning system (RTPS) developed by Ira J. Kalet at the University of Washington. An inhomogeneity factor is included in the Prism TPS dose calculation. The aim of this study is to investigate the sensitivity of the dose calculation in Prism using Monte Carlo simulation. A phase space source from the linear accelerator (LINAC) head is used for the Monte Carlo simulation. To this end, the Prism dose calculation is compared with EGSnrc Monte Carlo simulation; the percentage depth dose (PDD) and R50 from both calculations are examined. BEAMnrc is used to simulate electron transport in the LINAC head and to produce a phase space file, which is then used as DOSXYZnrc input to simulate electron transport in the phantom. The study starts with a commissioning process in a water phantom, in which the Monte Carlo simulation is adjusted to match Prism; the commissioning result is then used for the inhomogeneity study. The physical parameters of the inhomogeneous phantom varied in this study are the density, location, and thickness of the tissue. The commissioning, which used R50 and the PDD with practical range (Rp) as references, shows that the optimum energy of the Monte Carlo simulation for the 6 MeV electron beam is 6.8 MeV. From the inhomogeneity study, the average deviation for all cases in the region of interest is below 5%. Based on ICRU recommendations, Prism has good ability to calculate the radiation dose in inhomogeneous tissue.

  3. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
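    The essence of Bayesian Monte Carlo can be sketched as prior sampling followed by likelihood weighting. The one-parameter "ozone model", the observation, and its error below are invented; the record's actual Lagrangian photochemical model is far more elaborate.

```python
# Minimal Bayesian Monte Carlo sketch: draw parameters from subjective priors,
# run a (toy) model, and weight each draw by the likelihood of observations.
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

def toy_model(k):
    return 40.0 + 30.0 * k          # predicted peak ozone (ppb) vs. rate factor k

obs, obs_sd = 65.0, 5.0             # hypothetical observation and its uncertainty

k_prior = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # "prior" uncertainty in k
pred = toy_model(k_prior)

# Likelihood weights (Gaussian measurement error), normalized to sum to one
w = np.exp(-0.5 * ((pred - obs) / obs_sd) ** 2)
w /= w.sum()

post_mean = np.sum(w * pred)
post_sd = np.sqrt(np.sum(w * (pred - post_mean) ** 2))
print(f"prior predictive sd  : {pred.std():.1f} ppb")
print(f"posterior mean +/- sd: {post_mean:.1f} +/- {post_sd:.1f} ppb")
```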

  4. Estimating the Standard Error of Robust Regression Estimates.

    DTIC Science & Technology

    1987-03-01

    Only fragments of optical character recognition text from this scanned report are legible. They indicate that a standard error is of order O(n^(4/5)); that a Monte Carlo study by McKean and Schrader (1984) examined tests obtained by studentizing the estimate; and that the cited references include Sheather, S. J. and McKean, J. W. (1987), a Wiley (New York) volume, and Welsch, R. E. (1980), "Regression Sensitivity Analysis and Bounded-Influence Estimation," in Evaluation of Econometric Models.

  5. Diffusion of oxygen interstitials in UO2+x using kinetic Monte Carlo simulations: Role of O/M ratio and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Behera, Rakesh K.; Watanabe, Taku; Andersson, David A.; Uberuaga, Blas P.; Deo, Chaitanya S.

    2016-04-01

    Oxygen interstitials in UO2+x significantly affect the thermophysical properties and microstructural evolution of the oxide nuclear fuel. In hyperstoichiometric urania (UO2+x), these oxygen interstitials form different types of defect clusters, which have different migration behavior. In this study we have used kinetic Monte Carlo (kMC) to evaluate diffusivities of oxygen interstitials accounting for mono- and di-interstitial clusters. Our results indicate that the predicted diffusivities increase significantly at higher non-stoichiometry (x > 0.01) for di-interstitial clusters compared to a mono-interstitial-only model. The diffusivities calculated at higher temperatures compare better with experimental values than those at lower temperatures (< 973 K). We discuss the activation energies for diffusion obtained with the mono- and di-interstitial models. A careful sensitivity analysis was performed to estimate the effect of the input di-interstitial binding energies on the predicted diffusivities and activation energies. While this article only discusses mono- and di-interstitials in evaluating the oxygen diffusion response in UO2+x, future improvements to the model will primarily focus on including energetic definitions of the larger stable interstitial clusters reported in the literature. The addition of larger clusters to the kMC model is expected to improve the comparison of oxygen transport in UO2+x with experiment.
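    The residence-time (BKL) loop that underlies kinetic Monte Carlo studies of this kind can be sketched for a single interstitial hopping on a 1D lattice. The barrier, attempt frequency, jump distance, and temperature below are illustrative, not the UO2+x cluster energetics used in the paper.

```python
# Residence-time (BKL) kinetic Monte Carlo sketch: one interstitial on a 1D lattice.
import numpy as np

rng = np.random.default_rng(5)
kB, T = 8.617e-5, 1200.0             # Boltzmann constant (eV/K), temperature (K)
nu0, a = 1.0e13, 2.7e-10             # attempt frequency (1/s), jump distance (m)
barrier = 0.9                        # migration barrier (eV), illustrative

def rate(Ea):
    """Arrhenius hop rate for a single barrier."""
    return nu0 * np.exp(-Ea / (kB * T))

n_walkers, n_hops = 100, 1000
D_samples = []
for _ in range(n_walkers):
    x, t = 0.0, 0.0
    for _ in range(n_hops):
        # rates to the left and right neighbours (symmetric lattice here);
        # in a real study these would depend on the local cluster configuration
        rates = np.array([rate(barrier), rate(barrier)])
        R = rates.sum()
        x += -a if rng.uniform() < rates[0] / R else a   # pick event by its rate
        t += -np.log(rng.uniform()) / R                  # advance the KMC clock
    D_samples.append(x * x / (2.0 * t))                  # crude 1D estimate per walker

print(f"kMC estimate of D ~ {np.mean(D_samples):.2e} m^2/s "
      f"(analytic Gamma*a^2 = {rate(barrier) * a**2:.2e} m^2/s)")
```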

  6. Variability Analysis of MOS Differential Amplifier

    NASA Astrophysics Data System (ADS)

    Aoki, Masakazu; Seto, Kenji; Yamawaki, Taizo; Tanaka, Satoshi

    Variation characteristics of a MOS differential amplifier are evaluated using concise statistical model parameters for SPICE simulation. We find that the variation in the differential-mode gain, Adm, induced by the current-factor variation, Δβ0, in the Id variation of the differential MOS transistors is more than one order of magnitude larger than that induced by the threshold voltage variation, ΔVth, which has been regarded as a major factor for circuit variations in SoCs (2). The results obtained by the Monte Carlo simulations are verified by theoretical analysis combined with a sensitivity analysis that clarifies the specific device-parameter dependences of the variation in Adm.

  7. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC.

    PubMed

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R

    2017-07-12

    This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter sensitive to the model performance of the ON-N and NH₃-N simulations, whereas the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive to the ON-N and NO₃-N simulations, as measured by global sensitivity analysis.

  8. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis.

    PubMed

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
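    The weight-perturbation Monte Carlo step of such an uncertainty-sensitivity analysis can be sketched as follows. The criteria rasters and AHP weights are random stand-ins, not GIS data; the snippet only shows how per-cell variability of a weighted-linear-combination susceptibility score is obtained by resampling the weights.

```python
# Sketch of Monte Carlo weight perturbation for MCDA sensitivity analysis.
# Criteria layers and baseline weights below are invented stand-ins.
import numpy as np

rng = np.random.default_rng(6)

n_cells, n_criteria = 5000, 5
criteria = rng.uniform(0.0, 1.0, size=(n_cells, n_criteria))  # normalized layers
w_ahp = np.array([0.35, 0.25, 0.20, 0.12, 0.08])              # baseline AHP weights

n_runs = 1000
scores = np.empty((n_runs, n_cells))
for r in range(n_runs):
    # perturb weights by +/-20% and renormalize so they still sum to one
    w = w_ahp * rng.uniform(0.8, 1.2, n_criteria)
    w /= w.sum()
    scores[r] = criteria @ w          # weighted linear combination per cell

mean_score = scores.mean(axis=0)
uncertainty = scores.std(axis=0)      # per-cell sensitivity to the weights
print(f"median per-cell std of susceptibility score: {np.median(uncertainty):.4f}")
```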

  9. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis

    NASA Astrophysics Data System (ADS)

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.

  10. Sensitivity analyses for sparse-data problems-using weakly informative bayesian priors.

    PubMed

    Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R

    2013-03-01

    Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.
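    A sketch of the kind of analysis described, using a random-walk Metropolis sampler rather than the authors' software, is shown below. The sparse 2x2 table and prior standard deviations are invented; the example only illustrates how a weakly informative Normal prior on the log odds ratio shrinks the estimate when data are sparse.

```python
# Sketch (not the paper's exact analysis): random-walk Metropolis for a log odds
# ratio with a near-flat vs. a weakly informative Normal prior, on made-up data.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sparse data: (cases, total) for exposed and unexposed groups
y1, n1 = 3, 10      # exposed
y0, n0 = 1, 12      # unexposed

def log_post(alpha, beta, prior_sd):
    p1 = 1.0 / (1.0 + np.exp(-(alpha + beta)))     # risk in exposed
    p0 = 1.0 / (1.0 + np.exp(-alpha))              # risk in unexposed
    ll = (y1 * np.log(p1) + (n1 - y1) * np.log(1 - p1)
          + y0 * np.log(p0) + (n0 - y0) * np.log(1 - p0))
    lp = -0.5 * (alpha / 10.0) ** 2 - 0.5 * (beta / prior_sd) ** 2
    return ll + lp

def sample(prior_sd, n_iter=20_000):
    theta = np.zeros(2)                            # (alpha, beta = log OR)
    lp = log_post(*theta, prior_sd)
    draws = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0, 0.5, 2)
        lp_prop = log_post(*prop, prior_sd)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(theta[1])
    return np.array(draws[5000:])                  # discard burn-in

for sd in (10.0, 1.5):   # near-flat vs. weakly informative prior on the log OR
    b = sample(sd)
    print(f"prior sd={sd:4.1f}: posterior median OR = {np.exp(np.median(b)):.2f}")
```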

  11. Sensitivity Analyses for Sparse-Data Problems—Using Weakly Informative Bayesian Priors

    PubMed Central

    Hamra, Ghassan B.; MacLehose, Richard F.; Cole, Stephen R.

    2013-01-01

    Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist. PMID:23337241

  12. The Cherenkov Telescope Array production system for Monte Carlo simulations and analysis

    NASA Astrophysics Data System (ADS)

    Arrabito, L.; Bernloehr, K.; Bregeon, J.; Cumani, P.; Hassan, T.; Haupt, A.; Maier, G.; Moralejo, A.; Neyroud, N.; for the CTA Consortium and the DIRAC Consortium

    2017-10-01

    The Cherenkov Telescope Array (CTA), an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale, is the next-generation instrument in the field of very high energy gamma-ray astronomy. An average data stream of about 0.9 GB/s for about 1300 hours of observation per year is expected, resulting in 4 PB of raw data per year and a total of 27 PB/year including archive and data processing. The start of CTA operation is foreseen in 2018 and it will last about 30 years. The installation of the first telescopes at the two selected locations (Paranal, Chile and La Palma, Spain) will start in 2017. In order to select the best site candidates to host CTA telescopes (in the Northern and in the Southern hemispheres), massive Monte Carlo simulations have been performed since 2012. Once the two sites had been selected, we started new Monte Carlo simulations to determine the optimal array layout with respect to the obtained sensitivity. Taking into account that CTA may ultimately be composed of 7 different telescope types coming in 3 different sizes, many different combinations of telescope position and multiplicity as a function of the telescope type have been proposed. This last Monte Carlo campaign represented a huge computational effort, since several hundred telescope positions have been simulated, while for future instrument response function simulations only the operating telescopes will be considered. In particular, during the last 18 months, about 2 PB of Monte Carlo data have been produced and processed with different analysis chains, with a corresponding overall CPU consumption of about 125 M HS06 hours. In these proceedings, we describe the employed computing model, based on the use of grid resources, as well as the production system setup, which relies on the DIRAC interware. Finally, we present the envisaged evolution of the CTA production system for off-line data processing during CTA operations and for the instrument response function simulations.

  13. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models that cannot be dealt with by using a single GP, although how to manage bifurcation boundaries that are not parallel to coordinate axes remains an open problem.

  14. B-value and slip rate sensitivity analysis for PGA value in Lembang fault and Cimandiri fault area

    NASA Astrophysics Data System (ADS)

    Pratama, Cecep; Ito, Takeo; Meilano, Irwan; Nugraha, Andri Dian

    2017-07-01

    We examine the contributions of slip rate and b-value to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi and Bandung using PSHA (Probabilistic Seismic Hazard Analysis). We observe that crustal faults have the greatest influence on the hazard estimate. A Monte Carlo approach has been developed to assess the sensitivity. The uncertainty and coefficient of variation from the slip rate and b-value in the Lembang and Cimandiri Fault areas have been calculated. We observe that the seismic hazard estimates are sensitive to fault slip rate and b-value, with resulting uncertainties of 0.25 g and 0.1-0.2 g, respectively. For specific sites, we found seismic hazard estimates of 0.49 ± 0.13 g with COV 27% and 0.39 ± 0.05 g with COV 13% for Sukabumi and Bandung, respectively.

  15. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of problems in the development and application of water pipe network models, the automatic identification of model parameters is regarded as a key bottleneck for applying such models in water supply enterprises. A methodology for automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The core algorithms are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameter values; a detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters is developed. Finally, a typical water pipe network is selected as a case study, automatic parameter identification is carried out, and satisfactory results are achieved.
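    The RSA screening idea referenced above can be sketched as follows: sample parameters by Monte Carlo, label runs behavioral or non-behavioral by an error threshold, and rank parameters by the Kolmogorov-Smirnov distance between the two parameter sub-samples. The pipe-network "model" here is a toy error function, and the parameter names and ranges are hypothetical.

```python
# Sketch of Regionalized Sensitivity Analysis (RSA) screening with Monte Carlo
# sampling; a toy error function stands in for the hydraulic model + SCADA data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(8)
n = 5000

# Hypothetical parameters: roughness factor, demand multiplier, leakage coefficient
params = rng.uniform([0.8, 0.7, 0.0], [1.2, 1.3, 0.2], size=(n, 3))
names = ["roughness", "demand_mult", "leakage"]

def toy_pressure_error(p):
    # stand-in for |simulated - observed| nodal pressure error
    return (abs(2.0 * (p[0] - 1.0) + 0.3 * (p[1] - 1.0) + 0.05 * p[2])
            + 0.05 * abs(rng.standard_normal()))

err = np.apply_along_axis(toy_pressure_error, 1, params)
behavioral = err < np.quantile(err, 0.2)      # best 20% of runs

for i, name in enumerate(names):
    stat, _ = ks_2samp(params[behavioral, i], params[~behavioral, i])
    print(f"{name:12s} KS distance = {stat:.2f}")   # larger => more sensitive
```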

  16. A fast Monte Carlo EM algorithm for estimation in latent class model analysis with an application to assess diagnostic accuracy for cervical neoplasia in women with AGC

    PubMed Central

    Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan

    2013-01-01

    In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493

  17. On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Mather, Barry

    This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using Monte Carlo simulation-based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach in increasing PV hosting capacity is demonstrated.

  18. Expendable vs reusable propulsion systems cost sensitivity

    NASA Technical Reports Server (NTRS)

    Hamaker, Joseph W.; Dodd, Glenn R.

    1989-01-01

    One of the key trade studies that must be considered when studying any new space transportation hardware is whether to go reusable or expendable. An analysis is presented here for such a trade for a proposed Liquid Rocket Booster (LRB) being studied at MSFC. The assumptions and inputs to the trade were developed and integrated into a model that compares the life-cycle costs of a reusable LRB and an expendable LRB. Sensitivities were run by varying the input variables to see their effect on total cost. In addition, a Monte Carlo simulation was run to determine the amount of cost risk that may be involved in a decision to reuse or expend.

  19. Sensitivity analysis of periodic errors in heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
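    The variance-based (Sobol') indices mentioned above are commonly estimated with Saltelli-type Monte Carlo designs. The sketch below applies those estimators to the standard Ishigami test function rather than to the interferometer periodic-error model, so the inputs and outputs are purely illustrative.

```python
# Sobol' first-order and total-effect indices via Saltelli-style Monte Carlo,
# demonstrated on the Ishigami test function (not the periodic-error model).
import numpy as np

rng = np.random.default_rng(9)
N, d = 20_000, 3

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

A = rng.uniform(-np.pi, np.pi, size=(N, d))   # base sample
B = rng.uniform(-np.pi, np.pi, size=(N, d))   # complementary sample
fA, fB = ishigami(A), ishigami(B)
V = np.var(np.concatenate([fA, fB]))          # total output variance

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # swap column i only
    fABi = ishigami(ABi)
    S1 = np.mean(fB * (fABi - fA)) / V        # first-order index (Saltelli 2010)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / V  # total-effect index (Jansen)
    print(f"x{i+1}: S1 = {S1:.2f}, ST = {ST:.2f}")
```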

  20. A Monte Carlo simulation of advanced HIV disease: application to prevention of CMV infection.

    PubMed

    Paltiel, A D; Scharfstein, J A; Seage, G R; Losina, E; Goldie, S J; Weinstein, M C; Craven, D E; Freedberg, K A

    1998-01-01

    Disagreement exists among decision makers regarding the allocation of limited HIV patient care resources and, specifically, the comparative value of preventing opportunistic infections in late-stage disease. A Monte Carlo simulation framework was used to evaluate a state-transition model of the natural history of HIV illness in patients with CD4 counts below 300/mm3 and to project the costs and consequences of alternative strategies for preventing AIDS-related complications. The authors describe the model and demonstrate how it may be employed to assess the cost-effectiveness of oral ganciclovir for prevention of cytomegalovirus (CMV) infection. Ganciclovir prophylaxis confers an estimated additional 0.7 quality-adjusted month of life at a net cost of $10,700, implying an incremental cost-effectiveness ratio of roughly $173,000 per quality-adjusted life year gained. Sensitivity analysis reveals that this baseline result is stable over a wide range of input data estimates, including quality of life and drug efficacy, but it is sensitive to CMV incidence and drug price assumptions. The Monte Carlo simulation framework offers decision makers a powerful and flexible tool for evaluating choices in the realm of chronic disease patient care. The authors have used it to assess HIV-related treatment options and continue to refine it to reflect advances in defining the pathogenesis and treatment of AIDS. Compared with alternative interventions, CMV prophylaxis does not appear to be a cost-effective use of scarce HIV clinical care funds. However, targeted prevention in patients identified to be at higher risk for CMV-related disease may warrant consideration.
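    The state-transition Monte Carlo framework described above can be sketched as a small patient-level microsimulation. The states, monthly probabilities, costs, and utilities below are invented for illustration and are not the published model inputs; common random numbers are used across the two arms so the incremental results are stable.

```python
# State-transition Monte Carlo microsimulation sketch (well -> CMV -> dead),
# comparing prophylaxis vs. no prophylaxis. All inputs are invented.
import numpy as np

def simulate(prophylaxis, n_patients=5000, months=60, seed=10):
    rng = np.random.default_rng(seed)   # common random numbers across both arms
    p_cmv = 0.006 * (0.5 if prophylaxis else 1.0)   # monthly CMV incidence
    p_die = 0.02                                    # monthly mortality
    drug_cost = 300.0 if prophylaxis else 0.0       # per month alive
    cmv_cost, util_well, util_cmv = 8000.0, 0.75, 0.60
    cost = qaly = 0.0
    for _ in range(n_patients):
        state = "well"
        for _ in range(months):
            u_death, u_cmv = rng.uniform(size=2)    # two draws every month
            if u_death < p_die:
                break
            if state == "well" and u_cmv < p_cmv:
                state = "cmv"
                cost += cmv_cost                    # one-time cost at CMV onset
            cost += drug_cost
            qaly += (util_well if state == "well" else util_cmv) / 12.0
    return cost / n_patients, qaly / n_patients

c0, q0 = simulate(False)
c1, q1 = simulate(True)
print(f"incremental cost = ${c1 - c0:,.0f}, incremental QALY = {q1 - q0:.3f}")
print(f"ICER ~ ${(c1 - c0) / (q1 - q0):,.0f} per QALY gained")
```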

  1. Extracting falsifiable predictions from sloppy models.

    PubMed

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  2. Error analysis of Dobson spectrophotometer measurements of the total ozone content

    NASA Technical Reports Server (NTRS)

    Holland, A. C.; Thomas, R. W. L.

    1975-01-01

    A study of techniques for measuring atmospheric ozone is reported. This study represents the second phase of a program designed to improve techniques for the measurement of atmospheric ozone. This phase of the program studied the sensitivity of Dobson direct sun measurements, and of the ozone amounts inferred from those measurements, to variation in the atmospheric temperature profile. The study used the plane-parallel Monte Carlo model developed and tested under the initial phase of this program, and a series of standard model atmospheres.

  3. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis☆

    PubMed Central

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-01-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987

  4. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.

  5. Measuring the Return on Investment and Real Option Value of Weather Sensor Bundles for Air Force Unmanned Aerial Vehicles

    DTIC Science & Technology

    2016-04-30

    Acquisition Research Program, Graduate School of Business & Public Policy, Naval Postgraduate School, SYM-AM-16-023, Proceedings of the... (November). The federal role in meteorological services and supporting research: A half-century of multi-agency collaboration (FCM-17-2013). Retrieved... Process ROI on Weather-Now Forecasting; Sensitivity Analysis; IRM Monte Carlo Risk Simulations: Mission Execution; IRM Monte Carlo Risk

  6. A Monte Carlo Simulation Study of the Reliability of Intraindividual Variability

    PubMed Central

    Estabrook, Ryne; Grimm, Kevin J.; Bowles, Ryan P.

    2012-01-01

    Recent research has seen intraindividual variability (IIV) become a useful technique for incorporating trial-to-trial variability into many types of psychological studies. IIV as measured by individual standard deviations (ISDs) has shown unique prediction of several types of positive and negative outcomes (Ram, Rabbitt, Stollery, & Nesselroade, 2005). One unanswered question regarding measuring intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD compared to the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations under unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool. PMID:22268793
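
    A minimal version of such a simulation can be written directly in Python; the population values, number of occasions, and split-half design below are illustrative assumptions, not the study's actual conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

def isd_reliability(n_persons=500, n_occasions=20, test_rel=0.8):
    """Split-half reliability of the individual SD (ISD) vs. the individual mean."""
    true_mean = rng.normal(0.0, 1.0, n_persons)
    true_sd = rng.uniform(0.5, 1.5, n_persons)        # person-specific variability

    # Occasion scores = true within-person process + measurement error; the error
    # SD is chosen so occasion-level test reliability is roughly test_rel.
    err_sd = np.sqrt((1.0 - test_rel) / test_rel) * true_sd.mean()
    scores = (true_mean[:, None]
              + rng.normal(0.0, true_sd[:, None], (n_persons, n_occasions))
              + rng.normal(0.0, err_sd, (n_persons, n_occasions)))

    odd, even = scores[:, ::2], scores[:, 1::2]
    r_isd = np.corrcoef(odd.std(axis=1, ddof=1), even.std(axis=1, ddof=1))[0, 1]
    r_mean = np.corrcoef(odd.mean(axis=1), even.mean(axis=1))[0, 1]
    sb = lambda r: 2 * r / (1 + r)   # Spearman-Brown step-up to full length
    return sb(r_isd), sb(r_mean)

rel_isd, rel_mean = isd_reliability()
print(f"split-half reliability: ISD = {rel_isd:.2f}, mean = {rel_mean:.2f}")
```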

  7. Monte Carlo analysis of TRX lattices with ENDF/B version 3 data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, J. Jr.

    1975-03-01

    Four TRX water-moderated lattices of slightly enriched uranium rods have been reanalyzed with consistent ENDF/B Version 3 data by means of the full-range Monte Carlo program RECAP. The following measured lattice parameters were studied: ratio of epithermal-to-thermal ²³⁸U capture, ratio of epithermal-to-thermal ²³⁵U fissions, ratio of ²³⁸U captures to ²³⁵U fissions, ratio of ²³⁸U fissions to ²³⁵U fissions, and multiplication factor. In addition to the base calculations, some studies were done to find the sensitivity of the TRX lattice parameters to selected variations of cross section data. Finally, additional experimental evidence is afforded by effective ²³⁸U capture integrals for isolated rods. Shielded capture integrals were calculated for ²³⁸U metal and oxide rods. These are compared with other measurements. (auth)

  8. A Monte Carlo model for photoneutron generation by a medical LINAC

    NASA Astrophysics Data System (ADS)

    Sumini, M.; Isolan, L.; Cucchi, G.; Sghedoni, R.; Iori, M.

    2017-11-01

    For optimal tuning of radiation protection planning, a Monte Carlo model using the MCNPX code has been built, allowing an accurate estimate of the spectrometric and geometrical characteristics of photoneutrons generated by a Varian TrueBeam Stx© medical linear accelerator. We considered a device working at 15 MV, the reference energy for clinical applications, derived from a Varian Clinac©2100 modeled using data available in several papers in the literature. The model results were compared with neutron and photon dose measurements inside and outside the bunker hosting the accelerator, yielding a complete dose map. Normalized neutron fluences were tallied at different positions in the patient plane and at different depths. A sensitivity analysis with respect to the flattening filter material was performed to highlight aspects that could influence photoneutron production.

  9. Measurements of underlying-event properties using neutral and charged particles in pp collisions at $$\\sqrt{s}=900$$ GeV and $$\\sqrt{s}=7$$ TeV with the ATLAS detector at the LHC

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2011-05-10

    We present first measurements of charged and neutral particle-flow correlations in pp collisions using the ATLAS calorimeters. Data were collected in 2009 and 2010 at centre-of-mass energies of 900 GeV and 7 TeV. Events were selected using a minimum-bias trigger which required a charged particle in scintillation counters on either side of the interaction point. Particle flows, sensitive to the underlying event, are measured using clusters of energy in the ATLAS calorimeters, taking advantage of their fine granularity. No Monte Carlo generator used in this analysis can accurately describe the measurements. The results are independent of those based on charged particles measured by the ATLAS tracking systems and can be used to constrain the parameters of Monte Carlo generators.

  10. Gray: a ray tracing-based Monte Carlo simulator for PET.

    PubMed

    Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S

    2018-05-21

    Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.

  11. Influence of the quality of intraoperative fluoroscopic images on the spatial positioning accuracy of a CAOS system.

    PubMed

    Wang, Junqiang; Wang, Yu; Zhu, Gang; Chen, Xiangqian; Zhao, Xiangrui; Qiao, Huiting; Fan, Yubo

    2018-06-01

    Spatial positioning accuracy is a key issue in a computer-assisted orthopaedic surgery (CAOS) system. Since intraoperative fluoroscopic images are one of the most important inputs to the CAOS system, the quality of these images should have a significant influence on the accuracy of the CAOS system. However, how and to what extent the quality of intraoperative images influences the accuracy of a CAOS system has yet to be studied. Two typical spatial positioning methods - a C-arm calibration-based method and a bi-planar positioning method - are used to study the influence of different image quality parameters, such as resolution, distortion, contrast and signal-to-noise ratio, on positioning accuracy. The propagation of image error through the two spatial positioning methods is analyzed by the Monte Carlo method. Correlation analysis showed that resolution and distortion had a significant influence on spatial positioning accuracy. In addition, the C-arm calibration-based method was more sensitive to image distortion, while the bi-planar positioning method was more susceptible to image resolution. Image contrast and signal-to-noise ratio had no significant influence on spatial positioning accuracy. The Monte Carlo analysis showed that, in general, the bi-planar positioning method was more sensitive to image quality than the C-arm calibration-based method. The quality of intraoperative fluoroscopic images is therefore a key factor in the spatial positioning accuracy of a CAOS system. Although the two positioning methods have very similar mathematical principles, they showed different sensitivities to different image quality parameters. The results of this research may help to establish a realistic standard for intraoperative fluoroscopic images for CAOS systems. Copyright © 2018 John Wiley & Sons, Ltd.

  12. Using the Reliability Theory for Assessing the Decision Confidence Probability for Comparative Life Cycle Assessments.

    PubMed

    Wei, Wei; Larrey-Lassalle, Pyrène; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis

    2016-03-01

    A comparative decision-making process is widely used to identify which option (system, product, service, etc.) has the smaller environmental footprint and to provide recommendations that help stakeholders take future decisions. However, uncertainty complicates the comparison and the decision making. Probability-based decision support in LCA is a way to help stakeholders in their decision-making process. It calculates the decision confidence probability, which expresses the probability that one option has a smaller environmental impact than another. Here we apply reliability theory to approximate the decision confidence probability. We compare the traditional Monte Carlo method with a reliability method known as the First-Order Reliability Method (FORM). The Monte Carlo method requires substantial computational time to calculate the decision confidence probability. The FORM method enables us to approximate the decision confidence probability with fewer simulations than the Monte Carlo method by approximating the response surface. Moreover, the FORM method calculates the associated importance factors, which correspond to a sensitivity analysis with respect to the probability. The importance factors allow stakeholders to determine which factors influence their decision. Our results clearly show that the reliability method provides additional useful information to stakeholders while reducing the computational time.
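
    The quantity being approximated is easy to state as a plain Monte Carlo estimate; the lognormal impact distributions below are hypothetical, and FORM would approximate the same probability from the limit state g = impact_b - impact_a with far fewer model evaluations.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical uncertain life-cycle impacts of two options (e.g. kg CO2-eq),
# each represented here by a lognormal distribution from the LCA inventory.
impact_a = rng.lognormal(mean=np.log(100.0), sigma=0.20, size=100_000)
impact_b = rng.lognormal(mean=np.log(110.0), sigma=0.25, size=100_000)

# Decision confidence probability: P(option A has the smaller impact).
p_conf = np.mean(impact_a < impact_b)
print(f"P(A < B) = {p_conf:.3f}")
```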

  13. A statistical approach to nuclear fuel design and performance

    NASA Astrophysics Data System (ADS)

    Cunning, Travis Andrew

    As CANDU fuel failures can have significant economic and operational consequences on the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case, a hypothesized 80% reactor outlet header break loss-of-coolant accident. Using a Monte Carlo technique for input generation, 10⁵ independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimensional-reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance, with an average sensitivity index of 48.93% on key output quantities. Pellet grain size and dish depth are also significant contributors, at 31.53% and 13.46%, respectively. A traditional limit of operating envelope case is also evaluated. This case produces output values that exceed the maximum values observed during the 10⁵ Monte Carlo trials for all output quantities of interest. In many cases the difference between the predictions of the two methods is very prominent, and the highly conservative nature of the deterministic approach is demonstrated. A reliability analysis of CANDU fuel manufacturing parametric data, specifically pertaining to the quantification of fuel performance margins, has not been conducted previously. Key Words: CANDU, nuclear fuel, Cameco, fuel manufacturing, fuel modelling, fuel performance, fuel reliability, ELESTRES, ELOCA, dimensional reduction methods, global sensitivity analysis, deterministic safety analysis, probabilistic safety analysis.

  14. A probabilistic sizing tool and Monte Carlo analysis for entry vehicle ablative thermal protection systems

    NASA Astrophysics Data System (ADS)

    Mazzaracchio, Antonio; Marchetti, Mario

    2010-03-01

    Implicit ablation and thermal response software was developed to analyse and size charring ablative thermal protection systems for entry vehicles. A statistical monitor integrated into the tool, based on the Monte Carlo technique, allows simulations to be run over stochastic series. This provides an uncertainty and sensitivity analysis, which estimates the probability of maintaining the temperature of the underlying material within specified requirements. This approach and the associated software are primarily helpful during the preliminary design phases of spacecraft thermal protection systems. They are proposed as an alternative to traditional approaches, such as the Root-Sum-Square method. The developed tool was verified by comparing the results with those from previous work on thermal protection system probabilistic sizing methodologies, which are based on an industry-standard high-fidelity ablation and thermal response program. New case studies were analysed to establish thickness margins for sizing heat shields currently proposed for vehicles using rigid aeroshells in future aerocapture missions at Neptune, and to identify the major sources of uncertainty in the material response.

  15. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  16. Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy

    NASA Astrophysics Data System (ADS)

    Sharma, Sanjib

    2017-08-01

    Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.
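
    As a concrete illustration of the simplest of these methods, the following is a minimal random-walk Metropolis-Hastings sampler for a two-parameter straight-line fit; the data, flat priors, and step size are assumptions for the example and are not tied to any astronomical data set or to the distributed software.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: straight line y = a*x + b with Gaussian noise.
a_true, b_true, sigma = 2.0, 1.0, 0.5
x = np.linspace(0, 1, 50)
y = a_true * x + b_true + rng.normal(0, sigma, x.size)

def log_posterior(theta):
    """Gaussian likelihood with flat priors on (a, b)."""
    a, b = theta
    resid = y - (a * x + b)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis-Hastings.
n_steps, step = 20_000, 0.1
chain = np.empty((n_steps, 2))
theta = np.array([0.0, 0.0])
logp = log_posterior(theta)
accepted = 0
for i in range(n_steps):
    prop = theta + rng.normal(0, step, 2)
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:   # accept with prob min(1, ratio)
        theta, logp = prop, logp_prop
        accepted += 1
    chain[i] = theta

burn = n_steps // 4
print("acceptance rate:", accepted / n_steps)
print("posterior mean (a, b):", chain[burn:].mean(axis=0).round(3))
```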

  17. Monte Carlo simulation of energy-dispersive x-ray fluorescence and applications

    NASA Astrophysics Data System (ADS)

    Li, Fusheng

    Four key components of the Monte Carlo Library Least-Squares (MCLLS) approach have been developed by the author. These include: a comprehensive and accurate Monte Carlo simulation code, CEARXRF5, with differential operators (DO) and coincidence sampling; a detector response function (DRF); an integrated Monte Carlo - Library Least-Squares (MCLLS) graphical user interface (GUI) visualization system (MCLLSPro); and a new reproducible and flexible benchmark experiment setup. All these developments and upgrades enable the MCLLS approach to be a useful and powerful tool for a tremendous variety of elemental analysis applications. CEARXRF, a comprehensive and accurate Monte Carlo code for simulating the total and individual library spectral responses of all elements, has recently been upgraded to version 5 by the author. The new version has several key improvements: an input file format fully compatible with MCNP5, a new efficient general geometry tracking code, versatile source definitions, various variance reduction techniques (e.g. weight window mesh and splitting, stratified sampling, etc.), a new cross section data storage and access method which improves the simulation speed by a factor of four, new cross section data, upgraded differential operator (DO) calculation capability, and an updated coincidence sampling scheme which includes K-L and L-L coincidence X-rays, while keeping all the capabilities of the previous version. The new differential operators method is powerful for measurement sensitivity studies and system optimization. For our Monte Carlo EDXRF elemental analysis system, it becomes an important technique for quantifying the matrix effect in near real time when combined with the MCLLS approach. An integrated visualization GUI system has been developed by the author to perform elemental analysis using the iterated Library Least-Squares method for various samples when an initial guess is provided. This software was built on the Borland C++ Builder platform and has a user-friendly interface to accomplish all qualitative and quantitative tasks easily; it enables users to run the forward Monte Carlo simulation (if necessary) or use previously calculated Monte Carlo library spectra to obtain the sample elemental composition estimate within a minute. The GUI software can accomplish all related tasks in a visualization environment, making it a powerful tool for EDXRF analysts. A reproducible experiment setup has been built and experiments have been performed to benchmark the system. Two types of Standard Reference Materials (SRM), stainless steel samples from the National Institute of Standards and Technology (NIST) and aluminum alloy samples from Alcoa Inc., with certified elemental compositions, were tested with this reproducible prototype system using a ¹⁰⁹Cd radioisotope source (20 mCi) and a liquid-nitrogen-cooled Si(Li) detector. The results show excellent agreement between the calculated sample compositions and their reference values, and the approach is very fast.

  18. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC

    PubMed Central

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.

    2017-01-01

    This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter to which the ON-N and NH3-N simulations were sensitive. However, global sensitivity analysis showed that the ON-N and NO3-N simulations were sensitive to the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C. PMID:28704958

  19. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. To overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the conventional flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.

  20. Statistical analysis of flight times for space shuttle ferry flights

    NASA Technical Reports Server (NTRS)

    Graves, M. E.; Perlmutter, M.

    1974-01-01

    Markov chain and Monte Carlo analysis techniques are applied to the simulated Space Shuttle Orbiter Ferry flights to obtain statistical distributions of flight time duration between Edwards Air Force Base and Kennedy Space Center. The two methods are compared, and are found to be in excellent agreement. The flights are subjected to certain operational and meteorological requirements, or constraints, which cause eastbound and westbound trips to yield different results. Persistence of events theory is applied to the occurrence of inclement conditions to find their effect upon the statistical flight time distribution. In a sensitivity test, some of the constraints are varied to observe the corresponding changes in the results.
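
    The combination of a persistence (Markov chain) weather model with Monte Carlo sampling of trip duration can be sketched as follows; the two-state transition matrix, the number of ferry legs, and the go/no-go rule are hypothetical placeholders rather than the study's actual constraints.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two-state daily weather Markov chain (0 = acceptable, 1 = inclement); the
# transition matrix encodes persistence of inclement conditions.
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

def ferry_trip_days(n_legs=4, max_days=60):
    """Days to complete a ferry trip of n_legs legs; a leg is flown only on an
    acceptable-weather day."""
    state, days, legs = 0, 0, 0
    while legs < n_legs and days < max_days:
        days += 1
        if state == 0:
            legs += 1                       # fly one leg on a good-weather day
        state = rng.choice(2, p=P[state])   # weather evolves by the Markov chain
    return days

trips = np.array([ferry_trip_days() for _ in range(20_000)])
print(f"mean trip time: {trips.mean():.1f} days, "
      f"95th percentile: {np.percentile(trips, 95):.0f} days")
```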

  1. Uncertainty Analysis of the Grazing Flow Impedance Tube

    NASA Technical Reports Server (NTRS)

    Brown, Martha C.; Jones, Michael G.; Watson, Willie R.

    2012-01-01

    This paper outlines a methodology to identify the measurement uncertainty of NASA Langley's Grazing Flow Impedance Tube (GFIT) over its operating range, and to identify the parameters that most significantly contribute to the acoustic impedance prediction. Two acoustic liners are used for this study. The first is a single-layer, perforate-over-honeycomb liner that is nonlinear with respect to sound pressure level. The second consists of a wire-mesh facesheet and a honeycomb core, and is linear with respect to sound pressure level. These liners allow for evaluation of the effects of measurement uncertainty on impedances educed with linear and nonlinear liners. In general, the measurement uncertainty is observed to be larger for the nonlinear liner, with the largest uncertainty occurring near anti-resonance. A sensitivity analysis of the aerodynamic parameters (Mach number, static temperature, and static pressure) used in the impedance eduction process is also conducted using a Monte Carlo approach. This sensitivity analysis demonstrates that the impedance eduction process is virtually insensitive to each of these parameters.

  2. Fragmentation uncertainties in hadronic observables for top-quark mass measurements

    NASA Astrophysics Data System (ADS)

    Corcella, Gennaro; Franceschini, Roberto; Kim, Doojin

    2018-04-01

    We study the Monte Carlo uncertainties due to modeling of hadronization and showering in the extraction of the top-quark mass from observables that use exclusive hadronic final states in top decays, such as t → anything + J/ψ or t → anything + (B → charged tracks), where B is a B-hadron. To this end, we investigate the sensitivity of the top-quark mass, determined by means of a few observables already proposed in the literature as well as some new proposals, to the relevant parameters of event generators, such as HERWIG 6 and PYTHIA 8. We find that constraining those parameters at O(1%-10%) is required to avoid a Monte Carlo uncertainty on m_t greater than 500 MeV. For the sake of achieving the needed accuracy on such parameters, we examine the sensitivity of the top-quark mass measured from spectral features, such as peaks, endpoints and distributions of E_B, m_Bℓ, and some m_T2-like variables. We find that restricting oneself to regions sufficiently close to the endpoints enables one to substantially decrease the dependence on the Monte Carlo parameters, but at the price of inflating significantly the statistical uncertainties. To ameliorate this situation we study how well the data on top-quark production and decay at the LHC can be utilized to constrain the showering and hadronization variables. We find that a global exploration of several calibration observables, sensitive to the Monte Carlo parameters but very mildly to m_t, can offer useful constraints on the parameters, as long as such quantities are measured with a 1% precision.

  3. Probabilistic biosphere modeling for the long-term safety assessment of geological disposal facilities for radioactive waste using first- and second-order Monte Carlo simulation.

    PubMed

    Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald

    2018-10-01

    In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, we study the influence of the shape of the probability distribution function (empirical distribution functions versus fitted lognormal probability functions) used to represent the aleatory uncertainty (also called variability) of a radioecological model parameter, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is not always an adequate representation of the aleatory uncertainty of a radioecological parameter. To compare the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the epistemic uncertainty is described using uncertain moments (mean, variance) while the distribution itself represents the aleatory uncertainty of the parameter. The results show that the solution space of the second-order Monte Carlo simulation is much larger than that of the first-order Monte Carlo simulation. Therefore, the influence of the epistemic uncertainty of a radioecological parameter on the output is much larger than that caused by its aleatory uncertainty. Parameter interactions have a significant influence only in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
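
    The nesting of epistemic and aleatory sampling described above corresponds to a two-loop (second-order) Monte Carlo scheme; the toy dose-conversion function and the distribution parameters below are assumptions for illustration, not the paper's biosphere model.

```python
import numpy as np

rng = np.random.default_rng(5)

def dose_conversion(k):
    """Toy stand-in for a biosphere model: maps a radioecological transfer
    parameter k to a biosphere dose conversion factor."""
    return 1.0e-9 * (1.0 + 5.0 * k)

n_outer, n_inner = 200, 2_000
p95 = np.empty(n_outer)
for j in range(n_outer):
    # Outer loop: epistemic uncertainty about the distribution's moments.
    mu = rng.normal(np.log(0.1), 0.3)      # uncertain log-mean
    sigma = rng.uniform(0.3, 0.8)          # uncertain log-sd
    # Inner loop: aleatory variability of the parameter itself.
    k = rng.lognormal(mu, sigma, n_inner)
    p95[j] = np.percentile(dose_conversion(k), 95)

# The spread of the inner-loop percentile across the outer loop shows how much
# of the output uncertainty is attributable to epistemic uncertainty.
print("95th-percentile BDCF ranges from "
      f"{p95.min():.2e} to {p95.max():.2e} across epistemic realisations")
```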

  4. A Monte-Carlo Analysis of Organic Volatility with Aerosol Microphysics

    NASA Astrophysics Data System (ADS)

    Gao, Chloe; Tsigaridis, Kostas; Bauer, Susanne E.

    2017-04-01

    A newly developed box model, MATRIX-VBS, includes the volatility-basis set (VBS) framework in the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations and aerosol mixing state. The new scheme advances the representation of organic aerosols in models by improving the traditional, simplistic treatment of organic aerosols as non-volatile and with a fixed size distribution. Further development includes adding the condensation of organics on coarse-mode aerosols (dust and sea salt), thus making all organics in the system semi-volatile. To test and simplify the model, a Monte Carlo analysis is performed to pinpoint which processes affect organics the most under varied chemical and meteorological conditions. Since the model's parameterizations are able to capture a very wide range of conditions, all possible scenarios on Earth across the whole parameter space, including temperature, humidity, location, emissions and oxidant levels, are examined. The Monte Carlo simulations provide quantitative information on the sensitivity of the newly developed model and help us understand how organics affect the size distribution, mixing state and volatility distribution under varying meteorological conditions and pollution levels. In addition, these simulations show which parameters play a critical role in the aerosol distribution and evolution in the atmosphere and which do not, which will facilitate the simplification of the box model, an important step toward its implementation as a module in the global model GISS ModelE.

  5. Probing CP violation in $$h\\rightarrow\\gamma\\gamma$$ with converted photons

    DOE PAGES

    Bishara, Fady; Grossman, Yuval; Harnik, Roni; ...

    2014-04-11

    We study Higgs diphoton decays in which both photons undergo nuclear conversion to electron-positron pairs. The kinematic distribution of the two electron-positron pairs may be used to probe the CP-violating (CPV) coupling of the Higgs to photons that may be produced by new physics. Detecting CPV in this manner requires interference between the spin-polarized helicity amplitudes for both conversions. We derive leading-order, analytic forms for these amplitudes. In turn, we obtain compact, leading-order expressions for the full process rate. While performing experiments involving photon conversions may be challenging, we use the results of our analysis to construct experimental cuts on certain observables that may enhance sensitivity to CPV. We show that there exist regions of phase space on which sensitivity to CPV is of order unity. The statistical sensitivity of these cuts is verified numerically, using dedicated Monte Carlo simulations.

  6. Structure sensitivity in oxide catalysis: First-principles kinetic Monte Carlo simulations for CO oxidation at RuO2(111)

    DOE PAGES

    Wang, Tongyu; Reuter, Karsten

    2015-11-24

    We present a density-functional theory based kinetic Monte Carlo study of CO oxidation at the (111) facet of RuO2. We compare the detailed insight into elementary processes, steady-state surface coverages, and catalytic activity to equivalent published simulation data for the frequently studied RuO2(110) facet. Qualitative differences are identified in virtually every aspect, ranging from binding energetics over lateral interactions to the interplay of elementary processes at the different active sites. Nevertheless, particularly at technologically relevant elevated temperatures, near-ambient pressures, and near-stoichiometric feeds, both facets exhibit almost identical catalytic activity. These findings challenge the traditional definition of structure sensitivity based on macroscopically observable turnover frequencies and prompt scrutiny of the applicability of structure sensitivity classifications developed for metals to oxide catalysis.
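
    A kinetic Monte Carlo simulation of this kind can be illustrated, in highly simplified form, with a well-mixed Gillespie-style model of CO oxidation; the rate constants and the mean-field treatment of site pairing below are illustrative assumptions and do not reproduce the first-principles lattice model of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy well-mixed kinetic Monte Carlo (Gillespie) model of CO oxidation on a
# surface with n_sites adsorption sites; rate constants are illustrative only.
n_sites = 400
k_ads_co, k_des_co, k_ads_o2, k_react = 1.0, 0.2, 0.6, 5.0

co, o = 0, 0                      # adsorbed CO and O counts
t, t_end, co2 = 0.0, 50.0, 0
while t < t_end:
    empty = n_sites - co - o
    rates = np.array([
        k_ads_co * empty,              # CO adsorption on an empty site
        k_des_co * co,                 # CO desorption
        k_ads_o2 * max(empty - 1, 0),  # dissociative O2 adsorption (needs 2 sites)
        k_react * co * o / n_sites,    # CO* + O* -> CO2 (mean-field pairing)
    ])
    total = rates.sum()
    if total == 0.0:
        break
    t += rng.exponential(1.0 / total)          # time to the next event
    event = rng.choice(4, p=rates / total)     # which event fires
    if event == 0:
        co += 1
    elif event == 1:
        co -= 1
    elif event == 2:
        o += 2
    else:
        co, o, co2 = co - 1, o - 1, co2 + 1

print(f"simulated {t:.1f} time units: coverage CO*={co/n_sites:.2f}, "
      f"O*={o/n_sites:.2f}, CO2 produced={co2}")
```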

  7. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  8. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  9. SCALE Code System 6.2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  10. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with an average diameter and average length of 15 inches. Structural reliability is based on the axial buckling strength of the cylinder. Both Monte Carlo simulation and the First-Order Reliability Method are considered for reliability analysis, with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and the design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and the elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also, the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on the element reliability index. The methodology, solution procedure and optimization results are included in this report.
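
    The Monte Carlo side of such a reliability analysis reduces to sampling a limit state g = R - S and counting failures; the lognormal strength and load distributions below are hypothetical, not the report's response-surface model.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(13)

# Hypothetical random variables for an axially compressed composite cylinder:
# buckling strength R (resistance) and applied load S, both lognormal.
n = 1_000_000
R = rng.lognormal(mean=np.log(500.0), sigma=0.08, size=n)   # kN
S = rng.lognormal(mean=np.log(350.0), sigma=0.15, size=n)   # kN

# Limit state g = R - S; failure when g < 0.
pf = np.mean(R - S < 0.0)
beta = -NormalDist().inv_cdf(pf) if pf > 0 else float("inf")
print(f"P(failure) = {pf:.2e}, generalized reliability index beta = {beta:.2f}")
```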

  11. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a prolific method for studying the relative importance of predictor variables, and (3) the relationships among the variables involved in the development of burnout and its consequences are non-linear to different degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.

  12. Probabilistic Aeroelastic Analysis of Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.

    2004-01-01

    A probabilistic approach is described for the aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDF) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte Carlo simulation. The results show that the probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.

  13. Probabilistic calibration of the distributed hydrological model RIBS applied to real-time flood forecasting: the Harod river basin case study (Israel)

    NASA Astrophysics Data System (ADS)

    Nesti, Alice; Mediero, Luis; Garrote, Luis; Caporali, Enrica

    2010-05-01

    An automatic probabilistic calibration method for distributed rainfall-runoff models is presented. The high number of parameters in distributed hydrologic models places special demands on the optimization procedure used to estimate model parameters. With the proposed technique it is possible to reduce the complexity of calibration while maintaining adequate model predictions. The first step of the calibration procedure for the main model parameters is done manually, with the aim of identifying their variation ranges. Afterwards a Monte Carlo technique is applied, which consists of repeated model simulations with randomly generated parameters. The Monte Carlo Analysis Toolbox (MCAT) includes a number of analysis methods to evaluate the results of these Monte Carlo parameter sampling experiments. The study investigates the use of a global sensitivity analysis as a screening tool to reduce the parametric dimensionality of multi-objective hydrological model calibration problems, while maximizing the information extracted from hydrological response data. The method is applied to the calibration of the RIBS flood forecasting model in the Harod river basin, located in Israel. The Harod basin has an area of 180 km². The catchment has a Mediterranean climate and is mainly characterized by a desert landscape, with a soil that is able to absorb large quantities of rainfall and at the same time is capable of generating high peaks of discharge. Radar rainfall data with 6-minute temporal resolution are available as input to the model. The aim of the study is the validation of the model for real-time flood forecasting, in order to evaluate the benefits of improved precipitation forecasting within the FLASH European project.

  14. Combining Monte Carlo methods with coherent wave optics for the simulation of phase-sensitive X-ray imaging

    PubMed Central

    Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco

    2014-01-01

    Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented which takes both particle- and wave-like properties of X-rays into consideration. In this split approach, a Monte Carlo (MC) based sample part is combined with a wave-optics-based propagation part. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance grating interferometry or propagation-based imaging. PMID:24763652

  15. Linked Sensitivity Analysis, Calibration, and Uncertainty Analysis Using a System Dynamics Model for Stroke Comparative Effectiveness Research.

    PubMed

    Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B

    2016-11-01

    As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. The aim of this study is to demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: the Morris method (sensitivity analysis), a multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best-guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into the uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, this mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical, and we advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
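
    The Morris screening step used above to triage the 60 parameters can be sketched in a few lines; the four-parameter toy model, the number of trajectories, and the grid spacing below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(23)

def model(x):
    """Toy stand-in for the stroke model: 4 uncertain parameters in [0, 1]."""
    return 3.0 * x[0] + 0.5 * x[1] ** 2 + 2.0 * x[0] * x[2] + 0.01 * x[3]

def morris_screening(f, d, r=50, delta=0.25):
    """Minimal Morris elementary-effects screening (mu* and sigma per factor)."""
    ee = np.zeros((r, d))
    for t in range(r):
        x = rng.integers(0, 4, d) / 4.0          # start point on a coarse grid
        y = f(x)
        for i in rng.permutation(d):             # move one factor at a time
            step = delta if x[i] + delta <= 1.0 else -delta
            x_new = x.copy()
            x_new[i] += step
            y_new = f(x_new)
            ee[t, i] = (y_new - y) / step
            x, y = x_new, y_new
    return np.abs(ee).mean(axis=0), ee.std(axis=0)   # mu*, sigma

mu_star, sigma = morris_screening(model, d=4)
print("mu* :", mu_star.round(2))   # overall influence -> keep for calibration
print("sigma:", sigma.round(2))    # interaction / non-linearity
```

    Factors with large mu* are candidates for calibration, while a large sigma relative to mu* flags interactions or non-linearity, which is the usual way Morris results are read.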

  16. Cost-minimization analysis favours intravenous ferric carboxymaltose over ferric sucrose or oral iron as preoperative treatment in patients with colon cancer and iron deficiency anaemia.

    PubMed

    Calvet, Xavier; Gené, Emili; ÀngelRuíz, Miquel; Figuerola, Ariadna; Villoria, Albert; Cucala, Mercedes; Mearin, Fermín; Delgado, Salvadora; Calleja, Jose Luis

    2016-01-01

    Ferric Carboxymaltose (FCM), Iron Sucrose (IS) and Oral Iron (OI) are alternative treatments for preoperative anaemia. The aim was to compare, using a cost-minimization analysis, the cost implications of three alternatives (FCM vs. IS vs. OI) for treating iron-deficiency anaemia before surgery in patients with colon cancer. Data from 282 patients with colorectal cancer and anaemia were obtained from a previous study. One hundred and eleven received FCM, 16 IS and 155 OI. Costs of intravenous iron drugs were obtained from the Spanish Regulatory Agency. Direct and indirect costs were obtained from the analytical accounting unit of the hospital. In the base case, mean costs per patient were calculated. Sensitivity analysis and probabilistic Monte Carlo simulation were performed. Total costs per patient were 1827 € in the FCM group, 2312 € in the IS group and 2101 € in the OI group. Cost savings per patient for FCM treatment were 485 € compared to IS and 274 € compared to OI. The Monte Carlo simulation favoured the use of FCM in 84.7% and 84.4% of simulations when compared to IS and OI, respectively. FCM infusion before surgery reduced costs in patients with colon cancer and iron-deficiency anaemia when compared with OI and IS.
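
    The probabilistic comparison can be sketched as a simple Monte Carlo cost model; the gamma cost distributions below are assumptions centred on the base-case means from the abstract, so the resulting probabilities are illustrative and will not reproduce the published 84.7%/84.4% figures.

```python
import numpy as np

rng = np.random.default_rng(21)

# Hypothetical per-patient cost distributions (EUR); means follow the base-case
# figures in the abstract, spreads are assumed for illustration only.
n = 100_000
cost_fcm = rng.gamma(shape=25, scale=1827 / 25, size=n)
cost_is  = rng.gamma(shape=25, scale=2312 / 25, size=n)
cost_oi  = rng.gamma(shape=25, scale=2101 / 25, size=n)

print(f"P(FCM cheaper than IS) = {np.mean(cost_fcm < cost_is):.3f}")
print(f"P(FCM cheaper than OI) = {np.mean(cost_fcm < cost_oi):.3f}")
print(f"mean saving vs IS = {np.mean(cost_is - cost_fcm):.0f} EUR, "
      f"vs OI = {np.mean(cost_oi - cost_fcm):.0f} EUR")
```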

  17. Uncertainty in the evaluation of the Predicted Mean Vote index using Monte Carlo analysis.

    PubMed

    Ricciu, R; Galatioto, A; Desogus, G; Besalduch, L A

    2018-06-06

    Today, evaluation of thermohygrometric indoor conditions is one of the most useful tools for building design and re-design, and it can be used to determine energy consumption in conditioned buildings. Since the introduction of the Predicted Mean Vote (PMV) index, researchers have thoroughly investigated its issues in order to reach more accurate results; however, several shortcomings have yet to be resolved. Among them is the uncertainty of the environmental and subjective parameters linked to the standard PMV approach of ISO 7730, which classifies the thermal environment. To this end, this paper discusses the known thermal comfort models and the measurement approaches, paying particular attention to measurement uncertainties and their influence on PMV determination. Monte Carlo analysis has been applied to a data series in a "black-box" environment, and each parameter involved has been analysed in the PMV range from -0.9 to 0.9 under different Relative Humidity conditions. Furthermore, a sensitivity analysis has been performed in order to define the role of each variable. The results showed that an uncertainty propagation method could improve PMV model application, especially where it should be very accurate (-0.2 < PMV < 0.2 range; winter season with a Relative Humidity of 30%). Copyright © 2018 Elsevier Ltd. All rights reserved.
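
    The core idea of propagating measurement uncertainty through a comfort model can be sketched as follows; the comfort_index function is a hypothetical stand-in for the full ISO 7730 PMV equation, and the sensor uncertainties are assumed values.

```python
import numpy as np

rng = np.random.default_rng(17)

def comfort_index(t_air, rh, v_air):
    """Hypothetical black-box comfort index standing in for the PMV model
    (the real PMV equation of ISO 7730 is considerably more involved)."""
    return 0.2 * (t_air - 24.0) + 0.01 * (rh - 50.0) - 0.5 * (v_air - 0.1)

# Nominal sensor readings and their assumed measurement uncertainties (k = 1).
n = 50_000
t_air = rng.normal(24.0, 0.3, n)    # air temperature, +/- 0.3 C
rh    = rng.normal(50.0, 3.0, n)    # relative humidity, +/- 3 %
v_air = rng.normal(0.10, 0.02, n)   # air speed, +/- 0.02 m/s

pmv = comfort_index(t_air, rh, v_air)
print(f"index mean = {pmv.mean():.3f}, std = {pmv.std():.3f}")

# One-at-a-time check of which measured input drives the output spread.
for name, col in [("t_air", t_air), ("rh", rh), ("v_air", v_air)]:
    print(f"corr(index, {name}) = {np.corrcoef(pmv, col)[0, 1]:.2f}")
```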

  18. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  19. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.

  20. Continuous-energy eigenvalue sensitivity coefficient calculations in TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, C. M.; Rearden, B. T.

    2013-07-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several test problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and a low memory footprint, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations. (authors)

  1. Development of a SCALE Tool for Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several criticality safety problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and low memory requirements, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations.

  2. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsoulakis, Markos

    2014-08-09

    Our two key accomplishments in the first three years were toward the development of (1) a mathematically rigorous and at the same time computationally flexible framework for the parallelization of kinetic Monte Carlo methods, and its implementation on GPUs, and (2) spatial multilevel coarse-graining methods for Monte Carlo sampling and molecular simulation. A common underlying theme in both these lines of work is the development of numerical methods that are both computationally efficient and reliable, the latter in the sense that they provide controlled-error approximations for coarse observables of the simulated molecular systems. Finally, our key accomplishment in the last year of the grant is that we started developing (3) pathwise information-theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, and in particular for nonequilibrium extended (high-dimensional) systems. We discuss these three research directions in some detail below, along with the related publications.

  3. Reconstruction of Exposure to m-Xylene from Human Biomonitoring Data Using PBPK Modelling, Bayesian Inference, and Markov Chain Monte Carlo Simulation

    PubMed Central

    McNally, Kevin; Cotton, Richard; Cocker, John; Jones, Kate; Bartels, Mike; Rick, David; Price, Paul; Loizou, George

    2012-01-01

    There are numerous biomonitoring programs, both recent and ongoing, to evaluate environmental exposure of humans to chemicals. Due to the lack of exposure and kinetic data, the correlation of biomarker levels with exposure concentrations leads to difficulty in utilizing biomonitoring data for biological guidance values. Exposure reconstruction or reverse dosimetry is the retrospective interpretation of external exposure consistent with biomonitoring data. We investigated the integration of physiologically based pharmacokinetic modelling, global sensitivity analysis, Bayesian inference, and Markov chain Monte Carlo simulation to obtain a population estimate of inhalation exposure to m-xylene. We used exhaled breath and venous blood m-xylene and urinary 3-methylhippuric acid measurements from a controlled human volunteer study in order to evaluate the ability of our computational framework to predict known inhalation exposures. We also investigated the importance of model structure and dimensionality with respect to its ability to reconstruct exposure. PMID:22719759
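
    As a rough illustration of the reverse-dosimetry idea, the sketch below infers an unknown exposure concentration from noisy biomarker data with random-walk Metropolis-Hastings MCMC. The one-compartment kinetic model, prior, and data are invented stand-ins for the PBPK model and volunteer measurements used in the study.

    ```python
    # Hedged sketch: a toy kinetic model links an unknown inhaled concentration to a
    # measured biomarker, and Metropolis-Hastings MCMC recovers a posterior for the
    # exposure. All values below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)

    def biomarker(conc_ppm, t_hours, k_elim=0.5, uptake=0.2):
        # toy kinetics: blood level rises toward uptake*conc with rate k_elim
        return uptake * conc_ppm * (1.0 - np.exp(-k_elim * t_hours))

    # Synthetic "observed" biomarker data generated at a true exposure of 50 ppm.
    t_obs = np.array([1.0, 2.0, 4.0])
    y_obs = biomarker(50.0, t_obs) + rng.normal(0.0, 0.5, t_obs.size)
    sigma = 0.5

    def log_post(conc):
        if conc <= 0 or conc > 500:            # flat prior on (0, 500] ppm
            return -np.inf
        resid = y_obs - biomarker(conc, t_obs)
        return -0.5 * np.sum((resid / sigma) ** 2)

    # Random-walk Metropolis-Hastings
    chain, conc = [], 10.0
    lp = log_post(conc)
    for _ in range(20000):
        prop = conc + rng.normal(0.0, 5.0)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            conc, lp = prop, lp_prop
        chain.append(conc)

    burned = np.array(chain[5000:])
    print(f"posterior exposure estimate: {burned.mean():.1f} ppm "
          f"(95% CI {np.percentile(burned, 2.5):.1f}-{np.percentile(burned, 97.5):.1f})")
    ```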

  4. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  5. Estimation of biomedical optical properties by simultaneous use of diffuse reflectometry and photothermal radiometry: investigation of light propagation models

    NASA Astrophysics Data System (ADS)

    Fonseca, E. S. R.; de Jesus, M. E. P.

    2007-07-01

    The estimation of the optical properties of highly turbid and opaque biological tissue is a difficult task since conventional purely optical methods rapidly lose sensitivity as the mean photon path length decreases. Photothermal methods, such as pulsed or frequency-domain photothermal radiometry (FD-PTR), on the other hand, show remarkable sensitivity in experimental conditions that produce very feeble optical signals. Photothermal radiometry is primarily sensitive to the absorption coefficient, yielding considerably higher estimation errors on scattering coefficients. Conversely, purely optical methods such as Local Diffuse Reflectance (LDR) depend mainly on the scattering coefficient and yield much better estimates of this parameter. Therefore, at moderate transport albedos, the combination of photothermal and reflectance methods can considerably improve the sensitivity of detection of tissue optical properties. The authors have recently proposed a novel method that combines FD-PTR with LDR, aimed at improving sensitivity in the determination of both optical properties. Signal analysis was performed by globally fitting the experimental data to forward models based on Monte Carlo simulations. Although this approach is accurate, the associated computational burden often limits its use as a forward model. Therefore, the application of analytical models based on the diffusion approximation offers a faster alternative. In this work, we propose the calculation of the diffuse reflectance and fluence rate profiles under the δ-P1 approximation. This approach is known to approximate fluence rates close to collimated sources and boundaries better than the standard diffusion approximation (SDA). We extend this study to the calculation of the diffuse reflectance profiles. The ability of the δ-P1-based model to provide good estimates of the absorption, scattering, and anisotropy coefficients is tested against Monte Carlo simulations over a wide range of scattering-to-absorption ratios. Experimental validation of the proposed method is accomplished by a set of measurements on solid absorbing and scattering phantoms.

  6. A Monte Carlo analysis of breast screening randomized trials.

    PubMed

    Zamora, Luis I; Forastero, Cristina; Guirado, Damián; Lallena, Antonio M

    2016-12-01

    To analyze breast screening randomized trials with a Monte Carlo simulation tool. A simulation tool previously developed to simulate breast screening programmes was adapted for that purpose. The history of women participating in the trials was simulated, including a model for survival after local treatment of invasive cancers. Distributions of the time gained due to screening detection relative to symptomatic detection and the overall screening sensitivity were used as inputs. Several randomized controlled trials were simulated. Except for the age range of the women involved, all simulations used the same population characteristics, which permitted analysis of their external validity. The relative risks obtained were compared to those quoted for the trials, whose internal validity was addressed by further investigating the reasons for the disagreements observed. The Monte Carlo simulations produce results that are in good agreement with most of the randomized trials analyzed, thus indicating their methodological quality and external validity. A reduction of breast cancer mortality of around 20% appears to be a reasonable value according to the results of the trials that are methodologically correct. Discrepancies observed with the Canada I and II trials may be attributed to low mammography quality and some methodological problems. The Kopparberg trial appears to show low methodological quality. Monte Carlo simulations are a powerful tool to investigate breast screening randomized controlled trials, helping to establish those whose results are reliable enough to be extrapolated to other populations, to design trial strategies, and, eventually, to adapt them during their development. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Monte Carlo simulation: Its status and future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murtha, J.A.

    1997-04-01

    Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision [tree] analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.
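
    A minimal sketch of the kind of Monte Carlo economics calculation described above is given below: uncertain reserves, price, operating-cost fraction, and capital exposure are propagated through a simple discounted cash-flow model to produce a probability-vs.-value picture of NPV. All distributions and figures are illustrative assumptions.

    ```python
    # Hedged sketch: Monte Carlo propagation of uncertain inputs to an NPV
    # distribution. Distributions and magnitudes are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    reserves = rng.lognormal(mean=np.log(2.0e6), sigma=0.4, size=n)   # recoverable bbl
    price = rng.normal(70.0, 10.0, n)                                  # $/bbl
    opex_frac = rng.uniform(0.25, 0.40, n)                             # opex as fraction of revenue
    capex = rng.triangular(40e6, 55e6, 80e6, n)                        # capital exposure [$]
    discount, years = 0.10, 8

    # Spread production evenly over the project life and discount the cash flows.
    annual_cf = reserves / years * price * (1.0 - opex_frac)
    pv_factor = sum(1.0 / (1.0 + discount) ** t for t in range(1, years + 1))
    npv = -capex + annual_cf * pv_factor

    print(f"P(NPV > 0)       = {np.mean(npv > 0):.2f}")
    print(f"NPV P10/P50/P90  = {np.percentile(npv, [10, 50, 90]).round(-5)}")
    ```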

  8. Quantitative performance targets by using balanced scorecard system: application to waste management and public administration.

    PubMed

    Mendes, Paula; Nunes, Luis Miguel; Teixeira, Margarida Ribau

    2014-09-01

    This article demonstrates how decision-makers can be guided in the process of defining performance target values in the balanced scorecard system. We apply a method based on sensitivity analysis with Monte Carlo simulation to the municipal solid waste management system in Loulé Municipality (Portugal). The method includes two steps: sensitivity analysis of performance indicators to identify those performance indicators with the highest impact on the balanced scorecard model outcomes; and sensitivity analysis of the target values for the previously identified performance indicators. Sensitivity analysis shows that four strategic objectives (IPP1: Comply with the national waste strategy; IPP4: Reduce nonrenewable resources and greenhouse gases; IPP5: Optimize the life-cycle of waste; and FP1: Meet and optimize the budget) alone contribute 99.7% of the variability in overall balanced scorecard value. Thus, these strategic objectives had a much stronger impact on the estimated balanced scorecard outcome than did others, with the IPP1 and the IPP4 accounting for over 55% and 22% of the variance in overall balanced scorecard value, respectively. The remaining performance indicators contribute only marginally. In addition, a change in the value of a single indicator's target value made the overall balanced scorecard value change by as much as 18%. This may lead to involuntarily biased decisions by organizations regarding performance target-setting, if not prevented with the help of methods such as that proposed and applied in this study. © The Author(s) 2014.

  9. A Monte-Carlo Analysis of Organic Aerosol Volatility with Aerosol Microphysics

    NASA Astrophysics Data System (ADS)

    Gao, C. Y.; Tsigaridis, K.; Bauer, S. E.

    2016-12-01

    A newly developed box model scheme, MATRIX-VBS, includes the volatility-basis set (VBS) framework in the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations and aerosol mixing state. The new scheme advanced the representation of organic aerosols in Earth system models by improving the traditional and simplistic treatment of organic aerosols as non-volatile and with a fixed size distribution. Further development includes adding the condensation of organics on coarse-mode aerosols - dust and sea salt - thus making all organics in the system semi-volatile. To test and simplify the model, a Monte Carlo analysis is performed to pinpoint which processes affect organics the most and under which chemical and meteorological conditions. Since the model's parameterizations can capture a very wide range of conditions, from very clean to very polluted, all possible scenarios on Earth across the whole parameter space, including temperature, location, emissions, and oxidant levels, are examined. The Monte Carlo simulations provide quantitative information on the sensitivity of the newly developed model and help us understand how organics affect the size distribution, mixing state, and volatility distribution across varying meteorological conditions and pollution levels. In addition, these simulations indicate which parameters play a critical role in the aerosol distribution and evolution in the atmosphere and which do not, which will facilitate the simplification of the box model, an important step in its implementation in the global model.

  10. Accurate Monte Carlo simulations for nozzle design, commissioning and quality assurance for a proton radiation therapy facility.

    PubMed

    Paganetti, H; Jiang, H; Lee, S Y; Kooy, H M

    2004-07-01

    Monte Carlo dosimetry calculations are essential methods in radiation therapy. To take full advantage of this tool, the beam delivery system has to be simulated in detail and the initial beam parameters have to be known accurately. The modeling of the beam delivery system itself opens various areas where Monte Carlo calculations prove extremely helpful, such as for design and commissioning of a therapy facility as well as for quality assurance verification. The gantry treatment nozzles at the Northeast Proton Therapy Center (NPTC) at Massachusetts General Hospital (MGH) were modeled in detail using the GEANT4.5.2 Monte Carlo code. For this purpose, various novel solutions for simulating irregular shaped objects in the beam path, like contoured scatterers, patient apertures or patient compensators, were found. The four-dimensional, in time and space, simulation of moving parts, such as the modulator wheel, was implemented. Further, the appropriate physics models and cross sections for proton therapy applications were defined. We present comparisons between measured data and simulations. These show that by modeling the treatment nozzle with millimeter accuracy, it is possible to reproduce measured dose distributions with an accuracy in range and modulation width, in the case of a spread-out Bragg peak (SOBP), of better than 1 mm. The excellent agreement demonstrates that the simulations can even be used to generate beam data for commissioning treatment planning systems. The Monte Carlo nozzle model was used to study mechanical optimization in terms of scattered radiation and secondary radiation in the design of the nozzles. We present simulations on the neutron background. Further, the Monte Carlo calculations supported commissioning efforts in understanding the sensitivity of beam characteristics and how these influence the dose delivered. We present the sensitivity of dose distributions in water with respect to various beam parameters and geometrical misalignments. This allows the definition of tolerances for quality assurance and the design of quality assurance procedures.

  11. The use of a gas chromatography-sensor system combined with advanced statistical methods, towards the diagnosis of urological malignancies

    PubMed Central

    Aggio, Raphael B. M.; de Lacy Costello, Ben; White, Paul; Khalid, Tanzeela; Ratcliffe, Norman M.; Persad, Raj; Probert, Chris S. J.

    2016-01-01

    Prostate cancer is one of the most common cancers. Serum prostate-specific antigen (PSA) is used to aid the selection of men undergoing biopsies. Its use remains controversial. We propose a GC-sensor algorithm system for classifying urine samples from patients with urological symptoms. This pilot study includes 155 men presenting to urology clinics; 58 were diagnosed with prostate cancer, 24 with bladder cancer and 73 with haematuria and/or poor stream, without cancer. Principal component analysis (PCA) was applied to assess the discrimination achieved, while linear discriminant analysis (LDA) and support vector machines (SVM) were used as statistical models for sample classification. Leave-one-out cross-validation (LOOCV), repeated 10-fold cross-validation (10FoldCV), repeated double cross-validation (DoubleCV) and Monte Carlo permutations were applied to assess performance. Significant separation was found between prostate cancer and control samples, bladder cancer and controls and between bladder and prostate cancer samples. For prostate cancer diagnosis, the GC/SVM system classified samples with 95% sensitivity and 96% specificity after LOOCV. For bladder cancer diagnosis, the SVM reported 96% sensitivity and 100% specificity after LOOCV, while the DoubleCV reported 87% sensitivity and 99% specificity, with SVM showing 78% and 98% sensitivity between prostate and bladder cancer samples. Evaluation of the Monte Carlo permutation of class labels yielded chance-like accuracy values around 50%, suggesting that the observed results for bladder cancer and prostate cancer detection are not due to overfitting. The results of the pilot study presented here indicate that the GC system is able to successfully identify patterns that allow classification of urine samples from patients with urological cancers. An accurate diagnosis based on urine samples would reduce the number of negative prostate biopsies performed, and the frequency of surveillance cystoscopy for bladder cancer patients. Larger cohort studies are planned to investigate the potential of this system. Future work may lead to non-invasive breath analyses for diagnosing urological conditions. PMID:26865331
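
    The classification and validation workflow above (SVM classification, leave-one-out cross-validation, and a Monte Carlo permutation test to rule out overfitting) can be sketched with scikit-learn as follows; synthetic features stand in for the GC-sensor data.

    ```python
    # Hedged sketch: SVM with LOOCV and a label-permutation test on synthetic data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import LeaveOneOut, cross_val_score, permutation_test_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=80, n_features=20, n_informative=5,
                               random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))

    loo_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"LOOCV accuracy: {loo_acc:.2f}")

    # Monte Carlo permutation of class labels: chance-level permuted scores (~0.5)
    # indicate the cross-validated performance is not an artefact of overfitting.
    score, perm_scores, p_value = permutation_test_score(
        clf, X, y, cv=5, n_permutations=200, random_state=0)
    print(f"observed score {score:.2f}, permuted mean {perm_scores.mean():.2f}, p = {p_value:.3f}")
    ```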

  12. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    NASA Astrophysics Data System (ADS)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

    There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory even though the dimensions and orientations of the rock samples are known. More challenges are expected in estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to the source-receiver offset, vertical interval velocity error and time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread lengths. However, the method is extremely sensitive to time-picking errors caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
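
    A stripped-down version of the Monte Carlo sensitivity test is sketched below: synthetic reflection traveltimes are generated from a simplified quartic moveout relation t² = c0 + c1x² + c2x⁴ (the full non-hyperbolic equation used in the paper includes an additional denominator term), random time-picking errors are added, and the scatter of the recovered quartic coefficient is examined. All numerical values are illustrative.

    ```python
    # Hedged sketch: sensitivity of a quartic moveout fit to time-picking noise.
    import numpy as np

    rng = np.random.default_rng(3)
    t0, vnmo = 1.0, 2000.0                          # s, m/s
    c0, c1, c2 = t0**2, 1.0 / vnmo**2, -2.0e-14     # "true" moveout coefficients
    offsets = np.linspace(100.0, 3000.0, 30)        # m

    def pick_times(sigma_ms):
        # exact traveltimes from the simplified quartic relation, plus picking noise
        t2 = c0 + c1 * offsets**2 + c2 * offsets**4
        return np.sqrt(t2) + rng.normal(0.0, sigma_ms * 1e-3, offsets.size)

    for sigma_ms in (0.0, 1.0, 2.0):                # picking-error standard deviation
        est = []
        for _ in range(500):
            t = pick_times(sigma_ms)
            # least-squares fit of t^2 against [1, x^2, x^4]
            A = np.column_stack([np.ones_like(offsets), offsets**2, offsets**4])
            coef, *_ = np.linalg.lstsq(A, t**2, rcond=None)
            est.append(coef[2])
        est = np.array(est)
        print(f"sigma = {sigma_ms:3.1f} ms: relative std of quartic term = {est.std()/abs(c2):6.1f}")
    ```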

  13. A novel quantitative analysis method of three-dimensional fluorescence spectra for vegetable oils contents in edible blend oil

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Wang, Yu-Tian; Liu, Xiao-Fei

    2015-04-01

    Edible blend oil is a mixture of vegetable oils. An eligible blend oil can meet the daily need for the two essential fatty acids that humans require for balanced nutrition. Each vegetable oil has a different composition, so the vegetable oil contents in an edible blend oil determine its nutritional components. A high-precision quantitative analysis method to detect the vegetable oil contents in blend oil is therefore necessary to ensure balanced nutrition. The three-dimensional fluorescence technique offers high selectivity, high sensitivity, and high efficiency. Efficient extraction and full use of the information in three-dimensional fluorescence spectra improve the accuracy of the measurement. A novel quantitative analysis method is proposed based on Quasi-Monte Carlo integration to improve the measurement sensitivity and reduce the random error. The partial least squares method is used to solve the nonlinear equations and avoid the effect of multicollinearity. The recovery rates of blend oil mixed from peanut oil, soybean oil and sunflower oil are calculated to verify the accuracy of the method; they are improved compared with the linear method commonly used for component concentration measurement.
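
    The pairing of Quasi-Monte Carlo integration with partial least squares can be sketched as below, where a synthetic two-peak excitation-emission landscape stands in for measured three-dimensional fluorescence spectra; the band definitions and noise level are illustrative assumptions rather than the authors' actual procedure.

    ```python
    # Hedged sketch: Sobol-sequence (Quasi-Monte Carlo) band integrals of a synthetic
    # excitation-emission surface, used as features for a PLS calibration of fractions.
    import numpy as np
    from scipy.stats import qmc
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(7)

    def eem(ex, em, f1, f2):
        # two Gaussian fluorophore peaks weighted by the oil fractions f1, f2
        p1 = np.exp(-((ex - 0.3)**2 + (em - 0.4)**2) / 0.02)
        p2 = np.exp(-((ex - 0.6)**2 + (em - 0.7)**2) / 0.03)
        return f1 * p1 + f2 * p2

    # Sobol points on the unit square used as quasi-random integration nodes.
    nodes = qmc.Sobol(d=2, scramble=True, seed=7).random_base2(m=10)   # 1024 nodes

    def band_integrals(f1, f2):
        vals = eem(nodes[:, 0], nodes[:, 1], f1, f2)
        lo = nodes[:, 0] < 0.5                    # crude "band" split of the plane
        return np.array([vals[lo].mean(), vals[~lo].mean()])

    # Calibration mixtures with known fractions (f1 + f2 = 1), plus measurement noise.
    f1_train = rng.uniform(0.0, 1.0, 40)
    X_train = np.array([band_integrals(f, 1 - f) for f in f1_train])
    X_train += rng.normal(0.0, 0.002, X_train.shape)

    pls = PLSRegression(n_components=2).fit(X_train, f1_train)
    f1_test = np.array([0.25, 0.5, 0.75])
    X_test = np.array([band_integrals(f, 1 - f) for f in f1_test])
    print("predicted fractions:", pls.predict(X_test).ravel().round(3))
    ```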

  14. Modeling active capping efficacy. 1. Metal and organometal contaminated sediment remediation.

    PubMed

    Viana, Priscilla Z; Yin, Ke; Rockne, Karl J

    2008-12-01

    Cd, Cr, Pb, Ag, As, Ba, Hg, CH3Hg, and CN transport through sand, granular activated carbon (GAC), organoclay, shredded tires, and apatite caps was modeled by deterministic and Monte Carlo methods. Time to 10% breakthrough, 30 and 100 yr cumulative release were metrics of effectiveness. Effective caps prevented above-cap concentrations from exceeding USEPA acute criteria at 100 yr assuming below-cap concentrations at solubility. Sand caps performed best under diffusion due to the greater diffusive path length. Apatite had the best advective performance for Cd, Cr, and Pb. Organoclay performed best for Ag, As, Ba, CH3Hg, and CN. Organoclay and apatite were equally effective for Hg. Monte Carlo analysis was used to determine output sensitivity. Sand was effective under diffusion for Cr within the 50% confidence interval (CI), for Cd and Pb (75% CI), and for As, Hg, and CH3Hg (95% CI). Under diffusion and advection, apatite was effective for Cd, Pb, and Hg (75% CI) and organoclay was effective for Hg and CH3Hg (50% CI). GAC and shredded tires performed relatively poorly. Although no single cap is a panacea, apatite and organoclay have the broadest range of effectiveness. Cap performance is most sensitive to the partitioning coefficient and hydraulic conductivity, indicating the importance of accurate site-specific measurement for these parameters.

  15. Landslide Failure Likelihoods Estimated Through Analysis of Suspended Sediment and Streamflow Time Series Data

    NASA Astrophysics Data System (ADS)

    Stark, C. P.; Rudd, S.; Lall, U.; Hovius, N.; Dadson, S.; Chen, M.-C.

    Off-Axis DOAS measurements of natural scattered light, based upon the well-known DOAS technique, allow the sensitivity to the trace gas profile in question to be optimized by strongly increasing the light path through the relevant atmospheric layers. Multi-Axis (MAX-)DOAS instruments probe several directions simultaneously or sequentially to increase the spatial resolution. Several devices (ground-based, airborne and ship-borne) are operated by our group in the framework of the SCIAMACHY validation. Radiative transfer models are an essential requirement for the interpretation of these measurements and their conversion into detailed profile data. Apart from some existing Monte Carlo models, most codes use analytical algorithms to solve the radiative transfer equation for given atmospheric conditions. For specific circumstances, e.g. photon scattering within clouds, these approaches are not efficient enough to provide sufficient accuracy. Horizontal gradients in atmospheric parameters also have to be taken into account. To meet the needs of measurement situations for all kinds of scattered-light DOAS platforms, a three-dimensional fully spherical Monte Carlo model was devised. Here we present air mass factors (AMF) used to calculate vertical column densities (VCD) from measured slant column densities (SCD). Sensitivity studies on the influence of the wavelength and telescope direction used, of the altitude of profile layers, albedo, refraction and basic aerosols are shown. Modelled intensity series are also compared with radiometer data.

  16. Water quality modeling for urban reach of Yamuna river, India (1999-2009), using QUAL2Kw

    NASA Astrophysics Data System (ADS)

    Sharma, Deepshikha; Kansal, Arun; Pelletier, Greg

    2017-06-01

    The study aimed to characterize and understand the water quality of the river Yamuna in Delhi (India) prior to developing an efficient restoration plan. A combination of monitoring data collection, mathematical modeling, and sensitivity and uncertainty analysis was carried out using QUAL2Kw, a river quality model. The model was applied to simulate DO, BOD, total coliform, and total nitrogen at four monitoring stations, namely Palla, Old Delhi Railway Bridge, Nizamuddin, and Okhla, for 10 years (October 1999-June 2009) excluding the monsoon seasons (July-September). The study period was divided into two parts: monthly average data from October 1999-June 2004 (45 months) were used to calibrate the model, and monthly average data from October 2005-June 2009 (45 months) were used to validate the model. The R2 for CBODf and TN lies within the ranges of 0.53-0.75 and 0.68-0.83, respectively, showing that the model gives satisfactory results in terms of R2 for CBODf, TN, and TC. Sensitivity analysis showed that the DO, CBODf, TN, and TC predictions are highly sensitive to headwater flow and to point-source flow and quality. Uncertainty analysis using Monte Carlo showed that the input data have been simulated in accordance with the prevalent river conditions.

  17. Angular correlations in the prompt neutron emission in spontaneous fission of 252Cf

    NASA Astrophysics Data System (ADS)

    Kopatch, Yuri; Chietera, Andreina; Stuttgé, Louise; Gönnenwein, Friedrich; Mutterer, Manfred; Gagarski, Alexei; Guseva, Irina; Dorvaux, Olivier; Hanappe, Francis; Hambsch, Franz-Josef

    2017-09-01

    An experiment aiming at the detailed investigation of angular correlations in the neutron emission from spontaneous fission of 252Cf has been performed at IPHC Strasbourg using the angle-sensitive double ionization chamber CODIS for measuring fission fragments and a set of 60 DEMON scintillator counters for neutron detection. The main aim of the experiment is to search for an anisotropy of neutron emission in the center-of-mass system of the fragments. The present status of the data analysis and the full Monte-Carlo simulation of the experiment are reported in the present paper.

  18. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)

  19. Introducing uncertainty analysis of nucleation and crystal growth models in Process Analytical Technology (PAT) system design of crystallization processes.

    PubMed

    Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul

    2013-11-01

    This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
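
    A minimal sketch of one part of this workflow, Monte Carlo propagation of uncertain kinetic parameters followed by standardized regression coefficients (SRC), is shown below. The surrogate crystal-size expression and parameter ranges are invented for illustration and are not the KDP population-balance model; Morris screening is omitted.

    ```python
    # Hedged sketch: Monte Carlo uncertainty propagation through a toy surrogate,
    # then SRC-based ranking of the nucleation/growth parameters.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 5000
    # Uncertain kinetic parameters: nucleation rate constant/order, growth constant/order.
    kb = rng.lognormal(np.log(1e8), 0.3, n)
    b  = rng.normal(2.0, 0.2, n)
    kg = rng.lognormal(np.log(5e-6), 0.3, n)
    g  = rng.normal(1.2, 0.1, n)
    S  = 1.15                                      # fixed supersaturation for the surrogate

    # Toy surrogate for mean crystal size: growth over (damped) nucleation, both power laws.
    d_mean = (kg * (S - 1)**g) / (kb * (S - 1)**b)**0.25

    X = np.column_stack([kb, b, kg, g])
    names = ["kb", "b", "kg", "g"]

    # Standardized regression coefficients: regress standardized output on
    # standardized inputs; coefficient magnitude ranks first-order influence.
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (d_mean - d_mean.mean()) / d_mean.std()
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    for name, c in sorted(zip(names, src), key=lambda p: -abs(p[1])):
        print(f"SRC({name}) = {c:+.3f}")
    ```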

  20. Fusion-neutron-yield, activation measurements at the Z accelerator: design, analysis, and sensitivity.

    PubMed

    Hahn, K D; Cooper, G W; Ruiz, C L; Fehl, D L; Chandler, G A; Knapp, P F; Leeper, R J; Nelson, A J; Smelser, R M; Torres, J A

    2014-04-01

    We present a general methodology to determine the diagnostic sensitivity that is directly applicable to neutron-activation diagnostics fielded on a wide variety of neutron-producing experiments, which include inertial-confinement fusion (ICF), dense plasma focus, and ion beam-driven concepts. This approach includes a combination of several effects: (1) non-isotropic neutron emission; (2) the 1/r² decrease in neutron fluence in the activation material; (3) the spatially distributed neutron scattering, attenuation, and energy losses due to the fielding environment and activation material itself; and (4) temporally varying neutron emission. As an example, we describe the copper-activation diagnostic used to measure secondary deuterium-tritium fusion-neutron yields on ICF experiments conducted on the pulsed-power Z Accelerator at Sandia National Laboratories. Using this methodology along with results from absolute calibrations and Monte Carlo simulations, we find that for the diagnostic configuration on Z, the diagnostic sensitivity is 0.037% ± 17% counts/neutron per cm² and is ∼40% less sensitive than it would be in an ideal geometry due to neutron attenuation, scattering, and energy-loss effects.

  1. Supercritical Quasi-Conduction States in Stochastic Rayleigh-Benard Convection

    DTIC Science & Technology

    2011-09-15

    [Fragmented DTIC abstract snippets; only partial content is recoverable.] The record concerns the sensitivity, in the sense of Sobol, of the integrated Nusselt number with respect to the amplitude of the boundary [text truncated], with global sensitivity indices defined as the ratio between the variance of [text truncated], evaluated using a multi-element quadrature formula. Cited reference: I. M. Sobol, "Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates," Math. Comput. Simul. 55 (2001) 271.

  2. Sensitivity analysis and uncertainty estimation in ash concentration simulations and tephra deposit daily forecasted at Mt. Etna, in Italy

    NASA Astrophysics Data System (ADS)

    Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano

    2015-04-01

    The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and on our capability to represent the dynamics of an incoming eruption. Forecasts help governments to reduce the risks associated with volcanic eruptions, and for this reason different kinds of analysis that help to understand the effect that each input parameter has on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We vary the main input parameters, such as the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation models and the diffusion coefficient, perform thousands of simulations and analyze the results. The study is carried out on two different Etna scenarios: the sub-Plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted some minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns up to several kilometers above sea level and lasted some hours. Sensitivity analysis and uncertainty estimation results help us to identify the measurements that volcanologists should perform during a volcanic crisis to reduce the model uncertainty.

  3. The use of atmospheric measurements to constrain model predictions of ozone change from chlorine perturbations

    NASA Technical Reports Server (NTRS)

    Douglass, Anne R.; Stolarski, Richard S.

    1987-01-01

    Atmospheric photochemistry models have been used to predict the sensitivity of the ozone layer to various perturbations. These same models also predict concentrations of chemical species in the present day atmosphere which can be compared to observations. Model results for both present day values and sensitivity to perturbation depend upon input data for reaction rates, photodissociation rates, and boundary conditions. A method of combining the results of a Monte Carlo uncertainty analysis with the existing set of present atmospheric species measurements is developed. The method is used to examine the range of values for the sensitivity of ozone to chlorine perturbations that is possible within the currently accepted ranges for input data. It is found that model runs which predict ozone column losses much greater than 10 percent as a result of present fluorocarbon fluxes produce concentrations and column amounts in the present atmosphere which are inconsistent with the measurements for ClO, HCl, NO, NO2, and HNO3.

  4. Minerva exoplanet detection sensitivity from simulated observations

    NASA Astrophysics Data System (ADS)

    McCrady, Nate; Nava, C.

    2014-01-01

    Small rocky planets induce radial velocity signals that are difficult to detect in the presence of stellar noise sources of comparable or larger amplitude. Minerva is a dedicated, robotic observatory that will attain 1 meter per second precision to detect these rocky planets in the habitable zone around nearby stars. We present results of an ongoing project investigating Minerva’s planet detection sensitivity as a function of observational cadence, planet mass, and orbital parameters (period, eccentricity, and argument of periastron). Radial velocity data is simulated with realistic observing cadence, accounting for weather patterns at Mt. Hopkins, Arizona. Instrumental and stellar noise are added to the simulated observations, including effects of oscillation, jitter, starspots and rotation. We extract orbital parameters from the simulated RV data using the RVLIN code. A Monte Carlo analysis is used to explore the parameter space and evaluate planet detection completeness. Our results will inform the Minerva observing strategy by providing a quantitative measure of planet detection sensitivity as a function of orbital parameters and cadence.

  5. Methodology for Collision Risk Assessment of an Airspace Flow Corridor Concept

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    This dissertation presents a methodology to estimate the collision risk associated with a future air-transportation concept called the flow corridor. The flow corridor is a Next Generation Air Transportation System (NextGen) concept to reduce congestion and increase throughput in en-route airspace. The flow corridor has the potential to increase throughput by reducing the controller workload required to manage aircraft outside the corridor and by reducing separation of aircraft within the corridor. The analysis in this dissertation is a starting point for the safety analysis required by the Federal Aviation Administration (FAA) to eventually approve and implement the corridor concept. This dissertation develops a hybrid risk analysis methodology that combines Monte Carlo simulation with dynamic event tree analysis. The analysis captures the unique characteristics of the flow corridor concept, including self-separation within the corridor, lane change maneuvers, speed adjustments, and the automated separation assurance system. Monte Carlo simulation is used to model the movement of aircraft in the flow corridor and to identify precursor events that might lead to a collision. Since these precursor events are not rare, standard Monte Carlo simulation can be used to estimate their occurrence rates. Dynamic event trees are then used to model the subsequent series of events that may lead to a collision. When two aircraft are on course for a near-mid-air collision (NMAC), the on-board automated separation assurance system provides a series of safety layers to prevent the impending NMAC or collision. Dynamic event trees are used to evaluate the potential failures of these layers in order to estimate the rare-event collision probabilities. The results show that the throughput can be increased by reducing separation to 2 nautical miles while maintaining the current level of safety. A sensitivity analysis shows that the parameters most critical to the overall collision probability are the minimum separation, the probability that both flights fail to respond to the traffic collision avoidance system, the probability that an NMAC results in a collision, the failure probability of the automatic dependent surveillance-broadcast (ADS-B) In receiver, and the conflict detection probability.

  6. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
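
    The analytical forward model at the core of such tools can be illustrated as below: the Ogata-Banks solution of the one-dimensional equilibrium convection-dispersion equation is fitted to noisy breakthrough data by nonlinear least squares. This is a hedged stand-alone sketch in Python, not the Excel/VBA implementation described in the paper, and the parameter values are illustrative.

    ```python
    # Hedged sketch: fit the Ogata-Banks CDE step solution to synthetic breakthrough data.
    import numpy as np
    from scipy.special import erfc, erfcx
    from scipy.optimize import curve_fit

    def cde_step(t, v, D, x=10.0):
        # Relative concentration at distance x for a continuous step input.
        # Written with erfcx so the exp(v*x/D) term cannot overflow:
        # exp(v*x/D)*erfc(b) = exp(-a^2)*erfcx(b).
        t = np.asarray(t, dtype=float)
        a = (x - v * t) / (2.0 * np.sqrt(D * t))
        b = (x + v * t) / (2.0 * np.sqrt(D * t))
        return 0.5 * (erfc(a) + np.exp(-a * a) * erfcx(b))

    rng = np.random.default_rng(5)
    t = np.linspace(0.5, 40.0, 60)                      # hours
    c_obs = cde_step(t, v=0.8, D=0.5) + rng.normal(0.0, 0.01, t.size)

    popt, pcov = curve_fit(cde_step, t, c_obs, p0=[0.5, 1.0],
                           bounds=([1e-3, 1e-3], [10.0, 10.0]))
    perr = np.sqrt(np.diag(pcov))
    print(f"v = {popt[0]:.3f} ± {perr[0]:.3f},  D = {popt[1]:.3f} ± {perr[1]:.3f}")
    ```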

  7. A Computational Framework for Identifiability and Ill-Conditioning Analysis of Lithium-Ion Battery Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    López C, Diana C.; Wozny, Günter; Flores-Tlacuahuac, Antonio

    2016-03-23

    The lack of informative experimental data and the complexity of first-principles battery models make the recovery of kinetic, transport, and thermodynamic parameters complicated. We present a computational framework that combines sensitivity, singular value, and Monte Carlo analysis to explore how different sources of experimental data affect parameter structural ill conditioning and identifiability. Our study is conducted on a modified version of the Doyle-Fuller-Newman model. We demonstrate that the use of voltage discharge curves only enables the identification of a small parameter subset, regardless of the number of experiments considered. Furthermore, we show that the inclusion of a single electrolyte concentration measurement significantly aids identifiability and mitigates ill-conditioning.

  8. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable on a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables on these failure modes. Pf is calculated by Monte Carlo simulation, and the sensitivity of each random variable is assessed by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases and that for the cohesion of the foundation soil (c2) decreases with an increase of the variation of φ1, while Rf for the unit weights (γ1 and γ2) of both soils and for the friction angle of the foundation soil (φ2) remains almost constant for variation of the soil properties. The results compared well with some existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved. But if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
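
    The Monte Carlo part of the approach can be sketched as follows for a single failure mode (sliding): the random soil variables are sampled and the fraction of realizations in which the active thrust exceeds the base resistance estimates Pf. Geometry, distributions, and the simple Rankine/Coulomb check are illustrative assumptions, not the paper's full multi-mode analysis or its F-test step.

    ```python
    # Hedged sketch: Monte Carlo estimate of the sliding failure probability of a
    # gravity retaining wall under illustrative geometry and soil distributions.
    import numpy as np

    rng = np.random.default_rng(13)
    n = 200_000

    H, B, b_top = 6.0, 1.5, 0.5              # wall height, base width, top width [m]
    gamma_wall = 24.0                          # concrete unit weight [kN/m^3]
    W = 0.5 * (B + b_top) * H * gamma_wall     # wall weight per metre run [kN/m]

    phi1 = np.radians(rng.normal(32.0, 32.0 * 0.07, n))   # backfill friction angle
    gamma1 = rng.normal(18.0, 0.5, n)                      # backfill unit weight [kN/m^3]
    phi2 = np.radians(rng.normal(28.0, 28.0 * 0.07, n))   # foundation friction angle
    c2 = rng.normal(30.0, 5.0, n)                          # foundation cohesion [kPa]

    Ka = np.tan(np.pi / 4.0 - phi1 / 2.0) ** 2             # Rankine active coefficient
    Pa = 0.5 * Ka * gamma1 * H**2                          # active thrust [kN/m]
    resisting = W * np.tan(phi2) + c2 * B                  # base sliding resistance [kN/m]

    pf = np.mean(Pa > resisting)
    print(f"Monte Carlo estimate of sliding failure probability: P_f = {pf:.4f}")
    ```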

  9. Estimating the Expected Value of Sample Information Using the Probabilistic Sensitivity Analysis Sample

    PubMed Central

    Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny

    2015-01-01

    Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
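
    A compact sketch of the regression-based EVSI idea is given below for a two-option toy decision model: a study summary statistic is simulated for each draw of an existing PSA sample, incremental net benefit is regressed on it (a cubic polynomial here, whereas the paper uses more flexible smoothers), and EVSI is the gain in the expected maximum. All model inputs are illustrative assumptions.

    ```python
    # Hedged sketch: regression-based EVSI from an existing PSA sample for a
    # two-option decision (option A has net benefit normalized to zero).
    import numpy as np

    rng = np.random.default_rng(17)
    n_psa = 10_000

    # PSA sample: uncertain treatment effect and cost difference of option B vs A.
    effect = rng.normal(0.10, 0.05, n_psa)        # QALY gain
    dcost = rng.normal(2000.0, 500.0, n_psa)      # extra cost
    wtp = 30_000.0
    inb = wtp * effect - dcost                    # incremental net benefit (B - A)

    # Proposed study: a trial measuring the effect with m patients per arm.
    m = 100
    se = 0.25 / np.sqrt(m / 2)                    # assumed sampling SE of the estimate
    summary = effect + rng.normal(0.0, se, n_psa) # simulated study estimate per PSA draw

    # Regression of INB on the summary statistic (flexible smoothers could be used instead).
    coef = np.polyfit(summary, inb, deg=3)
    fitted = np.polyval(coef, summary)

    evsi = np.mean(np.maximum(fitted, 0.0)) - max(np.mean(inb), 0.0)
    print(f"EVSI ≈ {evsi:.0f} (monetary units)")
    ```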

  10. Cost-effectiveness of supervised exercise therapy in heart failure patients.

    PubMed

    Kühr, Eduardo M; Ribeiro, Rodrigo A; Rohde, Luis Eduardo P; Polanczyk, Carisi A

    2011-01-01

    Exercise therapy in heart failure (HF) patients is considered safe and has demonstrated modest reductions in hospitalization rates and death in recent trials. A previous cost-effectiveness analysis described favorable results assuming long-term supervised exercise intervention and significant effectiveness of exercise therapy; however, this evidence is no longer supported. To evaluate the cost-effectiveness of supervised exercise therapy in HF patients from the perspective of the Brazilian Public Healthcare System, we developed a Markov model to evaluate the incremental cost-effectiveness ratio of supervised exercise therapy compared to standard treatment in patients with New York Heart Association HF class II and III. Effectiveness was evaluated in quality-adjusted life years over a 10-year time horizon. We searched PUBMED for published clinical trials to estimate effectiveness, mortality, hospitalization, and utility data. Treatment costs were obtained from a published cohort updated to 2008 values. Exercise therapy intervention costs were obtained from a rehabilitation center. Model robustness was assessed through Monte Carlo simulation and sensitivity analysis. Costs were expressed in international dollars, applying the purchasing-power-parity conversion rate. Exercise therapy showed a small reduction in hospitalization and mortality at a low cost, with an incremental cost-effectiveness ratio of Int$26,462/quality-adjusted life year. Results were most sensitive to exercise therapy costs, standard treatment total costs, exercise therapy effectiveness, and medication costs. Considering a willingness-to-pay of Int$27,500, 55% of the trials fell below this value in the Monte Carlo simulation. In a Brazilian scenario, exercise therapy shows a reasonable cost-effectiveness ratio, despite current evidence of limited benefit of this intervention. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
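
    The modeling pattern described above can be illustrated with a small three-state Markov cohort model and a probabilistic sensitivity analysis, as sketched below. Transition probabilities, costs, utilities, and the therapy effect are invented for illustration and are not the study's inputs; the sketch only shows how an ICER and a cost-effectiveness probability at a willingness-to-pay threshold are computed.

    ```python
    # Hedged sketch: three-state Markov cohort model (well / hospitalized / dead)
    # with Monte Carlo probabilistic sensitivity analysis. All inputs are invented.
    import numpy as np

    rng = np.random.default_rng(23)

    def run_markov(p_hosp, p_die, cost_cycle, utility, n_cycles=10, disc=0.05):
        """Annual-cycle cohort model; returns discounted (cost, QALY) per patient."""
        state = np.array([1.0, 0.0, 0.0])            # well, hospitalized, dead
        total_cost = total_qaly = 0.0
        for t in range(n_cycles):
            P = np.array([[1 - p_hosp - p_die, p_hosp, p_die],
                          [0.60,               0.30,   0.10],
                          [0.0,                0.0,    1.0]])
            state = state @ P
            d = 1.0 / (1.0 + disc) ** (t + 1)
            total_cost += d * (state[0] * cost_cycle + state[1] * (cost_cycle + 4000.0))
            total_qaly += d * (state[0] * utility + state[1] * 0.5 * utility)
        return total_cost, total_qaly

    n_sims, wtp = 2000, 27_500.0
    inc_cost, inc_qaly = np.empty(n_sims), np.empty(n_sims)
    for i in range(n_sims):
        p_hosp = rng.beta(20, 80)                   # ~0.20 hospitalization risk per cycle
        rr = rng.lognormal(np.log(0.85), 0.10)      # relative risk with exercise therapy
        p_die = rng.beta(5, 95)
        u = rng.beta(70, 30)
        c_std = run_markov(p_hosp, p_die, 1500.0, u)
        c_ex = run_markov(rr * p_hosp, p_die, 1500.0 + 600.0, u)   # add therapy cost
        inc_cost[i] = c_ex[0] - c_std[0]
        inc_qaly[i] = c_ex[1] - c_std[1]

    icer = inc_cost.mean() / inc_qaly.mean()
    prob_ce = np.mean(wtp * inc_qaly - inc_cost > 0)
    print(f"ICER ≈ {icer:,.0f} per QALY; P(cost-effective at WTP {wtp:,.0f}) = {prob_ce:.2f}")
    ```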

  11. Cost-effectiveness of angiographic imaging in isolated perimesencephalic subarachnoid hemorrhage.

    PubMed

    Kalra, Vivek B; Wu, Xiao; Forman, Howard P; Malhotra, Ajay

    2014-12-01

    The purpose of this study is to perform a comprehensive cost-effectiveness analysis of all possible permutations of computed tomographic angiography (CTA) and digital subtraction angiography imaging strategies for both initial diagnosis and follow-up imaging in patients with perimesencephalic subarachnoid hemorrhage on noncontrast CT. Each possible imaging strategy was evaluated in a decision tree created with TreeAge Pro Suite 2014, with parameters derived from a meta-analysis of 40 studies and literature values. Base case and sensitivity analyses were performed to assess the cost-effectiveness of each strategy. A Monte Carlo simulation was conducted with distributional variables to evaluate the robustness of the optimal strategy. The base case scenario showed performing initial CTA with no follow-up angiographic studies in patients with perimesencephalic subarachnoid hemorrhage to be the most cost-effective strategy ($5422/quality adjusted life year). Using a willingness-to-pay threshold of $50 000/quality adjusted life year, the most cost-effective strategy based on net monetary benefit is CTA with no follow-up when the sensitivity of initial CTA is >97.9%, and CTA with CTA follow-up otherwise. The Monte Carlo simulation reported CTA with no follow-up to be the optimal strategy at willingness-to-pay of $50 000 in 99.99% of the iterations. Digital subtraction angiography, whether at initial diagnosis or as part of follow-up imaging, is never the optimal strategy in our model. CTA without follow-up imaging is the optimal strategy for evaluation of patients with perimesencephalic subarachnoid hemorrhage when modern CT scanners and a strict definition of perimesencephalic subarachnoid hemorrhage are used. Digital subtraction angiography and follow-up imaging are not optimal as they carry complications and associated costs. © 2014 American Heart Association, Inc.

  12. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional MC and the non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of the infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
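
    The non-stationary Monte Carlo idea can be sketched as follows: year-by-year GEV parameters drift over the design lifespan, many synthetic lifespans are generated, and the distribution of the lifespan maximum is compared with a stationary baseline. The linear trend in the location parameter is an illustrative assumption, not the GCM/VIC-derived parameters used in the study.

    ```python
    # Hedged sketch: lifespan maxima from year-varying GEV parameters vs a stationary baseline.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(29)
    lifespan, n_real = 50, 10_000        # design life [years], MC realizations

    # Year-by-year GEV parameters (scipy's shape c ≈ -xi): location drifts upward.
    c, scale = -0.1, 30.0
    loc = 100.0 + 0.8 * np.arange(lifespan)      # e.g. mm/day, +0.8 per year (assumed trend)

    # Non-stationary: each year of each realization is drawn from that year's GEV.
    annual = genextreme.rvs(c, loc=loc, scale=scale,
                            size=(n_real, lifespan), random_state=rng)
    nonstat_max = annual.max(axis=1)

    # Stationary baseline: every year uses the first-year parameters.
    stat_annual = genextreme.rvs(c, loc=loc[0], scale=scale,
                                 size=(n_real, lifespan), random_state=rng)
    stat_max = stat_annual.max(axis=1)

    for q in (50, 90, 99):
        print(f"{q}th percentile of lifespan maximum: "
              f"stationary {np.percentile(stat_max, q):6.1f}  "
              f"non-stationary {np.percentile(nonstat_max, q):6.1f}")
    ```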

  13. SU-E-T-525: Ionization Chamber Perturbation in Flattening Filter Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czarnecki, D; Voigts-Rhetz, P von; Zink, K

    2015-06-15

    Purpose: Changing the characteristics of a photon beam by mechanically removing the flattening filter may affect the dose response of ionization chambers. Thus, perturbation factors of cylindrical ionization chambers in conventional and flattening-filter-free photon beams were calculated by Monte Carlo simulations. Methods: The EGSnrc/BEAMnrc code system was used for all Monte Carlo calculations. BEAMnrc models of nine different linear accelerators with and without flattening filter were used to create realistic photon sources. Monte Carlo calculations to determine the fluence perturbations due to the presence of the chamber components, the different material of the sensitive volume (air instead of water), as well as the volume effect were performed with the user code egs-chamber. Results: Stem, central electrode, wall, density and volume perturbation factors for linear accelerators with and without flattening filter were calculated as a function of the beam quality specifier TPR20/10. A bias between the perturbation factors as a function of TPR20/10 for flattening-filter-free beams and conventional linear accelerators could not be observed for the perturbations caused by the components of the ionization chamber and the sensitive volume. Conclusion: The results indicate that the well-known small bias between the beam quality correction factor as a function of TPR20/10 for flattening-filter-free and conventional linear accelerators is not caused by the geometry of the detector but rather by the material of the sensitive volume. This suggests that the bias for flattening-filter-free photon fields is caused only by the different material of the sensitive volume (air instead of water).

  14. Monte Carlo simulation of the spatial resolution and depth sensitivity of two-dimensional optical imaging of the brain

    PubMed Central

    Tian, Peifang; Devor, Anna; Sakadžić, Sava; Dale, Anders M.; Boas, David A.

    2011-01-01

    Absorption- or fluorescence-based two-dimensional (2-D) optical imaging is widely employed in functional brain imaging. The image is a weighted sum of the real signal from the tissue at different depths. This weighting function is defined as “depth sensitivity.” Characterizing depth sensitivity and spatial resolution is important to better interpret the functional imaging data. However, due to light scattering and absorption in biological tissues, our knowledge of these is incomplete. We use Monte Carlo simulations to carry out a systematic study of spatial resolution and depth sensitivity for 2-D optical imaging methods with configurations typically encountered in functional brain imaging. We found the following: (i) the spatial resolution is <200 μm for NA ≤0.2 or focal plane depth ≤300 μm. (ii) More than 97% of the signal comes from the top 500 μm of the tissue. (iii) For activated columns with lateral size larger than the spatial resolution, changing the numerical aperture (NA) and focal plane depth does not affect depth sensitivity. (iv) For either smaller columns or large columns covered by surface vessels, increasing NA and/or focal plane depth may improve depth sensitivity at deeper layers. Our results provide valuable guidance for the optimization of optical imaging systems and data interpretation. PMID:21280912

  15. Sensitivity of an Elekta iView GT a-Si EPID model to delivery errors for pre-treatment verification of IMRT fields.

    PubMed

    Herwiningsih, Sri; Hanlon, Peta; Fielding, Andrew

    2014-12-01

    A Monte Carlo model of an Elekta iViewGT amorphous silicon electronic portal imaging device (a-Si EPID) has been validated for pre-treatment verification of clinical IMRT treatment plans. The simulations involved the use of the BEAMnrc and DOSXYZnrc Monte Carlo codes to predict the response of the iViewGT a-Si EPID model. The predicted EPID images were compared to the measured images obtained from experiment. The measured EPID images were obtained by delivering a photon beam from an Elekta Synergy linac to the Elekta iViewGT a-Si EPID. The a-Si EPID was used with no additional build-up material. Frame-averaged EPID images were acquired and processed using in-house software. The agreement between the predicted and measured images was analyzed using the gamma analysis technique with acceptance criteria of 3%/3 mm. The results show that the predicted EPID images for four clinical IMRT treatment plans are in good agreement with the measured EPID signal. Three prostate IMRT plans were found to have an average gamma pass rate of more than 95.0% and a spinal IMRT plan had an average gamma pass rate of 94.3%. During the period of performing this work a routine MLC calibration was performed and one of the IMRT treatments was re-measured with the EPID. A change in the gamma pass rate for one field was observed. This was the motivation for a series of experiments to investigate the sensitivity of the method by introducing delivery errors, in MLC position and dosimetric overshoot, into the simulated EPID images. The method was found to be sensitive to 1 mm leaf position errors and 10% overshoot errors.
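
    The gamma comparison used to score the predicted against measured EPID images can be sketched with a brute-force global 3%/3 mm implementation, as below. The dose planes are synthetic and the search is deliberately simple; clinical gamma tools add interpolation and faster search strategies.

    ```python
    # Hedged sketch: brute-force global gamma analysis (3%/3 mm) on synthetic 2-D dose planes.
    import numpy as np

    def gamma_pass_rate(ref, eval_, spacing_mm=1.0, dd=0.03, dta_mm=3.0, cutoff=0.10):
        ny, nx = ref.shape
        yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
        dmax = ref.max()
        passed, evaluated = 0, 0
        search = int(np.ceil(dta_mm / spacing_mm)) + 1      # search window half-width [pixels]
        for i in range(ny):
            for j in range(nx):
                if ref[i, j] < cutoff * dmax:               # ignore the low-dose region
                    continue
                evaluated += 1
                i0, i1 = max(0, i - search), min(ny, i + search + 1)
                j0, j1 = max(0, j - search), min(nx, j + search + 1)
                r2 = ((yy[i0:i1, j0:j1] - i) ** 2 + (xx[i0:i1, j0:j1] - j) ** 2) * spacing_mm**2
                d2 = (eval_[i0:i1, j0:j1] - ref[i, j]) ** 2
                gamma2 = r2 / dta_mm**2 + d2 / (dd * dmax) ** 2   # global dose normalization
                if gamma2.min() <= 1.0:
                    passed += 1
        return passed / evaluated

    # Synthetic test: a Gaussian "field" and a 1 mm-shifted, 2%-scaled copy.
    y, x = np.mgrid[0:101, 0:101]
    ref = 100.0 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / (2 * 15.0**2))
    evl = 1.02 * 100.0 * np.exp(-((x - 51) ** 2 + (y - 50) ** 2) / (2 * 15.0**2))
    print(f"gamma pass rate (3%/3 mm): {100 * gamma_pass_rate(ref, evl):.1f}%")
    ```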

  16. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.
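
    The reference solutions mentioned above come from direct perturbations with central differences. A generic version of that cross-check, with a hypothetical smooth stand-in for the k-eigenvalue and made-up Monte Carlo statistics (this is a verification pattern, not the adjoint or differential operator method itself), might look like:

      import numpy as np

      def sensitivity_check(mc_sens, mc_sigma, f, x0, rel_step=0.01):
          """Compare a Monte Carlo sensitivity estimate (value, 1-sigma) of d f / d x
          against a central-difference direct perturbation, including a 3-sigma test."""
          h = rel_step * x0
          fd = (f(x0 + h) - f(x0 - h)) / (2 * h)
          rel_diff = abs(mc_sens - fd) / abs(fd)
          within_3sigma = abs(mc_sens - fd) <= 3 * mc_sigma
          return fd, rel_diff, within_3sigma

      # hypothetical toy model: k-eigenvalue proxy as a smooth function of a radius r
      keff = lambda r: 1.0 - np.exp(-r / 10.0)
      print(sensitivity_check(mc_sens=0.0605, mc_sigma=0.001, f=keff, x0=5.0))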

  17. Monte Carlo calculations of LR115 detector response to 222Rn in the presence of 220Rn.

    PubMed

    Nikezić, D; Yu, K N

    2000-04-01

    The sensitivities (in m) of bare LR115 detectors and detectors in diffusion chambers to 222Rn and 220Rn chains are calculated by the Monte Carlo method. The partial sensitivities of bare detectors to the 222Rn chain are larger than those to the 220Rn chain, which is due to the higher energies of alpha particles in the 220Rn chain and the upper energy limit for detection for the LR115 detector. However, the total sensitivities are approximately equal because 220Rn is always in equilibrium with its first progeny, which is not the case for the 222Rn chain. The total sensitivity of bare LR115 detectors to 222Rn chain depends linearly on the equilibrium factor. The overestimation in 222Rn measurements with bare detectors caused by 220Rn in air can reach 10% in normal environmental conditions. An analytical relationship between the equilibrium factor and the ratio between track densities on the bare detector and the detector enclosed in chamber is given in the last part of the paper. This ratio is also affected by 220Rn, which can disturb the determination of the equilibrium factor.

  18. Design of a transportable high efficiency fast neutron spectrometer

    DOE PAGES

    Roecker, C.; Bernstein, A.; Bowden, N. S.; ...

    2016-04-12

    A transportable fast neutron detection system has been designed and constructed for measuring neutron energy spectra and flux ranging from tens to hundreds of MeV. The transportability of the spectrometer reduces the detector-related systematic bias between different neutron spectra and flux measurements, which allows for the comparison of measurements above or below ground. The spectrometer will measure neutron fluxes that are of prohibitively low intensity compared to the site-specific background rates targeted by other transportable fast neutron detection systems. To measure low intensity high-energy neutron fluxes, a conventional capture-gating technique is used for measuring neutron energies above 20 MeV and a novel multiplicity technique is used for measuring neutron energies above 100 MeV. The spectrometer is composed of two Gd-containing plastic scintillator detectors arranged around a lead spallation target. To calibrate and characterize the position dependent response of the spectrometer, a Monte Carlo model was developed and used in conjunction with experimental data from gamma ray sources. Multiplicity event identification algorithms were developed and used with a Cf-252 neutron multiplicity source to validate the Monte Carlo model's Gd concentration and secondary neutron capture efficiency. The validated Monte Carlo model was used to predict an effective area for the multiplicity and capture-gating analyses. For incident neutron energies between 100 MeV and 1000 MeV with an isotropic angular distribution, the multiplicity analysis predicted an effective area of 500 cm² rising to 5000 cm². For neutron energies above 20 MeV, the capture-gating analysis predicted an effective area between 1800 cm² and 2500 cm². As a result, the multiplicity mode was found to be sensitive to the incident neutron angular distribution.
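
    The quoted effective areas follow from the simulated detection efficiency and the area over which source neutrons are launched. A minimal sketch of that conversion is shown below; the counts and source plane are hypothetical, chosen only so the result lands near the 500 cm² figure quoted above.

      import numpy as np

      def effective_area(n_detected, n_source, source_area_cm2):
          """Effective area from a Monte Carlo run in which n_source neutrons are launched
          uniformly over a plane of area source_area_cm2 and n_detected of them satisfy the
          multiplicity (or capture-gating) selection; returns the area and its 1-sigma error."""
          p = n_detected / n_source
          return p * source_area_cm2, source_area_cm2 * np.sqrt(p * (1 - p) / n_source)

      # hypothetical counts: 1e6 simulated neutrons over a 1 m^2 plane, 50 000 accepted events
      print(effective_area(50_000, 1_000_000, 10_000.0))   # ~ (500 cm^2, ~2 cm^2)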

  20. School-Based Influenza Vaccination: Health and Economic Impact of Maine's 2009 Influenza Vaccination Program.

    PubMed

    Basurto-Dávila, Ricardo; Meltzer, Martin I; Mills, Dora A; Beeler Asay, Garrett R; Cho, Bo-Hyun; Graitcer, Samuel B; Dube, Nancy L; Thompson, Mark G; Patel, Suchita A; Peasah, Samuel K; Ferdinands, Jill M; Gargiullo, Paul; Messonnier, Mark; Shay, David K

    2017-12-01

    To estimate the societal economic and health impacts of Maine's school-based influenza vaccination (SIV) program during the 2009 A(H1N1) influenza pandemic. Primary and secondary data covering the 2008-09 and 2009-10 influenza seasons. We estimated weekly monovalent influenza vaccine uptake in Maine and 15 other states, using difference-in-difference-in-differences analysis to assess the program's impact on immunization among six age groups. We also developed a health and economic Markov microsimulation model and conducted Monte Carlo sensitivity analysis. We used national survey data to estimate the impact of the SIV program on vaccine coverage. We used primary data and published studies to develop the microsimulation model. The program was associated with higher immunization among children and lower immunization among adults aged 18-49 years and 65 and older. The program prevented 4,600 influenza infections and generated $4.9 million in net economic benefits. Cost savings from lower adult vaccination accounted for 54 percent of the economic gain. Economic benefits were positive in 98 percent of Monte Carlo simulations. SIV may be a cost-beneficial approach to increase immunization during pandemics, but programs should be designed to prevent lower immunization among nontargeted groups. © Health Research and Educational Trust.
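
    The probabilistic part of such an analysis amounts to redrawing the uncertain inputs many times and recording how often the net benefit stays positive. The sketch below illustrates the idea only; apart from the headline figures quoted above, every distribution and dollar amount is a hypothetical placeholder, not a value from the study.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000                                    # Monte Carlo draws

      # hypothetical input distributions (illustrative placeholders)
      infections_averted = rng.normal(4_600, 800, n)
      cost_per_infection = rng.gamma(shape=4.0, scale=250.0, size=n)   # $ per case averted
      program_cost       = rng.normal(2.3e6, 3.0e5, n)
      adult_vacc_savings = rng.normal(2.6e6, 4.0e5, n)                 # fewer adult doses purchased

      net_benefit = infections_averted * cost_per_infection + adult_vacc_savings - program_cost
      print("mean net benefit   : $%.1f million" % (net_benefit.mean() / 1e6))
      print("P(net benefit > 0) : %.0f%%" % (100 * (net_benefit > 0).mean()))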

  1. STEM Educators' Integration of Formative Assessment in Teaching and Lesson Design

    NASA Astrophysics Data System (ADS)

    Moreno, Kimberly A.

    Air-breathing hypersonic vehicles, when fully developed, will offer travel in the atmosphere at unprecedented speeds. Capturing their physical behavior by analytical/numerical models is still a major challenge, which continues to limit the development of controls technology for such vehicles. To study, in an exploratory manner, active control of air-breathing hypersonic vehicles, an analytical, simplified model of a generic hypersonic air-breathing vehicle in flight was developed by researchers at the Air Force Research Labs in Dayton, Ohio, along with control laws. Elevator deflection and fuel-to-air ratio were used as inputs. However, that model is very approximate, and the field of hypersonics still faces many unknowns. This thesis contributes to the study of control of air-breathing hypersonic vehicles in a number of ways. First, regarding control law synthesis, optimal gains are chosen for the previously developed control law alongside an alternate control law modified from existing literature by minimizing the Lyapunov function derivative using Monte Carlo simulation. This is followed by analysis of the robustness of the control laws in the face of system parametric uncertainties using Monte Carlo simulations. The resulting statistical distributions of the commanded response are analyzed, and linear regression is used to determine, via sensitivity analysis, which uncertain parameters have the largest impact on the desired outcome.
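
    The regression-based sensitivity analysis mentioned at the end can be illustrated with standardized regression coefficients computed from Monte Carlo samples. The parameter names, ranges and response function below are fictitious stand-ins for the vehicle model.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 2_000

      # hypothetical uncertain parameters (fractions of their nominal values)
      names = ["mass", "ref_area", "thrust_coeff", "elevator_eff"]
      X = rng.uniform(0.9, 1.1, size=(n, len(names)))

      # stand-in response: peak tracking error from a fictitious closed-loop simulation
      y = 2.0 * X[:, 0] - 1.2 * X[:, 2] + 0.3 * X[:, 3] + rng.normal(0, 0.05, n)

      # standardized regression coefficients (SRC) as sensitivity measures
      Xs = (X - X.mean(0)) / X.std(0)
      ys = (y - y.mean()) / y.std()
      beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
      for name, b in sorted(zip(names, beta), key=lambda t: -abs(t[1])):
          print(f"{name:>14s}: SRC = {b:+.2f}")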

  2. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    NASA Astrophysics Data System (ADS)

    Davis, A. D.

    2015-12-01

    The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity analysis to help answer this question, and make the computation of sensitivity indices computationally tractable using a combination of polynomial chaos and Monte Carlo techniques.
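
    The Bayesian machinery described above can be sketched with a one-parameter toy problem: a random-walk Metropolis sampler characterizes the posterior of a friction-like parameter, and the samples are pushed through a cheap predictive model to obtain a distribution over the quantity of interest. The forward model, prior, likelihood and QoI below are all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(7)

      obs = 1.3                                    # observed surface velocity (scaled units)
      forward = lambda c: 1.0 / c                  # toy forward model: velocity ~ 1/friction
      qoi     = lambda c: 100.0 * np.sqrt(c)       # toy predictive model: future grounded volume
      log_post = lambda c: (-0.5 * ((forward(c) - obs) / 0.1) ** 2    # Gaussian likelihood
                            - 0.5 * ((c - 1.0) / 0.5) ** 2) if c > 0 else -np.inf  # prior

      chain, c = [], 1.0
      for _ in range(20_000):
          prop = c + 0.1 * rng.normal()                      # random-walk proposal
          if np.log(rng.uniform()) < log_post(prop) - log_post(c):
              c = prop                                       # accept
          chain.append(c)

      samples = np.array(chain[5_000:])                      # discard burn-in
      vols = qoi(samples)
      print("posterior mean friction:", samples.mean())
      print("QoI 90%% interval: [%.1f, %.1f]" % tuple(np.percentile(vols, [5, 95])))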

  3. A conflict analysis of 4D descent strategies in a metered, multiple-arrival route environment

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Harris, C. S.

    1990-01-01

    A conflict analysis was performed on multiple arrival traffic at a typical metered airport. The Flow Management Evaluation Model (FMEM) was used to simulate arrival operations using Denver Stapleton's arrival route structure. Sensitivities of conflict performance to three different 4-D descent strategies (clean-idle Mach/Constant AirSpeed (CAS), constant descent angle Mach/CAS, and energy optimal) were examined for three traffic mixes represented by those found at Denver Stapleton, John F. Kennedy and typical en route metering (ERM) airports. The Monte Carlo technique was used to generate simulation entry point times. Analysis results indicate that the clean-idle descent strategy offers the best compromise in overall performance. Performance measures primarily include susceptibility to conflict and conflict severity. Fuel usage performance is extrapolated from previous descent strategy studies.

  4. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes; The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present. In the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.

  5. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.

    PubMed

    Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa

    2010-01-21

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
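
    The common random number idea is easy to demonstrate on a birth-death process simulated with Gillespie's algorithm: the nominal and perturbed runs share a random seed, so their noise largely cancels in the finite difference. This is a sketch of the CRN estimator only (not the CRP method and not the authors' implementation), with arbitrary rate constants.

      import numpy as np

      def ssa_final_count(k_prod, k_deg, t_end, seed):
          """Gillespie SSA for a birth-death process (0 -> X at rate k_prod, X -> 0 at rate
          k_deg*X); returns the copy number at t_end."""
          rng = np.random.default_rng(seed)
          t, x = 0.0, 0
          while True:
              a0 = k_prod + k_deg * x
              t += rng.exponential(1.0 / a0)
              if t > t_end:
                  return x
              x += 1 if rng.uniform() * a0 < k_prod else -1

      def crn_sensitivity(k_prod, k_deg, dk, t_end=10.0, n_runs=2000):
          """Finite-difference estimate of d<X(t_end)>/dk_prod using common random numbers:
          the nominal and perturbed runs share the same seed, so their noise is correlated."""
          diffs = [(ssa_final_count(k_prod + dk, k_deg, t_end, s)
                    - ssa_final_count(k_prod, k_deg, t_end, s)) / dk
                   for s in range(n_runs)]
          d = np.array(diffs)
          return d.mean(), d.std(ddof=1) / np.sqrt(n_runs)

      # for this linear model d<X(t)>/dk_prod = (1 - exp(-k_deg*t)) / k_deg, about 6.32 at t = 10
      print(crn_sensitivity(k_prod=5.0, k_deg=0.1, dk=0.5))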

  6. Ground-state properties of 4He and 16O extrapolated from lattice QCD with pionless EFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Contessi, L.; Lovato, A.; Pederiva, F.

    Here, we extend the prediction range of Pionless Effective Field Theory with an analysis of the ground state of 16O in leading order. To renormalize the theory, we use as input both experimental data and lattice QCD predictions of nuclear observables, which probe the sensitivity of nuclei to increased quark masses. The nuclear many-body Schrödinger equation is solved with the Auxiliary Field Diffusion Monte Carlo method. For the first time in a nuclear quantum Monte Carlo calculation, a linear optimization procedure, which allows us to devise an accurate trial wave function with a large number of variational parameters, is adopted. The method yields a binding energy of 4He which is in good agreement with experiment at physical pion mass and with lattice calculations at larger pion masses. At leading order we do not find any evidence of a 16O state which is stable against breakup into four 4He, although higher-order terms could bind 16O.

  7. Modeling leaching of viruses by the Monte Carlo method.

    PubMed

    Faulkner, Barton R; Lyon, William G; Khan, Faruque A; Chattopadhyay, Sandip

    2003-11-01

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone by applying the final value theorem of Laplace transformation to previously developed governing equations. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very small, but finite probabilities of failure for all homogeneous USDA classified soils to attenuate reovirus 3 by 99.99% in one-half meter of gravity drainage. The logarithm of saturated hydraulic conductivity and the water to air-water interface mass transfer coefficient affected virus fate and transport about 3 times more than any other parameter, including the logarithm of the inactivation rate of suspended viruses. Model results suggest extreme infiltration events may play a predominant role in leaching of viruses in soils, since such events could impact hydraulic conductivity. The air-water interface also appears to play a predominant role in virus transport and fate. Although predictive modeling may provide insight into actual attenuation of viruses, hydrogeologic sensitivity assessments for the unsaturated zone should include a sampling program.

  8. Inventory Uncertainty Quantification using TENDL Covariance Data in Fispact-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eastwood, J.W.; Morgan, J.G.; Sublet, J.-Ch., E-mail: jean-christophe.sublet@ccfe.ac.uk

    2015-01-15

    The new inventory code Fispact-II provides predictions of inventory, radiological quantities and their uncertainties using nuclear data covariance information. Central to the method is a novel fast pathways search algorithm using directed graphs. The pathways output provides (1) an aid to identifying important reactions, (2) fast estimates of uncertainties, (3) reduced models that retain important nuclides and reactions for use in the code's Monte Carlo sensitivity analysis module. Described are the methods that are being implemented for improving uncertainty predictions, quantification and propagation using the covariance data that the recent nuclear data libraries contain. In the TENDL library, above the upper energy of the resolved resonance range, a Monte Carlo method in which the covariance data come from uncertainties of the nuclear model calculations is used. The nuclear data files are read directly by FISPACT-II without any further intermediate processing. Variance and covariance data are processed and used by FISPACT-II to compute uncertainties in collapsed cross sections, and these are in turn used to predict uncertainties in inventories and all derived radiological data.
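
    The covariance-driven Monte Carlo propagation described above can be pictured with a tiny multigroup example: cross sections are sampled from a covariance matrix, collapsed with a flux spectrum, and pushed through a one-reaction inventory expression. The group structure, covariance values and fluence below are hypothetical and are not taken from TENDL or FISPACT-II internals.

      import numpy as np

      rng = np.random.default_rng(3)

      # hypothetical 3-group data: normalised flux, nominal cross section (barn), relative covariance
      flux  = np.array([0.5, 0.3, 0.2])
      sigma = np.array([1.20, 0.60, 0.10])
      rel_cov = np.array([[0.04, 0.01, 0.00],
                          [0.01, 0.09, 0.02],
                          [0.00, 0.02, 0.16]])
      cov = rel_cov * np.outer(sigma, sigma)

      # Monte Carlo propagation to the one-group (collapsed) cross section
      samples = rng.multivariate_normal(sigma, cov, size=50_000)
      collapsed = samples @ flux
      print("collapsed sigma = %.3f +/- %.3f barn" % (collapsed.mean(), collapsed.std()))

      # and on to a simple inventory: fraction of targets transmuted = 1 - exp(-sigma * fluence)
      phi_t = 0.3                                             # fluence in 1/barn (hypothetical)
      inventory = 1.0 - np.exp(-collapsed * phi_t)
      print("relative inventory uncertainty: %.1f%%" % (100 * inventory.std() / inventory.mean()))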

  9. Ground-state properties of 4He and 16O extrapolated from lattice QCD with pionless EFT

    DOE PAGES

    Contessi, L.; Lovato, A.; Pederiva, F.; ...

    2017-07-26

    Here, we extend the prediction range of Pionless Effective Field Theory with an analysis of the ground state of 16O in leading order. To renormalize the theory, we use as input both experimental data and lattice QCD predictions of nuclear observables, which probe the sensitivity of nuclei to increased quark masses. The nuclear many-body Schrödinger equation is solved with the Auxiliary Field Diffusion Monte Carlo method. For the first time in a nuclear quantum Monte Carlo calculation, a linear optimization procedure, which allows us to devise an accurate trial wave function with a large number of variational parameters, is adopted. The method yields a binding energy of 4He which is in good agreement with experiment at physical pion mass and with lattice calculations at larger pion masses. At leading order we do not find any evidence of a 16O state which is stable against breakup into four 4He, although higher-order terms could bind 16O.

  10. Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods

    DOE PAGES

    Hehr, Brian Douglas

    2014-11-25

    The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
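
    A kinetic Monte Carlo annealing calculation of this kind is built on the residence-time (Gillespie-type) algorithm with Arrhenius rates. The sketch below uses an invented three-reaction defect system with made-up barriers, prefactors and populations; it only illustrates the event-selection and clock-advance logic, not the defect physics of the actual code.

      import numpy as np

      rng = np.random.default_rng(11)
      kB, T = 8.617e-5, 300.0                       # Boltzmann constant (eV/K), temperature (K)

      pops = {"V": 1000, "I": 1000, "VO": 200}      # vacancies, interstitials, V-O pairs (hypothetical)
      reactions = [
          # (population change, barrier in eV, attempt frequency in 1/s) -- all invented
          ({"V": -1, "I": -1},   0.70, 1e13),       # V + I recombination
          ({"V": -1, "VO": +1},  0.90, 1e13),       # vacancy captured by oxygen
          ({"VO": -1, "V": +1},  1.20, 1e13),       # V-O pair dissociation
      ]

      def rate(change, barrier, nu):
          """Arrhenius rate times a simple mass-action factor over the consumed species."""
          r = nu * np.exp(-barrier / (kB * T))
          for sp, d in change.items():
              if d < 0:
                  r *= pops[sp]
          return r

      t = 0.0
      for _ in range(100_000):
          rates = np.array([rate(*rx) for rx in reactions])
          total = rates.sum()
          if total == 0:
              break
          t += rng.exponential(1.0 / total)                         # residence time
          change, _, _ = reactions[rng.choice(len(reactions), p=rates / total)]
          for sp, d in change.items():
              pops[sp] += d
      print("time %.3e s, populations %s" % (t, pops))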

  11. Direct simulation Monte Carlo method for the Uehling-Uhlenbeck-Boltzmann equation.

    PubMed

    Garcia, Alejandro L; Wagner, Wolfgang

    2003-11-01

    In this paper we describe a direct simulation Monte Carlo algorithm for the Uehling-Uhlenbeck-Boltzmann equation in terms of Markov processes. This provides a unifying framework for both the classical Boltzmann case as well as the Fermi-Dirac and Bose-Einstein cases. We establish the foundation of the algorithm by demonstrating its link to the kinetic equation. By numerical experiments we study its sensitivity to the number of simulation particles and to the discretization of the velocity space, when approximating the steady-state distribution.

  12. Modelling ecological and human exposure to POPs in Venice lagoon - Part II: Quantitative uncertainty and sensitivity analysis in coupled exposure models.

    PubMed

    Radomyski, Artur; Giubilato, Elisa; Ciffroy, Philippe; Critto, Andrea; Brochot, Céline; Marcomini, Antonio

    2016-11-01

    The study is focused on applying uncertainty and sensitivity analysis to support the application and evaluation of large exposure models where a significant number of parameters and complex exposure scenarios might be involved. The recently developed MERLIN-Expo exposure modelling tool was applied to probabilistically assess the ecological and human exposure to PCB 126 and 2,3,7,8-TCDD in the Venice lagoon (Italy). The 'Phytoplankton', 'Aquatic Invertebrate', 'Fish', 'Human intake' and PBPK models available in the MERLIN-Expo library were integrated to create a specific food web to dynamically simulate bioaccumulation in various aquatic species and in the human body over individual lifetimes from 1932 until 1998. MERLIN-Expo is a high tier exposure modelling tool allowing propagation of uncertainty in the model predictions through Monte Carlo simulation. Uncertainty in model output can be further apportioned between parameters by applying built-in sensitivity analysis tools. In this study, uncertainty has been extensively addressed in the distribution functions describing the input data, and its effect on model results has been examined by applying sensitivity analysis techniques (the screening Morris method, regression analysis, and the variance-based method EFAST). In the exposure scenario developed for the Lagoon of Venice, the concentrations of 2,3,7,8-TCDD and PCB 126 in human blood turned out to be mainly influenced by a combination of parameters (half-lives of the chemicals, body weight variability, lipid fraction, food assimilation efficiency), physiological processes (uptake/elimination rates), environmental exposure concentrations (sediment, water, food) and eating behaviours (amount of food eaten). In conclusion, this case study demonstrated the feasibility of successfully employing MERLIN-Expo in integrated, high tier exposure assessment. Copyright © 2016 Elsevier B.V. All rights reserved.
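
    Of the screening techniques listed above, the Morris elementary-effects method is the simplest to sketch from scratch. The implementation below uses one-at-a-time trajectories in the unit hypercube and a toy stand-in for the coupled exposure model; parameter names, ranges and the response function are fabricated for illustration.

      import numpy as np

      rng = np.random.default_rng(5)

      def morris_screening(model, lo, hi, n_traj=30, delta=0.25):
          """Elementary-effects (Morris) screening on [lo, hi]^k using one-at-a-time trajectories."""
          k = len(lo)
          effects = [[] for _ in range(k)]
          for _ in range(n_traj):
              x = rng.uniform(0, 1 - delta, k)          # random base point in the unit cube
              y0 = model(lo + x * (hi - lo))
              for i in rng.permutation(k):              # perturb one factor at a time
                  x2 = x.copy(); x2[i] += delta
                  y1 = model(lo + x2 * (hi - lo))
                  effects[i].append((y1 - y0) / delta)
                  x, y0 = x2, y1
          mu_star = [np.mean(np.abs(e)) for e in effects]
          sigma = [np.std(e) for e in effects]
          return mu_star, sigma

      # toy stand-in for the exposure model: blood concentration vs three parameters
      names = ["half_life", "body_weight", "intake_rate"]
      f = lambda p: p[0] * p[2] / p[1] + 0.1 * p[0] ** 2
      mu, sg = morris_screening(f, lo=np.array([1., 50., 0.5]), hi=np.array([10., 100., 2.0]))
      for n, m, s in zip(names, mu, sg):
          print(f"{n:>12s}: mu* = {m:6.2f}, sigma = {s:6.2f}")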

  13. Levonorgestrel Intrauterine Device as an Endometrial Cancer Prevention Strategy in Obese Women: A Cost-Effectiveness Analysis.

    PubMed

    Dottino, Joseph A; Hasselblad, Vic; Secord, Angeles Alvarez; Myers, Evan R; Chino, Junzo; Havrilesky, Laura J

    2016-10-01

    To estimate the cost-effectiveness of the levonorgestrel intrauterine device (IUD) as an endometrial cancer prevention strategy in obese women. A modified Markov model was used to compare IUD placement at age 50 with usual care among women with a body mass index (BMI, kg/m²) of 40 or greater or of 30 or greater. The effects of obesity on incidence and survival were incorporated. The IUD was assumed to confer a 50% reduction in cancer incidence over 5 years. Costs of IUD and cancer care were included. Clinical outcomes were cancer diagnosis and deaths from cancer. Incremental cost-effectiveness ratios were calculated in 2015 U.S. dollars per year of life saved. One-way and two-way sensitivity analyses and Monte Carlo probabilistic analyses were performed. For a 50-year-old woman with BMI 40 or greater, the IUD strategy is costlier and more effective than usual care, with an incremental cost-effectiveness ratio of $74,707 per year of life saved. If the protective effect of the levonorgestrel IUD is assumed to last 10 years, the incremental cost-effectiveness ratio decreases to $37,858 per year of life saved. In sensitivity analysis, a levonorgestrel IUD that reduces cancer incidence by at least 68% in women with BMIs of 40 or greater, or that costs less than $500, is potentially cost-effective. For BMI 30 or greater, the incremental cost-effectiveness ratio of the IUD strategy is $137,223 per year of life saved compared with usual care. In Monte Carlo analysis, IUD placement for BMI 40 or greater is cost-effective in 50% of simulations at a willingness-to-pay threshold of $100,000 per year of life saved. The levonorgestrel IUD is a potentially cost-effective strategy for prevention of deaths from endometrial cancer in obese women.
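
    The headline numbers above combine an incremental cost-effectiveness ratio with probabilistic (Monte Carlo) acceptance at a willingness-to-pay threshold. The sketch below shows how those two quantities are computed from simulated strategy-level costs and life-years; the distributions are hypothetical placeholders tuned only to give an ICER of roughly the same magnitude, not the study's Markov model outputs.

      import numpy as np

      rng = np.random.default_rng(9)
      n = 10_000
      wtp = 100_000                                  # willingness to pay, $ per year of life saved

      # hypothetical distributions around strategy-level outputs of a Markov cohort model
      cost_iud,  cost_usual = rng.normal(7_200, 800, n),  rng.normal(3_500, 600, n)
      ly_iud,    ly_usual   = rng.normal(23.05, 0.30, n), rng.normal(23.00, 0.30, n)

      d_cost, d_ly = cost_iud - cost_usual, ly_iud - ly_usual
      icer = d_cost.mean() / d_ly.mean()                  # ratio of mean increments
      prob_ce = np.mean(d_ly * wtp - d_cost > 0)          # net-monetary-benefit criterion
      print("ICER: $%.0f per year of life saved" % icer)
      print("P(cost-effective at $100,000/YLS): %.0f%%" % (100 * prob_ce))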

  14. Design optimization and probabilistic analysis of a hydrodynamic journal bearing

    NASA Technical Reports Server (NTRS)

    Liniecki, Alexander G.

    1990-01-01

    A nonlinear constrained optimization of a hydrodynamic bearing was performed yielding three main variables: radial clearance, bearing length to diameter ratio, and lubricating oil viscosity. As an objective function, a combined model of temperature rise and oil supply was adopted. The optimized model of the bearing was simulated for a population of 1000 cases using the Monte Carlo statistical method. It appeared that the so-called 'optimal solution' generated more than 50 percent failed bearings, because their minimum oil film thickness violated the stipulated minimum constraint value. As a remedy, a change of oil viscosity is suggested after the sensitivities of several variables have been investigated.

  15. Pricing geometric Asian rainbow options under fractional Brownian motion

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Zhang, Rong; Yang, Lin; Su, Yang; Ma, Feng

    2018-03-01

    In this paper, we explore the pricing of the assets of Asian rainbow options under the condition that the assets have self-similar and long-range dependence characteristics. Based on the principle of no arbitrage, stochastic differential equation, and partial differential equation, we obtain the pricing formula for two-asset rainbow options under fractional Brownian motion. Next, our Monte Carlo simulation experiments show that the derived pricing formula is accurate and effective. Finally, our sensitivity analysis of the influence of important parameters, such as the risk-free rate, Hurst exponent, and correlation coefficient, on the prices of Asian rainbow options further illustrate the rationality of our pricing model.
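
    A Monte Carlo check of such a pricing formula typically simulates correlated fractional Brownian motions exactly through a Cholesky factor of their joint covariance and averages the discounted payoff. The sketch below prices a geometric-average Asian call on the maximum of two assets; the lognormal parameterization with drift r*t - 0.5*sigma^2*t^(2H), the shared Hurst exponent, the monitoring grid and all market parameters are illustrative assumptions rather than the paper's derivation.

      import numpy as np

      rng = np.random.default_rng(2024)

      def fbm_cov(times, H):
          s, t = np.meshgrid(times, times, indexing="ij")
          return 0.5 * (s**(2*H) + t**(2*H) - np.abs(s - t)**(2*H))

      def asian_rainbow_mc(S0, sigma, rho, r, T, H, n_steps=32, n_paths=20_000, K=100.0):
          """Monte Carlo price of a geometric-average Asian call on the maximum of two assets,
          both driven by correlated fractional Brownian motions with the same Hurst exponent."""
          times = np.linspace(T / n_steps, T, n_steps)
          C1 = fbm_cov(times, H)
          corr = np.array([[1.0, rho], [rho, 1.0]])
          L = np.linalg.cholesky(np.kron(corr, C1))           # joint covariance of both fBms
          z = rng.standard_normal((n_paths, 2 * n_steps))
          B = z @ L.T                                         # stacked fBm paths [asset1 | asset2]
          averages = []
          for a in range(2):
              Ba = B[:, a*n_steps:(a+1)*n_steps]
              drift = r * times - 0.5 * sigma[a]**2 * times**(2*H)
              S = S0[a] * np.exp(drift + sigma[a] * Ba)
              averages.append(np.exp(np.log(S).mean(axis=1)))  # geometric average along the path
          G = np.maximum(averages[0], averages[1])
          payoff = np.exp(-r * T) * np.maximum(G - K, 0.0)
          return payoff.mean(), payoff.std() / np.sqrt(n_paths)

      print(asian_rainbow_mc(S0=[100., 100.], sigma=[0.2, 0.3], rho=0.5, r=0.03, T=1.0, H=0.7))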

  16. Tooth enamel dosimetric response to 2.8 MeV neutrons

    NASA Astrophysics Data System (ADS)

    Fattibene, P.; Angelone, M.; Pillon, M.; De Coste, V.

    2003-03-01

    Tooth enamel dosimetry, based on electron paramagnetic resonance (EPR) spectroscopy, is recognized as a powerful method for individual retrospective dose assessment. The method is mainly used for individual dose reconstruction in epidemiological studies aimed at radiation risk analysis. The study of the sensitivity of tooth enamel as a function of radiation quality is one of the main goals of research in this field. In the present work, the tooth enamel dose response in a monoenergetic neutron flux of 2.8 MeV, generated by the D-D reaction, was studied for in-air and in-phantom irradiations of enamel samples and of whole teeth. EPR measurements were complemented by Monte Carlo calculations and by gamma dose discrimination obtained with thermoluminescent and Geiger-Müller tube measurements. The sensitivity to 2.8 MeV neutrons relative to 60Co was 0.33±0.08.

  17. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Ellis; Derek Gaston; Benoit Forget

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.

  18. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
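
    The Latin Hypercube Monte Carlo option mentioned above stratifies each parameter range so that every stratum is sampled exactly once. A minimal sketch follows, with invented parameter ranges and an arbitrary stand-in for the forward simulator; it does not use the iTOUGH2/PEST interface.

      import numpy as np

      def latin_hypercube(n, bounds, rng):
          """Latin hypercube sample on a box: each parameter range is split into n
          equal-probability strata and each stratum is hit exactly once."""
          k = len(bounds)
          sample = np.empty((n, k))
          for j, (lo, hi) in enumerate(bounds):
              strata = (rng.permutation(n) + rng.uniform(size=n)) / n   # one draw per stratum
              sample[:, j] = lo + strata * (hi - lo)
          return sample

      rng = np.random.default_rng(17)
      # hypothetical inputs: log10-permeability and porosity
      X = latin_hypercube(100, [(-14.0, -12.0), (0.05, 0.25)], rng)

      # propagate through an arbitrary stand-in forward model and summarise the output spread
      breakthrough_time = 10 ** (-X[:, 0] - 12.0) * (1.0 + 5.0 * X[:, 1])
      print("mean %.2f, 95%% interval [%.2f, %.2f]" %
            (breakthrough_time.mean(), *np.percentile(breakthrough_time, [2.5, 97.5])))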

  19. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    ERIC Educational Resources Information Center

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  20. Discrete ordinates-Monte Carlo coupling: A comparison of techniques in NERVA radiation analysis

    NASA Technical Reports Server (NTRS)

    Lindstrom, D. G.; Normand, E.; Wilcox, A. D.

    1972-01-01

    In the radiation analysis of the NERVA nuclear rocket system, two-dimensional discrete ordinates calculations are sufficient to provide detail in the pressure vessel and reactor assembly. Other parts of the system, however, require three-dimensional Monte Carlo analyses. To use these two methods in a single analysis, a means of coupling was developed whereby the results of a discrete ordinates calculation can be used to produce source data for a Monte Carlo calculation. Several techniques for producing source detail were investigated. Results of calculations on the NERVA system are compared and limitations and advantages of the coupling techniques discussed.

  1. Three-Dimensional Finite Element Ablative Thermal Response and Thermostructural Design of Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Braun, Robert D.

    2011-01-01

    A finite element ablation and thermal response program is presented for simulation of three-dimensional transient thermostructural analysis. The three-dimensional governing differential equations and finite element formulation are summarized. A novel probabilistic design methodology for thermal protection systems is presented. The design methodology is an eight-step process beginning with a parameter sensitivity study and followed by a deterministic analysis whereby an optimum design can be determined. The design process concludes with a Monte Carlo simulation where the probabilities of exceeding design specifications are estimated. The design methodology is demonstrated by applying it to the carbon phenolic compression pads of the Crew Exploration Vehicle. The maximum allowed values of bondline temperature and tensile stress are used as the design specifications in this study.

  2. On the robustness of a Bayes estimate. [in reliability theory

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.
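
    The Monte Carlo design of such a robustness study can be sketched with an exponential failure model (a Weibull with unit shape): the Bayes estimator always uses the assigned inverted gamma prior, while the "true" parameter is drawn from priors of varying shape, and the simulated mean squared errors are compared with those of the unbiased estimator. All prior settings, distributions and sample sizes below are illustrative choices, not the paper's.

      import numpy as np

      rng = np.random.default_rng(13)
      a, b, n, reps = 3.0, 4.0, 10, 20_000       # assigned InvGamma(a, b) prior (mean 2), sample size

      def mse(draw_theta):
          """Monte Carlo MSE of the Bayes (posterior-mean) and unbiased estimators of the mean
          life of an exponential failure model, with the true parameter drawn from draw_theta()."""
          err_b, err_u = [], []
          for _ in range(reps):
              theta = draw_theta()
              t = rng.exponential(theta, n)
              bayes = (b + t.sum()) / (a + n - 1)   # posterior mean under the assigned prior
              mvue = t.mean()
              err_b.append((bayes - theta) ** 2)
              err_u.append((mvue - theta) ** 2)
          return np.mean(err_b), np.mean(err_u)

      priors = {
          "inverted gamma (assigned)": lambda: b / rng.gamma(a),            # InvGamma(a, b) draw
          "lognormal, same mean":      lambda: rng.lognormal(np.log(2.0) - 0.125, 0.5),
          "fixed theta = 2":           lambda: 2.0,
      }
      for name, draw in priors.items():
          print("%-28s Bayes MSE %.3f | MVUE MSE %.3f" % (name, *mse(draw)))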

  3. EMCCD calibration for astronomical imaging: Wide FastCam at the Telescopio Carlos Sánchez

    NASA Astrophysics Data System (ADS)

    Velasco, S.; Oscoz, A.; López, R. L.; Puga, M.; Pérez-Garrido, A.; Pallé, E.; Ricci, D.; Ayuso, I.; Hernández-Sánchez, M.; Vázquez-Martín, S.; Protasio, C.; Béjar, V.; Truant, N.

    2017-03-01

    The evident benefits of Electron Multiplying CCDs (EMCCDs) (speed, high sensitivity, low noise and their capability of detecting single photon events whilst maintaining high quantum efficiency) are bringing these kinds of detectors to many state-of-the-art astronomical instruments (Velasco et al. 2016; Oscoz et al. 2008). EMCCDs are the perfect answer to the need for high sensitivity levels, as they are not limited by the readout noise of the output amplifier, while conventional CCDs are, even when operated at high readout frame rates. Here we present a quantitative on-sky method to calibrate EMCCD detectors dedicated to astronomical imaging, developed during the commissioning process (Velasco et al. 2016) and first observations (Ricci et al. 2016, in prep.) with Wide FastCam (Marga et al. 2014) at the Telescopio Carlos Sánchez (TCS) in the Observatorio del Teide.

  4. Structure sensitivity in oxide catalysis: First-principles kinetic Monte Carlo simulations for CO oxidation at RuO2(111)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Tongyu; Reuter, Karsten, E-mail: karsten.reuter@ch.tum.de; SUNCAT Center for Interface Science and Catalysis, SLAC National Accelerator Laboratory and Stanford University, 443 Via Ortega, Stanford, California 94035-4300

    2015-11-28

    We present a density-functional theory based kinetic Monte Carlo study of CO oxidation at the (111) facet of RuO2. We compare the detailed insight into elementary processes, steady-state surface coverages, and catalytic activity to equivalent published simulation data for the frequently studied RuO2(110) facet. Qualitative differences are identified in virtually every aspect ranging from binding energetics over lateral interactions to the interplay of elementary processes at the different active sites. Nevertheless, particularly at technologically relevant elevated temperatures, near-ambient pressures and near-stoichiometric feeds both facets exhibit almost identical catalytic activity. These findings challenge the traditional definition of structure sensitivity based on macroscopically observable turnover frequencies and prompt scrutiny of the applicability of structure sensitivity classifications developed for metals to oxide catalysis.

  5. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed is 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
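
    For the quantitative end of that comparison, Sobol' first-order and total indices can be estimated with the standard pick-freeze (Saltelli/Jansen) estimators. The sketch below applies them to an Ishigami-style toy function standing in for the hydrological model; the sample size and test function are illustrative choices, not PSUADE output.

      import numpy as np

      rng = np.random.default_rng(21)

      def sobol_indices(model, k, n=20_000):
          """First-order and total Sobol' indices via pick-freeze (Saltelli/Jansen) estimators."""
          A = rng.uniform(size=(n, k))
          B = rng.uniform(size=(n, k))
          yA, yB = model(A), model(B)
          var = np.var(np.concatenate([yA, yB]))
          S1, ST = np.empty(k), np.empty(k)
          for i in range(k):
              ABi = A.copy(); ABi[:, i] = B[:, i]      # replace only factor i with the B sample
              yABi = model(ABi)
              S1[i] = np.mean(yB * (yABi - yA)) / var          # first-order (Saltelli 2010)
              ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var    # total effect (Jansen)
          return S1, ST

      # Ishigami-like toy function in place of the hydrological model (factors scaled to [0, 1])
      def model(X):
          x = -np.pi + 2 * np.pi * X
          return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

      S1, ST = sobol_indices(model, k=3)
      print("first-order:", S1.round(2), " total:", ST.round(2))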

  6. Simulation of Satellite, Airborne and Terrestrial LiDAR with DART (I):Waveform Simulation with Quasi-Monte Carlo Ray Tracing

    NASA Technical Reports Server (NTRS)

    Gastellu-Etchegorry, Jean-Philippe; Yin, Tiangang; Lauret, Nicolas; Grau, Eloi; Rubio, Jeremy; Cook, Bruce D.; Morton, Douglas C.; Sun, Guoqing

    2016-01-01

    Light Detection And Ranging (LiDAR) provides unique data on the 3-D structure of atmosphere constituents and the Earth's surface. Simulating LiDAR returns for different laser technologies and Earth scenes is fundamental for evaluating and interpreting signal and noise in LiDAR data. Different types of models are capable of simulating LiDAR waveforms of Earth surfaces. Semi-empirical and geometric models can be imprecise because they rely on simplified simulations of Earth surfaces and light interaction mechanisms. On the other hand, Monte Carlo ray tracing (MCRT) models are potentially accurate but require long computational time. Here, we present a new LiDAR waveform simulation tool that is based on the introduction of a quasi-Monte Carlo ray tracing approach in the Discrete Anisotropic Radiative Transfer (DART) model. Two new approaches, the so-called "box method" and "Ray Carlo method", are implemented to provide robust and accurate simulations of LiDAR waveforms for any landscape, atmosphere and LiDAR sensor configuration (view direction, footprint size, pulse characteristics, etc.). The box method accelerates the selection of the scattering direction of a photon in the presence of scatterers with non-invertible phase function. The Ray Carlo method brings traditional ray-tracking into MCRT simulation, which makes computational time independent of LiDAR field of view (FOV) and reception solid angle. Both methods are fast enough for simulating multi-pulse acquisition. Sensitivity studies with various landscapes and atmosphere constituents are presented, and the simulated LiDAR signals compare favorably with their associated reflectance images and Laser Vegetation Imaging Sensor (LVIS) waveforms. The LiDAR module is fully integrated into DART, enabling more detailed simulations of LiDAR sensitivity to specific scene elements (e.g., atmospheric aerosols, leaf area, branches, or topography) and sensor configuration for airborne or satellite LiDAR sensors.

  7. Structural development and web service based sensitivity analysis of the Biome-BGC MuSo model

    NASA Astrophysics Data System (ADS)

    Hidy, Dóra; Balogh, János; Churkina, Galina; Haszpra, László; Horváth, Ferenc; Ittzés, Péter; Ittzés, Dóra; Ma, Shaoxiu; Nagy, Zoltán; Pintér, Krisztina; Barcza, Zoltán

    2014-05-01

    Studying the greenhouse gas exchange, mainly the carbon dioxide sink and source character of ecosystems, is still a highly relevant research topic in biogeochemistry. During the past few years research has focused on managed ecosystems, because human intervention plays an important role in the formation of the land surface through agricultural management, land use change, and other practices. In spite of considerable developments, current biogeochemical models still have difficulty adequately quantifying the greenhouse gas exchange processes of managed ecosystems. Therefore, it is an important task to develop and test process-based biogeochemical models. Biome-BGC is a widely used, popular biogeochemical model that simulates the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems. Biome-BGC was originally developed by the Numerical Terradynamic Simulation Group (NTSG) of the University of Montana (http://www.ntsg.umt.edu/project/biome-bgc), and several other researchers have used and modified it in the past. Our research group extended Biome-BGC version 4.1.1 to substantially improve the ability of the model to simulate the carbon and water cycle in real managed ecosystems. The modifications included structural improvements of the model (e.g., implementation of a multilayer soil module and drought-related plant senescence; improved model phenology). Besides these improvements, management modules and annually varying options were introduced and implemented (mowing, grazing, planting, harvest, ploughing, application of fertilizers, forest thinning). Dynamic (annually varying) whole-plant mortality was also enabled in the model to support more realistic simulation of forest stand development and natural disturbances. In the most recent model version separate pools have been defined for fruit. The model version which contains every former and new development is referred to as Biome-BGC MuSo (Biome-BGC with multi-soil layer). Within the frame of the BioVeL project (http://www.biovel.eu), an open source and domain independent scientific workflow management system (http://www.taverna.org.uk) is used to support 'in silico' experimentation and easy applicability of different models including Biome-BGC MuSo. Workflows can be built upon functionally linked sets of web services such as retrieval of meteorological datasets and other parameters; preparation of single-run or spatial-run model simulations; desktop grid technology based Monte Carlo experiments with parallel processing; and model sensitivity analysis. The newly developed, Monte Carlo experiment based sensitivity analysis is described in this study, and results are presented on differences in sensitivity between the original and the developed Biome-BGC model.

  8. Adjoint-Based Implicit Uncertainty Analysis for Figures of Merit in a Laser Inertial Fusion Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifried, J E; Fratoni, M; Kramer, K J

    A primary purpose of computational models is to inform design decisions and, in order to make those decisions reliably, the confidence in the results of such models must be estimated. Monte Carlo neutron transport models are common tools for reactor designers. These types of models contain several sources of uncertainty that propagate onto the model predictions. Two uncertainties worthy of note are (1) experimental and evaluation uncertainties of nuclear data that inform all neutron transport models and (2) statistical counting precision, which all results of Monte Carlo codes contain. Adjoint-based implicit uncertainty analyses allow for the consideration of any number of uncertain input quantities and their effects upon the confidence of figures of merit with only a handful of forward and adjoint transport calculations. When considering a rich set of uncertain inputs, adjoint-based methods remain hundreds of times more computationally efficient than direct Monte Carlo methods. The LIFE (Laser Inertial Fusion Energy) engine is a concept being developed at Lawrence Livermore National Laboratory. Various options exist for the LIFE blanket, depending on the mission of the design. The depleted uranium hybrid LIFE blanket design strives to close the fission fuel cycle without enrichment or reprocessing, while simultaneously achieving high discharge burnups with reduced proliferation concerns. Neutron transport results that are central to the operation of the design are tritium production for fusion fuel, fission of fissile isotopes for energy multiplication, and production of fissile isotopes for sustained power. In previous work, explicit cross-sectional uncertainty analyses were performed for reaction rates related to the figures of merit for the depleted uranium hybrid LIFE blanket. Counting precision was also quantified for both the figures of merit themselves and the cross-sectional uncertainty estimates to gauge the validity of the analysis. All cross-sectional uncertainties were small (0.1-0.8%), bounded counting uncertainties, and were precise with regard to counting precision. Adjoint/importance distributions were generated for the same reaction rates. The current work leverages those adjoint distributions to transition from explicit sensitivities, in which the neutron flux is constrained, to implicit sensitivities, in which the neutron flux responds to input perturbations. This treatment vastly expands the set of data that contribute to uncertainties and produces larger, more physically accurate uncertainty estimates.
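
    Once adjoint-based sensitivity coefficients and a nuclear data covariance matrix are in hand, the uncertainty on a figure of merit follows from the usual sandwich rule. The three-group numbers below are hypothetical placeholders, shown only to make the formula concrete.

      import numpy as np

      # sandwich rule: relative variance of a response R given relative sensitivity coefficients S
      # (e.g. from adjoint calculations) and a relative covariance matrix C of the nuclear data;
      # all values are hypothetical placeholders for a three-group example
      S = np.array([0.8, -0.3, 0.15])                  # (dR/R) / (dsigma/sigma) per group
      C = np.array([[4.0e-5, 1.0e-5, 0.0],
                    [1.0e-5, 9.0e-5, 2.0e-5],
                    [0.0,    2.0e-5, 1.6e-4]])
      rel_var = S @ C @ S
      print("relative uncertainty on R: %.2f%%" % (100 * np.sqrt(rel_var)))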

  9. Monte Carlo simulations versus experimental measurements in a small animal PET system. A comparison in the NEMA NU 4-2008 framework

    NASA Astrophysics Data System (ADS)

    Popota, F. D.; Aguiar, P.; España, S.; Lois, C.; Udias, J. M.; Ros, D.; Pavia, J.; Gispert, J. D.

    2015-01-01

    In this work a comparison between experimental and simulated data using GATE and PeneloPET Monte Carlo simulation packages is presented. All simulated setups, as well as the experimental measurements, followed exactly the guidelines of the NEMA NU 4-2008 standards using the microPET R4 scanner. The comparison was focused on spatial resolution, sensitivity, scatter fraction and counting rates performance. Both GATE and PeneloPET showed reasonable agreement for the spatial resolution when compared to experimental measurements, although they lead to slight underestimations for the points close to the edge. High accuracy was obtained between experiments and simulations of the system’s sensitivity and scatter fraction for an energy window of 350-650 keV, as well as for the counting rate simulations. The latter was the most complicated test to perform since each code demands different specifications for the characterization of the system’s dead time. Although simulated and experimental results were in excellent agreement for both simulation codes, PeneloPET demanded more information about the behavior of the real data acquisition system. To our knowledge, this constitutes the first validation of these Monte Carlo codes for the full NEMA NU 4-2008 standards for small animal PET imaging systems.

  10. Monte Carlo simulations versus experimental measurements in a small animal PET system. A comparison in the NEMA NU 4-2008 framework.

    PubMed

    Popota, F D; Aguiar, P; España, S; Lois, C; Udias, J M; Ros, D; Pavia, J; Gispert, J D

    2015-01-07

    In this work a comparison between experimental and simulated data using GATE and PeneloPET Monte Carlo simulation packages is presented. All simulated setups, as well as the experimental measurements, followed exactly the guidelines of the NEMA NU 4-2008 standards using the microPET R4 scanner. The comparison was focused on spatial resolution, sensitivity, scatter fraction and counting rates performance. Both GATE and PeneloPET showed reasonable agreement for the spatial resolution when compared to experimental measurements, although they lead to slight underestimations for the points close to the edge. High accuracy was obtained between experiments and simulations of the system's sensitivity and scatter fraction for an energy window of 350-650 keV, as well as for the counting rate simulations. The latter was the most complicated test to perform since each code demands different specifications for the characterization of the system's dead time. Although simulated and experimental results were in excellent agreement for both simulation codes, PeneloPET demanded more information about the behavior of the real data acquisition system. To our knowledge, this constitutes the first validation of these Monte Carlo codes for the full NEMA NU 4-2008 standards for small animal PET imaging systems.

  11. Evaluation of an analytic linear Boltzmann transport equation solver for high-density inhomogeneities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, S. A. M.; Ansbacher, W.; Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6

    2013-01-15

    Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high density (4.0-8.0 g/cm³) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radio-chromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2% and 1 mm gamma-analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants and for on-axis and oblique field delivery. A similar gamma-analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements support the dose perturbations demonstrated by Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods and better than existing clinical algorithms for dose calculations involving high-density volumes.

  12. Probabilistic durability assessment of concrete structures in marine environments: Reliability and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Ning, Chao-lie; Li, Bing

    2017-03-01

    A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
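
    As a rough sketch of the Monte Carlo verification mentioned above, the snippet below samples a simple Fickian chloride-ingress limit state and estimates the probability of depassivation. All distributions, the age-factor treatment, and the 50-year horizon are hypothetical placeholders rather than the calibrated time-dependent model of the paper.

```python
# Crude Monte Carlo check of a chloride-ingress durability limit state
# (illustrative distributions only, not the paper's calibrated model).
import numpy as np
from scipy.special import erfc
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200_000
t = 50.0 * 365.25 * 24 * 3600.0                          # 50 years, in seconds
t_ref = 28.0 * 24 * 3600.0                               # reference age, 28 days

Cs    = rng.lognormal(np.log(3.5), 0.3, n)               # surface chloride, kg/m3
D_ref = rng.lognormal(np.log(1.0e-11), 0.4, n)           # diffusion coeff. at t_ref, m2/s
alpha = rng.normal(0.40, 0.08, n)                        # age factor
cover = rng.normal(0.060, 0.008, n)                      # concrete cover, m
Ccrit = rng.normal(0.90, 0.15, n)                        # critical chloride content, kg/m3

D_app = D_ref * (t_ref / t) ** alpha                     # aged apparent diffusivity
C_rebar = Cs * erfc(cover / (2.0 * np.sqrt(D_app * t)))  # erf solution of Fick's 2nd law

g = Ccrit - C_rebar                       # limit state: g < 0 means corrosion initiation
pf = np.mean(g < 0.0)
print(f"P_f = {pf:.3f}, generalized reliability index = {-norm.ppf(pf):.2f}")
```

    A first-order reliability method, as used in the paper, would instead search for the design point in standard normal space after a Nataf transformation of these variables.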

  13. Investing in a robotic milking system: a Monte Carlo simulation analysis.

    PubMed

    Hyde, J; Engel, P

    2002-09-01

    This paper uses Monte Carlo simulation methods to estimate the breakeven value for a robotic milking system (RMS) on a dairy farm in the United States. The breakeven value indicates the maximum amount that could be paid for the robots given the costs of alternative milking equipment and other important factors (e.g., milk yields, prices, length of useful life of technologies). The analysis simulates several scenarios under three herd sizes, 60, 120, and 180 cows. The base-case results indicate that the mean breakeven values are $192,056, $374,538, and $553,671 for each of the three progressively larger herd sizes. These must be compared to the per-unit RMS cost (about $125,000 to $150,000) and the cost of any construction or installation of other equipment that accompanies the RMS. Sensitivity analysis shows that each additional dollar spent on milking labor in the parlor increases the breakeven value by $4.10 to $4.30. Each dollar increase in parlor costs increases the breakeven value by $0.45 to $0.56. Also, each additional kilogram of initial milk production (under a 2x system in the parlor) decreases the breakeven by $9.91 to $10.64. Finally, each additional year of useful life for the RMS increases the per-unit breakeven by about $16,000 while increasing the life of the parlor by 1 yr decreases the breakeven value by between $5,000 and $6,000.
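
    A stripped-down version of this kind of breakeven simulation is sketched below. The annual labor savings, parlor cost offsets, milk-revenue effect, discount rate, and useful-life range are hypothetical placeholders, not the paper's inputs; the point is only to show how a breakeven investment value falls out of a Monte Carlo draw of annual advantages.

```python
# Illustrative Monte Carlo breakeven calculation for a robotic milking system (RMS);
# every distribution and rate below is a hypothetical placeholder, not the paper's input.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
r = 0.06                                            # real discount rate (assumed)
life = rng.integers(8, 13, size=n)                  # RMS useful life, years

labor_saved = rng.normal(30_000, 5_000, n)          # $/yr parlor milking labor avoided
parlor_cost_avoided = rng.normal(8_000, 2_000, n)   # $/yr parlor ownership/operating cost avoided
milk_effect = rng.normal(-4_000, 3_000, n)          # $/yr revenue change from yield response

annual_advantage = labor_saved + parlor_cost_avoided + milk_effect
annuity = (1.0 - (1.0 + r) ** (-life)) / r          # present-value annuity factor
breakeven = annual_advantage * annuity              # maximum justified RMS investment

print(f"mean breakeven value: ${breakeven.mean():,.0f}")
print(f"5th-95th percentile:  ${np.percentile(breakeven, 5):,.0f} to "
      f"${np.percentile(breakeven, 95):,.0f}")
```

    The resulting distribution would then be compared against the quoted per-unit RMS cost plus installation expenses.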

  14. Cost-Effectiveness Analysis of Microscopic and Endoscopic Transsphenoidal Surgery Versus Medical Therapy in the Management of Microprolactinoma in the United States.

    PubMed

    Jethwa, Pinakin R; Patel, Tapan D; Hajart, Aaron F; Eloy, Jean Anderson; Couldwell, William T; Liu, James K

    2016-03-01

    Although prolactinomas are treated effectively with dopamine agonists, some have proposed curative surgical resection for select cases of microprolactinomas to avoid life-long medical therapy. We performed a cost-effectiveness analysis comparing transsphenoidal surgery (either microsurgical or endoscopic) and medical therapy (either bromocriptine or cabergoline) with decision analysis modeling. A 2-armed decision tree was created with TreeAge Pro Suite 2012 to compare upfront transsphenoidal surgery versus medical therapy. The economic perspective was that of the health care third-party payer. On the basis of a literature review, we assigned plausible distributions for costs and utilities to each potential outcome, taking into account medical and surgical costs and complications. Base-case analysis, sensitivity analysis, and Monte Carlo simulations were performed to determine the cost-effectiveness of each strategy at 5-year and 10-year time horizons. In the base-case scenario, microscopic transsphenoidal surgery was the most cost-effective option at 5 years from the time of diagnosis; however, by the 10-year time horizon, endoscopic transsphenoidal surgery became the most cost-effective option. At both time horizons, medical therapy (both bromocriptine and cabergoline) were found to be more costly and less effective than transsphenoidal surgery (i.e., the medical arm was dominated by the surgical arm in this model). Two-way sensitivity analysis demonstrated that endoscopic resection would be the most cost-effective strategy if the cure rate from endoscopic surgery was greater than 90% and the complication rate was less than 1%. Monte Carlo simulation was performed for endoscopic surgery versus microscopic surgery at both time horizons. This analysis produced an incremental cost-effectiveness ratio of $80,235 per quality-adjusted life years at 5 years and $40,737 per quality-adjusted life years at 10 years, implying that with increasing time intervals, endoscopic transsphenoidal surgery is the more cost-effective treatment strategy. On the basis of the results of our model, transsphenoidal surgical resection of microprolactinomas, either microsurgical or endoscopic, appears to be more cost-effective than life-long medical therapy in young patients with life expectancy greater than 10 years. We caution that surgical resection for microprolactinomas be performed only in select cases by experienced pituitary surgeons at high-volume centers with high biochemical cure rates and low complication rates. Copyright © 2016 Elsevier Inc. All rights reserved.
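
    The decision-analytic machinery described above can be reduced to a short sketch: draw costs and effectiveness for two competing strategies, compute an incremental cost-effectiveness ratio from the means, and report the probability that one strategy has the higher net monetary benefit at a willingness-to-pay threshold. All distributions below are hypothetical placeholders, and the labels A and B do not correspond to the study arms.

```python
# Probabilistic sensitivity analysis sketch for two generic strategies A and B
# (hypothetical cost and QALY distributions; A and B are not the study arms).
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

cost_A = rng.gamma(shape=20.0, scale=1_500.0, size=n)   # mean ~ $30,000
cost_B = rng.gamma(shape=20.0, scale=1_200.0, size=n)   # mean ~ $24,000
qaly_A = 10.0 * rng.beta(a=9.0, b=1.0, size=n)          # QALYs over a 10-year horizon
qaly_B = 10.0 * rng.beta(a=8.5, b=1.5, size=n)

icer = (cost_A.mean() - cost_B.mean()) / (qaly_A.mean() - qaly_B.mean())
print(f"ICER (A vs B): ${icer:,.0f} per QALY")

# Probability that A is cost-effective at a willingness-to-pay threshold
wtp = 50_000.0
nmb_A = wtp * qaly_A - cost_A                           # net monetary benefit, per draw
nmb_B = wtp * qaly_B - cost_B
print(f"P(A cost-effective at ${wtp:,.0f}/QALY): {np.mean(nmb_A > nmb_B):.2f}")
```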

  15. Illustrating economic evaluation of diagnostic technologies: comparing Helicobacter pylori screening strategies in prevention of gastric cancer in Canada.

    PubMed

    Xie, Feng; O'Reilly, Daria; Ferrusi, Ilia L; Blackhouse, Gord; Bowen, James M; Tarride, Jean-Eric; Goeree, Ron

    2009-05-01

    The aim of this paper is to present an economic evaluation of diagnostic technologies using Helicobacter pylori screening strategies for the prevention of gastric cancer as an illustration. A Markov model was constructed to compare the lifetime cost and effectiveness of 4 potential strategies: no screening, the serology test by enzyme-linked immunosorbent assay (ELISA), the stool antigen test (SAT), and the (13)C-urea breath test (UBT) for the detection of H. pylori among a hypothetical cohort of 10,000 Canadian men aged 35 years. Special parameter consideration included the sensitivity and specificity of each screening strategy, which determined the model structure and treatment regimen. The primary outcome measured was the incremental cost-effectiveness ratio between the screening strategies and the no-screening strategy. Base-case analysis and probabilistic sensitivity analysis were performed using the point estimates of the parameters and Monte Carlo simulations, respectively. Compared with the no-screening strategy in the base-case analysis, the incremental cost-effectiveness ratio was $33,000 per quality-adjusted life-year (QALY) for the ELISA, $29,800 per QALY for the SAT, and $50,400 per QALY for the UBT. The probabilistic sensitivity analysis revealed that the no-screening strategy was more cost effective if the willingness to pay (WTP) was <$20,000 per QALY, while the SAT had the highest probability of being cost effective if the WTP was >$30,000 per QALY. Both the ELISA and the UBT were not cost-effective strategies over a wide range of WTP values. Although the UBT had the highest sensitivity and specificity, either no screening or the SAT could be the most cost-effective strategy depending on the WTP threshold values from an economic perspective. This highlights the importance of economic evaluations of diagnostic technologies.

  16. Study of multi-dimensional radiative energy transfer in molecular gases

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.

  17. Comparison of the co-gasification of sewage sludge and food wastes and cost-benefit analysis of gasification- and incineration-based waste treatment schemes.

    PubMed

    You, Siming; Wang, Wei; Dai, Yanjun; Tong, Yen Wah; Wang, Chi-Hwa

    2016-10-01

    The compositions of food wastes and their co-gasification producer gas were compared with the existing data of sewage sludge. Results showed that food wastes are more favorable than sewage sludge for co-gasification based on residue generation and energy output. Two decentralized gasification-based schemes were proposed to dispose of the sewage sludge and food wastes in Singapore. Monte Carlo simulation-based cost-benefit analysis was conducted to compare the proposed schemes with the existing incineration-based scheme. It was found that the gasification-based schemes are financially superior to the incineration-based scheme based on the data of net present value (NPV), benefit-cost ratio (BCR), and internal rate of return (IRR). Sensitivity analysis was conducted to suggest effective measures to improve the economics of the schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
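
    A minimal version of such a Monte Carlo cost-benefit comparison is sketched below. The capital cost, annual benefit and operating-cost distributions, the discount rate, and the project horizon are all hypothetical placeholders, not the Singapore case-study data, and the IRR helper assumes a conventional cash-flow profile.

```python
# Monte Carlo cost-benefit sketch reporting NPV, BCR and IRR for a hypothetical
# waste-treatment scheme (placeholder magnitudes, not the Singapore case-study data).
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
years = 15
r = 0.05                                               # discount rate (assumed)
disc = (1.0 + r) ** -np.arange(1, years + 1)

capex   = rng.normal(50e6, 5e6, size=n)                # initial investment, $
benefit = rng.normal(9e6, 1.5e6, size=(n, years))      # annual revenue/savings, $
opex    = rng.normal(3e6, 0.5e6, size=(n, years))      # annual operating cost, $

pv_benefit = benefit @ disc
pv_cost    = capex + opex @ disc
npv = pv_benefit - pv_cost
bcr = pv_benefit / pv_cost

def irr(cashflows):
    """IRR from the roots of the NPV polynomial in x = 1/(1+r);
    assumes a conventional cash-flow profile (one sign change)."""
    roots = np.roots(cashflows[::-1])
    x = roots[np.isreal(roots)].real
    x = x[x > 0]
    return (1.0 / x - 1.0).min() if x.size else np.nan

mean_cashflow = np.concatenate(([-capex.mean()], (benefit - opex).mean(axis=0)))
print(f"P(NPV > 0) = {np.mean(npv > 0):.2f}, mean BCR = {bcr.mean():.2f}")
print(f"IRR of the mean cash-flow stream = {irr(mean_cashflow):.1%}")
```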

  18. Addressing global uncertainty and sensitivity in first-principles based microkinetic models by an adaptive sparse grid approach

    NASA Astrophysics Data System (ADS)

    Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian

    2018-01-01

    In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
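
    The paper evaluates the required high-dimensional integrals with adaptive sparse grids; as a plain brute-force alternative, the sketch below estimates first-order Sobol indices for a toy Arrhenius-like TOF model with a double-loop Monte Carlo estimator. The rate expression, the +/- 0.2 eV uniform error bounds, and the sample sizes are hypothetical.

```python
# Double-loop Monte Carlo estimate of first-order Sobol indices for a toy
# Arrhenius-like TOF model with three uniform "DFT error" inputs. This is a
# brute-force stand-in for the paper's adaptive sparse-grid quadrature; the
# rate expression and the +/- 0.2 eV bounds are hypothetical.
import numpy as np

kT = 0.059   # eV, roughly 600 K
bound = 0.2  # eV, assumed error bound on each energy

def tof(e):
    """Toy turnover frequency; e has shape (..., 3) of energy errors in eV."""
    de1, de2, de3 = e[..., 0], e[..., 1], e[..., 2]
    return np.exp(-(0.9 + de1) / kT) / (1.0 + np.exp((de2 - de3) / kT))

rng = np.random.default_rng(0)

def sample(size):
    return rng.uniform(-bound, bound, size=size + (3,))

n_outer, n_inner = 1_000, 1_000
var_total = np.log10(tof(sample((n_outer * n_inner,)))).var()

for i in range(3):
    xi = rng.uniform(-bound, bound, size=(n_outer, 1))
    inner = sample((n_outer, n_inner))
    inner[..., i] = xi                               # freeze x_i along each inner loop
    cond_mean = np.log10(tof(inner)).mean(axis=1)    # E[log10 TOF | x_i]
    print(f"S_{i+1} ~ {cond_mean.var() / var_total:.2f}")
```

    For the strongly non-linear TOF response described in the abstract, such a brute-force estimator converges slowly, which is precisely what motivates the adaptive sparse-grid quadrature.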

  19. Risk of introducing exotic fruit flies, Ceratitis capitata, Ceratitis cosyra, and Ceratitis rosa (Diptera: Tephritidae), into southern China.

    PubMed

    Li, Baini; Ma, Jun; Hu, Xuenan; Liu, Haijun; Wu, Jiajiao; Chen, Hongjun; Zhang, Runjie

    2010-08-01

    Exotic fruit flies (Ceratitis spp.) are often serious agricultural pests. Here, we used pathway analysis and Monte Carlo simulations to assess the risk of introduction of Ceratitis capitata (Wiedemann), Ceratitis cosyra (Walker), and Ceratitis rosa Karsch into southern China with fruit consignments and incoming travelers. Historical data, expert opinions, relevant literature, and archives were used to set appropriate parameters in the pathway analysis. Based on the ongoing quarantine/inspection strategies of China, as well as the interception records, we estimated the annual number of each fruit fly species entering Guangdong province undetected with commercially imported fruit, and the associated risk. We also estimated the gross number of pests arriving at Guangdong ports with incoming travelers and the associated risk. Sensitivity analysis was also performed to test the impact of parameter changes and to assess how the risk could be reduced. Results showed that the risk of introduction of the three fruit fly species into southern China with fruit consignments, which are mostly transported by ship, exists but is relatively low. In contrast, the risk of introduction with incoming travelers is high and hence deserves intensive attention. Sensitivity analysis indicated that either ensuring all shipments meet current phytosanitary requirements or increasing the proportion of fruit imports sampled for inspection could substantially reduce the risk associated with commercial imports. Sensitivity analysis also provided justification for banning importation of fresh fruit by international travelers. Thus, inspection and quarantine in conjunction with intensive detection were important mitigation measures to reduce the risk of Ceratitis spp. being introduced into China.
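
    A heavily simplified pathway calculation of the kind described above is sketched below; the consignment volumes, infestation rates, inspection fractions, and detection probabilities are hypothetical placeholders rather than the elicited parameters of the study.

```python
# Simplified pathway sketch: expected number of exotic fruit flies entering
# undetected with imported fruit consignments per year. All rates, volumes and
# probabilities are hypothetical placeholders, not the elicited study parameters.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(11)
n = 100_000

consignments = rng.poisson(1_200, size=n)                  # consignments per year
p_infested   = rng.beta(2, 98, size=n)                     # prob. a consignment is infested
flies_per    = rng.gamma(shape=2.0, scale=5.0, size=n)     # mean flies per infested consignment
p_inspected  = rng.uniform(0.10, 0.30, size=n)             # fraction of consignments sampled
p_detect     = rng.uniform(0.60, 0.90, size=n)             # detection prob. given inspection

escape = 1.0 - p_inspected * p_detect                      # prob. an infested lot slips through
flies_in = consignments * p_infested * flies_per * escape  # expected undetected entries per year

print(f"median entries/year: {np.median(flies_in):,.0f}")
print(f"95th percentile:     {np.percentile(flies_in, 95):,.0f}")

# Rank-correlation sensitivity of the outcome to each uncertain input
for name, v in [("p_infested", p_infested), ("flies_per", flies_per),
                ("p_inspected", p_inspected), ("p_detect", p_detect)]:
    print(f"Spearman rho({name}) = {spearmanr(v, flies_in)[0]:+.2f}")
```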

  20. Venoarterial extracorporeal membrane oxygenation for patients in shock or cardiac arrest secondary to cardiotoxicant poisoning: a cost-effectiveness analysis.

    PubMed

    St-Onge, Maude; Fan, Eddy; Mégarbane, Bruno; Hancock-Howard, Rebecca; Coyte, Peter C

    2015-04-01

    Venoarterial extracorporeal membrane oxygenation represents an emerging and recommended option to treat life-threatening cardiotoxicant poisoning. The objective of this cost-effectiveness analysis was to estimate the incremental cost-effectiveness ratio of using venoarterial extracorporeal membrane oxygenation for adults in cardiotoxicant-induced shock or cardiac arrest compared with standard care. Adults in shock or in cardiac arrest secondary to cardiotoxicant poisoning were studied with a lifetime horizon and a societal perspective. Venoarterial extracorporeal membrane oxygenation cost effectiveness was calculated using a decision analysis tree, with the effect of the intervention and the probabilities used in the model taken from an observational study representing the highest level of evidence available. The costs (2013 Canadian dollars, where $1.00 Canadian = $0.9562 US dollars) were documented with interviews, reviews of official provincial documents, or published articles. A series of one-way sensitivity analyses and a probabilistic sensitivity analysis using Monte Carlo simulation were used to evaluate uncertainty in the decision model. The cost per life year (LY) gained in the extracorporeal membrane oxygenation group was $145 931/18 LY compared with $88 450/10 LY in the non-extracorporeal membrane oxygenation group. The incremental cost-effectiveness ratio ($7185/LY but $34 311/LY using a more pessimistic approach) was mainly influenced by the probability of survival. The probabilistic sensitivity analysis identified variability in both cost and effectiveness. Venoarterial extracorporeal membrane oxygenation may be cost effective in treating cardiotoxicant poisonings. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Decision Support Tool for Deep Energy Efficiency Retrofits in DoD Installations

    DTIC Science & Technology

    2014-01-01

    The indexed excerpt of this report contains only reference-list fragments rather than an abstract. The cited works include a paper on high-dimensional model representations (HDMR) (Chemical Engineering Science, 57, 4445-4460); Sobol', I., 2001, Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates, Mathematics and Computers in Simulation, 55, 271-280; and Sobol', I. and Kucherenko, S., 2009, on derivative-based global sensitivity measures (title truncated in the excerpt).

  2. Monte Carlo simulation of particle-induced bit upsets

    NASA Astrophysics Data System (ADS)

    Wrobel, Frédéric; Touboul, Antoine; Vaillé, Jean-Roch; Boch, Jérôme; Saigné, Frédéric

    2017-09-01

    We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport particles in the device, calculate the energy deposited in the sensitive region of the device, and calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with experiments on SRAMs irradiated with neutrons, protons and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.

  3. A Statistical Simulation Approach to Safe Life Fatigue Analysis of Redundant Metallic Components

    NASA Technical Reports Server (NTRS)

    Matthews, William T.; Neal, Donald M.

    1997-01-01

    This paper introduces a dual active load path fail-safe fatigue design concept analyzed by Monte Carlo simulation. The concept utilizes the inherent fatigue life differences between selected pairs of components for an active dual path system, enhanced by a stress level bias in one component. The design is applied to a baseline design; a safe life fatigue problem studied in an American Helicopter Society (AHS) round robin. The dual active path design is compared with a two-element standby fail-safe system and the baseline design for life at specified reliability levels and weight. The sensitivity of life estimates for both the baseline and fail-safe designs was examined by considering normal and Weibull distribution laws and coefficient of variation levels. Results showed that the biased dual path system lifetimes, for both the first element failure and residual life, were much greater than for standby systems. The sensitivity of the residual life-weight relationship was not excessive at reliability levels up to R = 0.9999 and the weight penalty was small. The sensitivity of life estimates increases dramatically at higher reliability levels.
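
    The core of such a simulation can be sketched in a few lines: draw fatigue lives for the two load paths, take the earlier failure as the fail-safe indication point, and treat the remaining life of the survivor as residual life. The Weibull shape and the characteristic lives below are hypothetical placeholders, and load redistribution after the first failure is ignored in this simplification.

```python
# Monte Carlo sketch of a dual active load path concept: two components share
# load, the biased (higher-stress) path fails first and the survivor provides
# residual life. Weibull parameters are hypothetical; load redistribution after
# the first failure is ignored in this simplification.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

shape   = 4.0        # Weibull shape (assumed fatigue-life scatter)
scale_a = 60_000.0   # characteristic life of the biased path, cycles
scale_b = 90_000.0   # characteristic life of the lower-stress path, cycles

life_a = scale_a * rng.weibull(shape, n)
life_b = scale_b * rng.weibull(shape, n)

first_failure = np.minimum(life_a, life_b)             # fail-safe indication point
residual = np.maximum(life_a, life_b) - first_failure  # remaining life of the survivor

for R in (0.999, 0.9999):
    q = 100.0 * (1.0 - R)
    print(f"R = {R}: first-failure life = {np.percentile(first_failure, q):,.0f} cycles, "
          f"residual life = {np.percentile(residual, q):,.0f} cycles")
```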

  4. Prostate-cancer diagnosis by non-invasive prostatic Zinc mapping using X-Ray Fluorescence (XRF)

    NASA Astrophysics Data System (ADS)

    Cortesi, Marco

    At present, the major screening tools (PSA, DRE, TRUS) for prostate cancer lack sensitivity and specificity, and none can distinguish between low-grade indolent cancer and high-grade lethal one. The situation calls for the promotion of alternative approaches, with better detection sensitivity and specificity, to provide more efficient selection of patients to biopsy and with possible guidance of the biopsy needles. The prime objective of the present work was the development of a novel non-invasive method and tool for promoting detection, localization, diagnosis and follow-up of PCa. The method is based on in-vivo imaging of Zn distribution in the peripheral zone of the prostate, by a trans-rectal X-ray fluorescence (XRF) probe. Local Zn levels, measured in 1--4 mm3 fresh tissue biopsy segments from an extensive clinical study involving several hundred patients, showed an unambiguous correlation with the histological classification of the tissue (Non-Cancer or PCa), and a systematic positive correlation of its depletion level with the cancer-aggressiveness grade (Gleason classification). A detailed analysis of computer-simulated Zn-concentration images (with input parameters from clinical data) disclosed the potential of the method to provide sensitive and specific detection and localization of the lesion, its grade and extension. Furthermore, it also yielded invaluable data on some requirements, such as the image resolution and counting-statistics, requested from a trans-rectal XRF probe for in-vivo recording of prostatic-Zn maps in patients. By means of systematic table-top experiments on prostate-phantoms comprising tumor-like inclusions, followed by dedicated Monte Carlo simulations, the XRF-probe and its components have been designed and optimized. Multi-parameter analysis of the experimental data confirmed the simulation estimations of the XRF detection system in terms of: delivered dose, counting statistics, scanning resolution, target-volume size and the accuracy of locating at various depths of small-volume tumor-like inclusions in tissue-phantoms. The clinical study, the Monte Carlo simulations and the analysis of Zn-map images provided essential information and promising vision on the potential performance of the Zn-based PCa detection concept. Simulations focusing on medical-probe design and its performance at permissible radiation doses yielded positive results - confirmed by a series of systematic laboratory experiments with a table-top XRF system.

  5. The impact of low-Z and high-Z metal implants in IMRT: A Monte Carlo study of dose inaccuracies in commercial dose algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spadea, Maria Francesca, E-mail: mfspadea@unicz.it; Verburg, Joost Mathias; Seco, Joao

    2014-01-15

    Purpose: The aim of the study was to evaluate the dosimetric impact of low-Z and high-Z metallic implants on IMRT plans. Methods: Computed tomography (CT) scans of three patients were analyzed to study effects due to the presence of Titanium (low-Z), Platinum and Gold (high-Z) inserts. To eliminate artifacts in CT images, a sinogram-based metal artifact reduction algorithm was applied. IMRT dose calculations were performed on both the uncorrected and corrected images using a commercial planning system (convolution/superposition algorithm) and an in-house Monte Carlo platform. Dose differences between uncorrected and corrected datasets were computed and analyzed using the gamma index (Pγ<1) and setting 2 mm and 2% as distance to agreement and dose difference criteria, respectively. Beam-specific depth-dose profiles across the metal were also examined. Results: Dose discrepancies between corrected and uncorrected datasets were not significant for low-Z material. High-Z materials caused under-dosage of 20%–25% in the region surrounding the metal and over-dosage of 10%–15% downstream of the hardware. The gamma index test yielded Pγ<1 > 99% for all low-Z cases, while for high-Z cases it returned 91% < Pγ<1 < 99%. Analysis of the depth dose curve of a single beam for low-Z cases revealed that, although the dose attenuation is altered inside the metal, it does not differ downstream of the insert. However, for high-Z metal implants the dose is increased up to 10%–12% around the insert. In addition, the Monte Carlo method was more sensitive to the presence of metal inserts than the superposition/convolution algorithm. Conclusions: The reduction in terms of dose of metal artifacts in CT images is relevant for high-Z implants. In this case, dose distribution should be calculated using Monte Carlo algorithms, given their superior accuracy in dose modeling in and around the metal. In addition, the knowledge of the composition of metal inserts improves the accuracy of the Monte Carlo dose calculation significantly.

  6. Estimating Uncertainty in N2O Emissions from US Cropland Soils

    USDA-ARS?s Scientific Manuscript database

    A Monte Carlo analysis was combined with an empirically-based approach to quantify uncertainties in soil N2O emissions from US croplands estimated with the DAYCENT simulation model. Only a subset of croplands was simulated in the Monte Carlo analysis which was used to infer uncertainties across the ...

  7. Analysis of Naval Ammunition Stock Positioning

    DTIC Science & Technology

    2015-12-01

    A Monte Carlo simulation model was developed to simulate the expected cost and delivery performance of consolidating naval ammunition stockpiles and positioning them at coastal Navy facilities, with the simulation assigning probabilities for site-to-site delivery routes. The indexed excerpt consists of fragments of the report documentation page; subject terms: supply chain management, Monte Carlo simulation, risk, delivery performance, stock positioning.

  8. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    ERIC Educational Resources Information Center

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  9. MCMC multilocus lod scores: application of a new approach.

    PubMed

    George, Andrew W; Wijsman, Ellen M; Thompson, Elizabeth A

    2005-01-01

    On extended pedigrees with extensive missing data, the calculation of multilocus likelihoods for linkage analysis is often beyond the computational bounds of exact methods. Growing interest therefore surrounds the implementation of Monte Carlo estimation methods. In this paper, we demonstrate the speed and accuracy of a new Markov chain Monte Carlo method for the estimation of linkage likelihoods through an analysis of real data from a study of early-onset Alzheimer's disease. For those data sets where comparison with exact analysis is possible, we achieved up to a 100-fold increase in speed. Our approach is implemented in the program lm_bayes within the framework of the freely available MORGAN 2.6 package for Monte Carlo genetic analysis (http://www.stat.washington.edu/thompson/Genepi/MORGAN/Morgan.shtml).

  10. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft relies heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
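
    The combination of tools named above can be illustrated on synthetic dispersion data, as in the sketch below: a k-nearest-neighbor classifier separates passing and failing runs, and per-parameter kernel density estimates act as a crude stand-in for the tool's sequential feature selection when ranking which inputs matter. The five input parameters, the synthetic failure rule, the bandwidth, and the neighbor count are hypothetical placeholders; this is not the NASA tool itself.

```python
# Sketch of the pattern-recognition idea: classify Monte Carlo runs as pass/fail
# with k-NN, then use kernel density estimates to see which input parameters
# separate the two classes. Synthetic data only, not the flight-dynamics tool.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KernelDensity
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5_000
X = rng.normal(size=(n, 5))                      # 5 dispersed design parameters
fail = (0.8 * X[:, 0] + 0.6 * X[:, 2] ** 2 + 0.1 * rng.normal(size=n)) > 1.5

X_tr, X_te, y_tr, y_te = train_test_split(X, fail, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=15).fit(X_tr, y_tr)
print(f"k-NN holdout accuracy: {clf.score(X_te, y_te):.2f}")

# Compare per-parameter densities of failed vs. successful runs
grid = np.linspace(-3, 3, 61)[:, None]
for j in range(X.shape[1]):
    kde_f = KernelDensity(bandwidth=0.3).fit(X[fail, j][:, None])
    kde_s = KernelDensity(bandwidth=0.3).fit(X[~fail, j][:, None])
    sep = np.abs(np.exp(kde_f.score_samples(grid)) -
                 np.exp(kde_s.score_samples(grid))).mean()
    print(f"parameter {j}: density separation = {sep:.3f}")
```

    In this synthetic example, parameters 0 and 2 drive the failure rule and should show the largest density separation.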

  11. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the network architecture had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday and regular weekend) as input variables was identified, where the 7 input variables were selected following the forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were also performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
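
    A compact sketch of the ANN-plus-Monte-Carlo workflow is given below: fit a small neural network to (here synthetic) meteorological predictors, then propagate random input perturbations through it to obtain a predictive distribution for ozone. The four predictors, their distributions, and the network size are hypothetical placeholders, not the seven forward-selected inputs of the Jinan model.

```python
# Sketch of the ANN-plus-Monte-Carlo idea on synthetic data (not the Jinan model).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n = 3_000
tmax  = rng.normal(25, 8, n)          # max temperature, deg C
press = rng.normal(1010, 8, n)        # atmospheric pressure, hPa
sun   = rng.uniform(0, 12, n)         # sunshine duration, h
wind  = rng.gamma(2.0, 2.0, n)        # max wind speed, m/s
X = np.column_stack([tmax, press, sun, wind])
ozone = 40 + 2.2 * tmax + 3.0 * sun - 1.5 * wind + rng.normal(0, 10, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                   random_state=0)).fit(X, ozone)

# Monte Carlo propagation of uncertain inputs for one forecast day
m = 10_000
day = np.column_stack([rng.normal(32, 2, m), rng.normal(1008, 3, m),
                       rng.normal(10, 1, m), rng.gamma(2.0, 1.5, m)])
pred = model.predict(day)
print(f"predicted ozone: median {np.median(pred):.0f}, "
      f"90% interval [{np.percentile(pred, 5):.0f}, {np.percentile(pred, 95):.0f}]")
```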

  12. Analysis of intervention strategies for inhalation exposure to polycyclic aromatic hydrocarbons and associated lung cancer risk based on a Monte Carlo population exposure assessment model.

    PubMed

    Zhou, Bin; Zhao, Bin

    2014-01-01

    It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found air pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by population attributable fraction (PAF) and potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had major influence on the modeled PAH inhalation exposure and the rankings of different interventions. The ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and in different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making.
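
    The two headline metrics, population attributable fraction (PAF) and potential impact fraction (PIF), can be computed directly from a sampled exposure distribution, as sketched below; the lognormal exposure, the log-linear exposure-response slope, and the 30% exposure reduction are hypothetical placeholders rather than the Beijing PAH assessment's values.

```python
# Population attributable fraction (PAF) and potential impact fraction (PIF)
# from a sampled exposure distribution. The lognormal exposure, the log-linear
# exposure-response slope and the 30% reduction are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(6)
n = 200_000

exposure = rng.lognormal(mean=np.log(5.0), sigma=0.6, size=n)   # hypothetical exposure units

def relative_risk(x, slope=0.08):
    """Toy log-linear exposure-response: RR = exp(slope * exposure)."""
    return np.exp(slope * x)

rr_base = relative_risk(exposure)
paf = (rr_base.mean() - 1.0) / rr_base.mean()          # vs. a zero-exposure counterfactual

rr_int = relative_risk(0.7 * exposure)                 # intervention cuts exposure by 30%
pif = (rr_base.mean() - rr_int.mean()) / rr_base.mean()

print(f"PAF = {paf:.1%}, PIF of the intervention = {pif:.1%}")
```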

  13. An Unsplit Monte-Carlo solver for the resolution of the linear Boltzmann equation coupled to (stiff) Bateman equations

    NASA Astrophysics Data System (ADS)

    Bernede, Adrien; Poëtte, Gaël

    2018-02-01

    In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use of Monte-Carlo (MC) scheme for the transport phase. A common resolution strategy consists in a splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver improving the accuracy of the solutions, avoiding numerical instabilities, and less sensitive to time discretization. The new solver is essentially based on a Monte Carlo scheme with time dependent cross sections implying the on-the-fly resolution of a reduced model for each MC particle describing the time evolution of the matter along their flight path.

  14. Monte Carlo modeling of the Siemens Optifocus multileaf collimator.

    PubMed

    Laliena, Victor; García-Romero, Alejandro

    2015-05-01

    We have developed a new component module for the BEAMnrc software package, called SMLC, which models the tongue-and-groove structure of the Siemens Optifocus multileaf collimator. The ultimate goal is to perform accurate Monte Carlo simulations of the IMRT treatments carried out with Optifocus. SMLC has been validated by direct geometry checks and by comparing quantitatively the results of simulations performed with it and with the component module VARMLC. Measurements and Monte Carlo simulations of absorbed dose distributions of radiation fields sensitive to the tongue-and-groove effect have been performed to tune the free parameters of SMLC. The measurements cannot be accurately reproduced with VARMLC. Finally, simulations of a typical IMRT field showed that SMLC improves the agreement with experimental measurements with respect to VARMLC in clinically relevant cases. 87.55. K. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  15. McSnow: A Monte-Carlo Particle Model for Riming and Aggregation of Ice Particles in a Multidimensional Microphysical Phase Space

    NASA Astrophysics Data System (ADS)

    Brdar, S.; Seifert, A.

    2018-01-01

    We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.

  16. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease-of-use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
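
    The projection step that underlies null-space Monte Carlo can be sketched with plain NumPy, as below. This is not the pyNSMC or PEST++ interface, and the Jacobian, calibrated parameter vector, and problem dimensions are random stand-ins used only to show that null-space perturbations leave the simulated observations essentially unchanged.

```python
# Bare-bones sketch of the null-space Monte Carlo idea (not the pyNSMC API):
# random parameter perturbations are projected onto the null space of the
# Jacobian so that calibrated fits to observations are (approximately) preserved.
import numpy as np

rng = np.random.default_rng(8)
n_par, n_obs, n_real = 50, 12, 200          # hypothetical problem dimensions

J = rng.normal(size=(n_obs, n_par))         # stand-in for the model Jacobian
p_cal = rng.normal(size=n_par)              # stand-in for calibrated parameters

# SVD: right singular vectors beyond the rank span the (numerical) null space
U, s, Vt = np.linalg.svd(J)
rank = np.sum(s > 1e-8 * s.max())
V_null = Vt[rank:].T                        # shape (n_par, n_par - rank)

# Draw realizations, keep only their null-space component, add to the calibrated set
draws = rng.normal(scale=0.5, size=(n_real, n_par))
null_part = draws @ V_null @ V_null.T
ensemble = p_cal + null_part

# The (linearized) simulated observations barely change across the ensemble
print("max |J @ (p - p_cal)| over ensemble:",
      np.abs(ensemble @ J.T - p_cal @ J.T).max())
```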

  17. Monte Carlo Bayesian Inference on a Statistical Model of Sub-gridcolumn Moisture Variability Using High-resolution Cloud Observations . Part II; Sensitivity Tests and Results

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo M.; Norris, Peter M.

    2013-01-01

    Part I presented a Monte Carlo Bayesian method for constraining a complex statistical model of GCM sub-gridcolumn moisture variability using high-resolution MODIS cloud data, thereby permitting large-scale model parameter estimation and cloud data assimilation. This part performs some basic testing of this new approach, verifying that it does indeed significantly reduce mean and standard deviation biases with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud top pressure, and that it also improves the simulated rotational-Ramman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the OMI instrument. Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows finite jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. This paper also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in the cloud observables on cloud vertical structure, beyond cloud top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard (1998) provides some help in this respect, by better honoring inversion structures in the background state.

  18. Poster — Thur Eve — 45: Comparison of different Monte Carlo methods of scoring linear energy transfer in modulated proton therapy beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granville, DA; Sawakuchi, GO

    2014-08-15

    In this work, we demonstrate inconsistencies in commonly used Monte Carlo methods of scoring linear energy transfer (LET) in proton therapy beams. In particle therapy beams, the LET is an important parameter because the relative biological effectiveness (RBE) depends on it. LET is often determined using Monte Carlo techniques. We used a realistic Monte Carlo model of a proton therapy nozzle to score proton LET in spread-out Bragg peak (SOBP) depth-dose distributions. We used three different scoring and calculation techniques to determine average LET at varying depths within a 140 MeV beam with a 4 cm SOBP and a 250 MeV beam with a 10 cm SOBP. These techniques included fluence-weighted (Φ-LET) and dose-weighted average (D-LET) LET calculations from: 1) scored energy spectra converted to LET spectra through a lookup table, 2) directly scored LET spectra and 3) accumulated LET scored ‘on-the-fly’ during simulations. All protons (primary and secondary) were included in the scoring. Φ-LET was found to be less sensitive to changes in scoring technique than D-LET. In addition, the spectral scoring methods were sensitive to low-energy (high-LET) cutoff values in the averaging. Using cutoff parameters chosen carefully for consistency between techniques, we found variations in Φ-LET values of up to 1.6% and variations in D-LET values of up to 11.2% for the same irradiation conditions, depending on the method used to score LET. Variations were largest near the end of the SOBP, where the LET and energy spectra are broader.
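
    The difference between the two averages compared above reduces to a choice of weights over the scored LET spectrum, as the sketch below shows; the synthetic spectrum and the 15 keV/um cutoff are hypothetical placeholders rather than scored nozzle data.

```python
# Fluence- and dose-weighted LET averages computed from a binned LET spectrum.
# The spectrum below is synthetic; in the study these quantities were scored
# with a Monte Carlo model of the treatment nozzle.
import numpy as np

# Hypothetical bin centers (keV/um) and proton fluence per bin (arbitrary units)
let_bins = np.linspace(0.5, 20.0, 40)
fluence  = np.exp(-0.5 * ((let_bins - 3.0) / 1.5) ** 2) + 0.02   # low-LET peak + high-LET tail

cutoff = 15.0                                    # low-energy/high-LET cutoff (keV/um)
keep = let_bins <= cutoff

phi_let = np.average(let_bins[keep], weights=fluence[keep])
# Dose per bin is proportional to fluence * LET, so dose weighting emphasizes the tail
d_let = np.average(let_bins[keep], weights=(fluence * let_bins)[keep])

print(f"fluence-weighted LET = {phi_let:.2f} keV/um")
print(f"dose-weighted LET    = {d_let:.2f} keV/um")
```

    Changing the cutoff shifts D-LET much more than Φ-LET, which is the sensitivity the abstract reports.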

  19. Monte Carlo Optimization of Crystal Configuration for Pixelated Molecular SPECT Scanners

    NASA Astrophysics Data System (ADS)

    Mahani, Hojjat; Raisali, Gholamreza; Kamali-Asl, Alireza; Ay, Mohammad Reza

    2017-02-01

    Resolution-sensitivity-PDA tradeoff is the most challenging problem in design and optimization of pixelated preclinical SPECT scanners. In this work, we addressed such a challenge from a crystal point-of-view by looking for an optimal pixelated scintillator using GATE Monte Carlo simulation. Various crystal configurations have been investigated and the influence of different pixel sizes, pixel gaps, and three scintillators on tomographic resolution, sensitivity, and PDA of the camera were evaluated. The crystal configuration was then optimized using two objective functions: the weighted-sum and the figure-of-merit methods. The CsI(Na) reveals the highest sensitivity of the order of 43.47 cps/MBq in comparison to the NaI(Tl) and the YAP(Ce), for a 1.5×1.5 mm2 pixel size and 0.1 mm gap. The results show that the spatial resolution, in terms of FWHM, improves from 3.38 to 2.21 mm while the sensitivity simultaneously deteriorates from 42.39 cps/MBq to 27.81 cps/MBq when pixel size varies from 2×2 mm2 to 0.5×0.5 mm2 for a 0.2 mm gap, respectively. The PDA worsens from 0.91 to 0.42 when pixel size decreases from 0.5×0.5 mm2 to 1×1 mm2 for a 0.2 mm gap at 15° incident-angle. The two objective functions agree that the 1.5×1.5 mm2 pixel size and 0.1 mm Epoxy gap CsI(Na) configuration provides the best compromise for small-animal imaging, using the HiReSPECT scanner. Our study highlights that crystal configuration can significantly affect the performance of the camera, and thereby Monte Carlo optimization of pixelated detectors is mandatory in order to achieve an optimal quality tomogram.

  20. Hypersonic Shock Interactions About a 25 deg/65 deg Sharp Double Cone

    NASA Technical Reports Server (NTRS)

    Moss, James N.; LeBeau, Gerald J.; Glass, Christopher E.

    2002-01-01

    This paper presents the results of a numerical study of shock interactions resulting from Mach 10 air flow about a sharp double cone. Computations are made with the direct simulation Monte Carlo (DSMC) method by using two different codes: the G2 code of Bird and the DAC (DSMC Analysis Code) code of LeBeau. The flow conditions are the pretest nominal free-stream conditions specified for the ONERA R5Ch low-density wind tunnel. The focus is on the sensitivity of the interactions to grid resolution while providing information concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.

  1. CAFNA®, coded aperture fast neutron analysis for contraband detection: Preliminary results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L.; Lanza, R.C.

    1999-12-01

    The authors have developed a near-field coded aperture imaging system for use with fast neutron techniques as a tool for the detection of contraband and hidden explosives through nuclear elemental analysis. The technique relies on the prompt gamma rays produced by fast neutron interactions with the object being examined. The position of the nuclear elements is determined by the location of the gamma emitters. For existing fast neutron techniques, in Pulsed Fast Neutron Analysis (PFNA), neutrons are used with very low efficiency; in Fast Neutron Analysis (FNA), the sensitivity for detection of the signature gamma rays is very low. For the Coded Aperture Fast Neutron Analysis (CAFNA®) the authors have developed, the efficiency for both using the probing fast neutrons and detecting the prompt gamma rays is high. For a probed volume of n³ volume elements (voxels) in a cube of n resolution elements on a side, they can compare the sensitivity with other neutron probing techniques. As compared to PFNA, the improvement for neutron utilization is n², where the total number of voxels in the object being examined is n³. Compared to FNA, the improvement for gamma-ray imaging is proportional to the total open area of the coded aperture plane; a typical value is n²/2, where n² is the number of total detector resolution elements or the number of pixels in an object layer. It should be noted that the actual signal-to-noise ratio of a system depends also on the nature and distribution of background events, and this comparison may reduce somewhat the effective sensitivity of CAFNA. They have performed analysis, Monte Carlo simulations, and preliminary experiments using low- and high-energy gamma-ray sources. The results show that a high-sensitivity 3-D contraband imaging and detection system can be realized by using CAFNA.

  2. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    NASA Astrophysics Data System (ADS)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    The factors influencing power grid investment capacity (depreciation cost, sales price and sales quantity, net profit, financing, and GDP of the secondary industry) were analyzed and used to build an investment capacity analysis model. Kolmogorov-Smirnov tests were then carried out to obtain the probability distribution of each influencing factor. Finally, the uncertainty in grid investment capacity was analyzed by Monte Carlo simulation.

  3. iTOUGH2 V6.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan A.

    2010-11-01

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware Req.: Multi-platform; Related/auxiliary software: PVM (if running in parallel).

  4. Effect of Fault Parameter Uncertainties on PSHA explored by Monte Carlo Simulations: A case study for southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Pace, B.

    2017-12-01

    In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at 475 years return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of the several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped fault to develop magnitude-frequency estimates for characteristic earthquakes. Variability of the selected fault parameter is given with a truncated normal random variable distribution presented by standard deviation about a mean value. A Monte Carlo approach, based on the random balanced sampling by logic tree, is used in order to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in frequency-magnitude distribution of modeled faults as well as the different maps: the overall uncertainty maps provide a confidence interval for the PGA values and the parameter uncertainty maps determine the sensitivity of hazard assessment to variability of every logic tree branch. These branches of logic tree, analyzed through the Monte Carlo approach, are maximum magnitudes, fault length, fault width, fault dip and slip rates. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations while the sensitivity of each parameter to overall variability is determined varying each of the fault parameters while fixing others. However, in this study we do not investigate the sensitivity of mean hazard results to the consideration of different GMPEs. Distribution of possible seismic hazard results is illustrated by 95% confidence factor map, which indicates the dispersion about mean value, and coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters to probabilistic seismic hazard maps.
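
    The sampling scheme described above can be sketched by drawing each fault parameter from a truncated normal distribution and converting the draws to a characteristic-event rate by moment balance; the fault geometry, slip rate, magnitude, and their bounds below are hypothetical placeholders rather than the southern Apennines source parameters.

```python
# Truncated-normal sampling of fault parameters and a moment-balance estimate of
# the characteristic-earthquake rate (hypothetical fault, not the study's sources).
import numpy as np
from scipy.stats import truncnorm

def tnorm(mean, sd, lo, hi, size, rng):
    a, b = (lo - mean) / sd, (hi - mean) / sd
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=size, random_state=rng)

rng = np.random.default_rng(9)
n = 10_000
mu = 3.0e10                                     # shear modulus, Pa

length = tnorm(30.0, 3.0, 20.0, 40.0, n, rng)   # fault length, km
width  = tnorm(12.0, 1.5, 8.0, 16.0, n, rng)    # fault width, km
slip   = tnorm(0.8, 0.2, 0.2, 1.5, n, rng)      # slip rate, mm/yr
m_max  = tnorm(6.6, 0.2, 6.0, 7.2, n, rng)      # characteristic magnitude

area_m2     = length * width * 1e6
moment_rate = mu * area_m2 * slip * 1e-3              # N*m accumulated per year
m0_char     = 10.0 ** (1.5 * m_max + 9.05)            # seismic moment of Mmax, N*m
annual_rate = moment_rate / m0_char                   # characteristic-event rate, 1/yr

print(f"median recurrence interval: {np.median(1.0 / annual_rate):,.0f} yr")
print(f"16th-84th percentile: {np.percentile(1.0 / annual_rate, 16):,.0f} to "
      f"{np.percentile(1.0 / annual_rate, 84):,.0f} yr")
```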

  5. Statistical sensitivity on right-handed currents in presence of eV scale sterile neutrinos with KATRIN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinbrink, Nicholas M.N.; Weinheimer, Christian; Glück, Ferenc

    The KATRIN experiment aims to determine the absolute neutrino mass by measuring the endpoint region of the tritium β-spectrum. As a large-scale experiment with a sharp energy resolution, high source luminosity and low background it may also be capable of testing certain theories of neutrino interactions beyond the standard model (SM). An example of a non-SM interaction are right-handed currents mediated by right-handed W bosons in the left-right symmetric model (LRSM). In this extension of the SM, an additional SU(2)R symmetry in the high-energy limit is introduced, which naturally includes sterile neutrinos and predicts the seesaw mechanism. In tritium β decay, this leads to an additional term from interference between left- and right-handed interactions, which enhances or suppresses certain regions near the endpoint of the beta spectrum. In this work, the sensitivity of KATRIN to right-handed currents is estimated for the scenario of a light sterile neutrino with a mass of some eV. This analysis has been performed with a Bayesian analysis using Markov Chain Monte Carlo (MCMC). The simulations show that, in principle, KATRIN will be able to set sterile neutrino mass-dependent limits on the interference strength. The sensitivity is significantly increased if the Q value of the β decay can be sufficiently constrained. However, the sensitivity is not high enough to improve current upper limits from right-handed W boson searches at the LHC.

  6. Costs and Effects of a Telephonic Diabetes Self-Management Support Intervention Using Health Educators

    PubMed Central

    Schechter, Clyde B.; Walker, Elizabeth A.; Ortega, Felix M.; Chamany, Shadi; Silver, Lynn D.

    2015-01-01

    Background: Self-management is crucial to successful glycemic control in patients with diabetes, yet it requires patients to initiate and sustain complicated behavioral changes. Support programs can improve glycemic control, but may be expensive to implement. We report here an analysis of the costs of a successful telephone-based self-management support program delivered by lay health educators utilizing a municipal health department A1c registry, and relate them to near-term effectiveness. Methods: Costs of implementation were assessed by micro-costing of all resources used. Per-capita costs and cost-effectiveness ratios from the perspective of the service provider are estimated for net A1c reduction, and percentages of patients achieving A1c reductions of 0.5 and 1.0 percentage points. One-way sensitivity analyses of key cost elements, and a Monte Carlo sensitivity analysis are reported. Results: The telephone intervention was provided to 443 people at a net cost of $187.61 each. Each percentage point of net A1c reduction was achieved at a cost of $464.41. Labor costs were the largest component of costs, and cost-effectiveness was most sensitive to the wages paid to the health educators. Conclusions: Effective telephone-based self-management support for people in poor diabetes control can be delivered by health educators at moderate cost relative to the gains achieved. The costs of doing so are most sensitive to the prevailing wage for the health educators. PMID:26750743

  7. Costs and effects of a telephonic diabetes self-management support intervention using health educators.

    PubMed

    Schechter, Clyde B; Walker, Elizabeth A; Ortega, Felix M; Chamany, Shadi; Silver, Lynn D

    2016-03-01

    Self-management is crucial to successful glycemic control in patients with diabetes, yet it requires patients to initiate and sustain complicated behavioral changes. Support programs can improve glycemic control, but may be expensive to implement. We report here an analysis of the costs of a successful telephone-based self-management support program delivered by lay health educators utilizing a municipal health department A1c registry, and relate them to near-term effectiveness. Costs of implementation were assessed by micro-costing of all resources used. Per-capita costs and cost-effectiveness ratios from the perspective of the service provider are estimated for net A1c reduction, and percentages of patients achieving A1c reductions of 0.5 and 1.0 percentage points. One-way sensitivity analyses of key cost elements, and a Monte Carlo sensitivity analysis are reported. The telephone intervention was provided to 443 people at a net cost of $187.61 each. Each percentage point of net A1c reduction was achieved at a cost of $464.41. Labor costs were the largest component of costs, and cost-effectiveness was most sensitive to the wages paid to the health educators. Effective telephone-based self-management support for people in poor diabetes control can be delivered by health educators at moderate cost relative to the gains achieved. The costs of doing so are most sensitive to the prevailing wage for the health educators. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Statistical sensitivity on right-handed currents in presence of eV scale sterile neutrinos with KATRIN

    NASA Astrophysics Data System (ADS)

    Steinbrink, Nicholas M. N.; Glück, Ferenc; Heizmann, Florian; Kleesiek, Marco; Valerius, Kathrin; Weinheimer, Christian; Hannestad, Steen

    2017-06-01

    The KATRIN experiment aims to determine the absolute neutrino mass by measuring the endpoint region of the tritium β-spectrum. As a large-scale experiment with a sharp energy resolution, high source luminosity and low background it may also be capable of testing certain theories of neutrino interactions beyond the standard model (SM). An example of a non-SM interaction are right-handed currents mediated by right-handed W bosons in the left-right symmetric model (LRSM). In this extension of the SM, an additional SU(2)R symmetry in the high-energy limit is introduced, which naturally includes sterile neutrinos and predicts the seesaw mechanism. In tritium β decay, this leads to an additional term from interference between left- and right-handed interactions, which enhances or suppresses certain regions near the endpoint of the beta spectrum. In this work, the sensitivity of KATRIN to right-handed currents is estimated for the scenario of a light sterile neutrino with a mass of some eV. This analysis has been performed with a Bayesian analysis using Markov Chain Monte Carlo (MCMC). The simulations show that, in principle, KATRIN will be able to set sterile neutrino mass-dependent limits on the interference strength. The sensitivity is significantly increased if the Q value of the β decay can be sufficiently constrained. However, the sensitivity is not high enough to improve current upper limits from right-handed W boson searches at the LHC.

  9. [Cost-effectiveness analysis of an alternative for the provision of primary health care for beneficiaries of Seguro Popular in Mexico].

    PubMed

    Figueroa-Lara, Alejandro; González-Block, Miguel A

    2016-01-01

    To estimate the cost-effectiveness ratio of public and private health care providers funded by Seguro Popular. A pilot scheme contracting out primary health care in the state of Hidalgo, Mexico, was evaluated through a population survey to assess quality of care and detection of decreased vision. Costs were assessed from the payer perspective using institutional sources. The alternatives analyzed were a private provider with capitated and performance-based payment modalities, and a public provider funded through budget subsidies. Sensitivity analysis was performed using Monte Carlo simulations. The private provider is dominant in quality and in cost-effective detection of decreased vision. Strategic purchasing of private providers of primary care has shown promising results as an alternative for improving the quality of health services and reducing costs.

  10. Projected Hg dietary exposure of 3 bird species nesting on a contaminated floodplain (South River, Virginia, USA).

    PubMed

    Wang, Jincheng; Newman, Michael C

    2013-04-01

    Dietary Hg exposure was modeled for Carolina wren (Thryothorus ludovicianus), Eastern song sparrow (Melospiza melodia), and Eastern screech owl (Otus asio) nesting on the contaminated South River floodplain (Virginia, USA). Parameterization of Monte-Carlo models required formal expert elicitation to define bird body weight and feeding ecology characteristics because specific information was either unavailable in the published literature or too difficult to collect reliably by field survey. Mercury concentrations and weights for candidate food items were obtained directly by field survey. Simulations predicted the probability that an adult bird during breeding season would ingest specific amounts of Hg during daily foraging and the probability that the average Hg ingestion rate for the breeding season of an adult bird would exceed published rates reported to cause harm to other birds (>100 ng total Hg/g body weight per day). Despite the extensive floodplain contamination, the probabilities that these species' average ingestion rates exceeded the threshold value were all <0.01. Sensitivity analysis indicated that overall food ingestion rate was the most important factor determining projected Hg ingestion rates. Expert elicitation was useful in providing sufficiently reliable information for Monte-Carlo simulation. Copyright © 2013 SETAC.
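
    A minimal sketch of the kind of dietary-exposure Monte Carlo described above: body weight, food intake, and dietary Hg concentration are drawn from assumed distributions and the daily ingestion rate is compared with the published 100 ng THg/g body weight per day threshold. The distributions and the allometric intake relation are illustrative placeholders, not the elicited or surveyed values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical parameter distributions (the paper's values came from expert
# elicitation and field surveys; these numbers are placeholders):
body_weight_g = rng.normal(21.0, 2.0, n)                            # adult body weight, g
food_intake_g = 0.6 * body_weight_g ** 0.7                          # daily food intake, allometric guess
hg_conc_ng_g = rng.lognormal(mean=np.log(2.5), sigma=0.8, size=n)   # diet THg, ng/g

# Daily Hg ingestion normalized to body weight (ng THg / g bw / day)
ingestion = food_intake_g * hg_conc_ng_g / body_weight_g

threshold = 100.0  # published harmful ingestion rate, ng THg/g bw/day
print(f"median ingestion = {np.median(ingestion):.1f} ng/g bw/day")
print(f"P(ingestion > {threshold:.0f}) = {np.mean(ingestion > threshold):.4f}")
```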

  11. Uncertainty Quantification of Medium-Term Heat Storage From Short-Term Geophysical Experiments Using Bayesian Evidential Learning

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Klepikova, Maria; Dassargues, Alain; Caers, Jef

    2018-04-01

    In theory, aquifer thermal energy storage (ATES) systems can recover in winter the heat stored in the aquifer during summer to increase the energy efficiency of the system. In practice, the energy efficiency is often lower than expected from simulations due to spatial heterogeneity of hydraulic properties or non-favorable hydrogeological conditions. A proper design of ATES systems should therefore consider the uncertainty of the prediction related to those parameters. We use a novel framework called Bayesian Evidential Learning (BEL) to estimate the heat storage capacity of an alluvial aquifer using a heat tracing experiment. BEL is based on two main stages: pre- and postfield data acquisition. Before data acquisition, Monte Carlo simulations and global sensitivity analysis are used to assess the information content of the data to reduce the uncertainty of the prediction. After data acquisition, prior falsification and machine learning based on the same Monte Carlo are used to directly assess uncertainty on key prediction variables from observations. The result is a full quantification of the posterior distribution of the prediction conditioned to observed data, without any explicit full model inversion. We demonstrate the methodology in field conditions and validate the framework using independent measurements.

  12. Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.

    PubMed

    Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A

    2011-01-01

    Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
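
    The abstract describes a stopping rule built from several convergence criteria. The sketch below shows one such criterion in isolation: keep adding batches of Monte Carlo runs until the 95% confidence-interval half-width of the output mean falls below a chosen relative tolerance. The `model` function and the single criterion are stand-ins; the paper combines multiple criteria and uses Latin hypercube sampling of a full treatment-plant model.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    """Stand-in for an expensive wastewater-treatment simulation:
    returns one scalar output (e.g. an effluent concentration) per parameter vector x."""
    return 2.0 + 0.8 * x[0] - 0.3 * x[1] + 0.1 * x[0] * x[1] + rng.normal(0, 0.05)

def mc_with_stopping(batch=50, rel_tol=0.01, max_runs=5000):
    outputs = []
    while len(outputs) < max_runs:
        for _ in range(batch):                  # add one batch of random draws
            x = rng.uniform(-1, 1, size=2)
            outputs.append(model(x))
        y = np.asarray(outputs)
        # Stopping criterion: 95% CI half-width of the mean, relative to the mean
        half_width = 1.96 * y.std(ddof=1) / np.sqrt(len(y))
        if abs(half_width / y.mean()) < rel_tol:
            break
    return len(outputs), y.mean(), half_width

n_runs, mean, hw = mc_with_stopping()
print(f"stopped after {n_runs} runs: mean = {mean:.3f} ± {hw:.3f} (95% CI)")
```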

  13. Value for money? Array genomic hybridization for diagnostic testing for genetic causes of intellectual disability.

    PubMed

    Regier, Dean A; Friedman, Jan M; Marra, Carlo A

    2010-05-14

    Array genomic hybridization (AGH) provides a higher detection rate than does conventional cytogenetic testing when searching for chromosomal imbalance causing intellectual disability (ID). AGH is more costly than conventional cytogenetic testing, and it remains unclear whether AGH provides good value for money. Decision analytic modeling was used to evaluate the trade-off between costs, clinical effectiveness, and benefit of an AGH testing strategy compared to a conventional testing strategy. The trade-off between cost and effectiveness was expressed via the incremental cost-effectiveness ratio. Probabilistic sensitivity analysis was performed via Monte Carlo simulation. The baseline AGH testing strategy led to an average cost increase of $217 (95% CI $172-$261) per patient and an additional 8.2 diagnoses in every 100 tested (0.082; 95% CI 0.044-0.119). The mean incremental cost per additional diagnosis was $2646 (95% CI $1619-$5296). Probabilistic sensitivity analysis demonstrated that there was a 95% probability that AGH would be cost effective if decision makers were willing to pay $4550 for an additional diagnosis. Our model suggests that using AGH instead of conventional karyotyping for most ID patients provides good value for money. Deterministic sensitivity analysis found that employing AGH after first-line cytogenetic testing had proven uninformative did not provide good value for money when compared to using AGH as first-line testing. Copyright (c) 2010 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
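
    A probabilistic sensitivity analysis of this kind is straightforward to reproduce in outline: sample incremental cost and incremental diagnostic yield, form the ICER, and trace the probability of cost-effectiveness as a function of willingness to pay. The normal distributions below are chosen only to roughly match the reported point estimates and confidence intervals; they are not the study's decision model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical distributions roughly matching the reported point estimates
delta_cost = rng.normal(217.0, 23.0, n)    # incremental cost per patient, $
delta_diag = rng.normal(0.082, 0.019, n)   # additional diagnoses per patient

icer = delta_cost / np.clip(delta_diag, 1e-3, None)   # $ per additional diagnosis
print(f"median ICER = ${np.median(icer):,.0f} per additional diagnosis")

# Cost-effectiveness acceptability: probability that the new test is cost-effective
# as a function of willingness to pay (WTP) for one additional diagnosis
for wtp in (1000, 2500, 4550, 10000):
    net_benefit = wtp * delta_diag - delta_cost
    print(f"WTP ${wtp:>6,}: P(cost-effective) = {np.mean(net_benefit > 0):.2f}")
```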

  14. Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.

    PubMed

    Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A

    2016-05-01

    A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain were simulated using the QUAL2Kw model. Second, uncertainty analysis and sensitivity analysis based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations with the condition not to violate the standards for using mixed water in irrigation. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw to simulate water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorus are highly sensitive to point source flow and quality. Additionally, the optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin requirements for fresh irrigation water in the respective seasons. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Lecture Notes on Criticality Safety Validation Using MCNP & Whisper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    Training classes for nuclear criticality safety, MCNP documentation. The need for, and problems surrounding, validation of computer codes and data are considered first. Then some background for MCNP & Whisper is given--best practices for Monte Carlo criticality calculations, neutron spectra, S(α,β) thermal neutron scattering data, nuclear data sensitivities, covariance data, and correlation coefficients. Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies with the Monte Carlo radiation transport package MCNP. Whisper's methodology (benchmark selection – ck's, weights; extreme value theory – bias, bias uncertainty; MOS for nuclear data uncertainty – GLLS) and usage are discussed.

  16. An examination of the sensitivity and systematic error of the NASA GEMS Bragg Reflection Polarimeter using Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Allured, Ryan; Okajima, Takashi; Soufli, Regina; Fernández-Perea, Mónica; Daly, Ryan O.; Marlowe, Hannah; Griffiths, Scott T.; Pivovaroff, Michael J.; Kaaret, Philip

    2012-10-01

    The Bragg Reflection Polarimeter (BRP) on the NASA Gravity and Extreme Magnetism Small Explorer Mission is designed to measure the linear polarization of astrophysical sources in a narrow band centered at about 500 eV. X-rays are focused by Wolter I mirrors through a 4.5 m focal length to a time projection chamber (TPC) polarimeter, sensitive between 2 and 10 keV. In this optical path lies the BRP multilayer reflector at a nominal 45 degree incidence angle. The reflector reflects soft X-rays to the BRP detector and transmits hard X-rays to the TPC. As the spacecraft rotates about the optical axis, the reflected count rate will vary depending on the polarization of the incident beam. However, false polarization signals may be produced due to misalignments and spacecraft pointing wobble. Monte-Carlo simulations have been carried out, showing that the false modulation is below the statistical uncertainties for the expected focal plane offsets of < 2 mm.

  17. Monte Carlo study for physiological interference reduction in near-infrared spectroscopy based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Sun, JinWei; Rolfe, Peter

    2010-12-01

    Near-infrared spectroscopy (NIRS) can be used as the basis of non-invasive neuroimaging that may allow the measurement of haemodynamic changes in the human brain evoked by applied stimuli. Since this technique is very sensitive, physiological interference arising from the cardiac cycle and breathing can significantly affect the signal quality. Such interference is difficult to remove by conventional techniques because it occurs not only in the extracerebral layer but also in the brain tissue itself. Previous work on this problem employing temporal filtering, spatial filtering, and adaptive filtering has exhibited good performance for recovering brain activity data in evoked response studies. However, in this study, we present a time-frequency adaptive method for physiological interference reduction based on the combination of empirical mode decomposition (EMD) and Hilbert spectral analysis (HSA). Monte Carlo simulations based on a five-layered slab model of a human adult head were implemented to evaluate our methodology. We applied an EMD algorithm to decompose the NIRS time series derived from Monte Carlo simulations into a series of intrinsic mode functions (IMFs). In order to identify the IMFs associated with symmetric interference, the extracted components were then Hilbert transformed, from which the instantaneous frequencies could be acquired. By reconstructing the NIRS signal from properly selected IMFs, we determined that the physiological interference is effectively filtered out, leaving the evoked brain response with a higher signal-to-noise ratio (SNR). The results obtained demonstrated that EMD, combined with HSA, can effectively separate, identify and remove the contamination from the evoked brain response obtained with NIRS using a simple single source-detector pair.

  18. Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.

    PubMed

    Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang

    2018-02-24

    This paper proposes a nonlinear correlation-based wavelet scale selection technology to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during the force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on the Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that the scale combination VI is suitable for estimating force from the extensors and the combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experiment results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force derived by the proposed method is better than that obtained by the former methods.

  19. A seismic hazard uncertainty analysis for the New Madrid seismic zone

    USGS Publications Warehouse

    Cramer, C.H.

    2001-01-01

    A review of the scientific issues relevant to characterizing earthquake sources in the New Madrid seismic zone has led to the development of a logic tree of possible alternative parameters. A variability analysis, using Monte Carlo sampling of this consensus logic tree, is presented and discussed. The analysis shows that for 2%-exceedance-in-50-year hazard, the best-estimate seismic hazard map is similar to previously published seismic hazard maps for the area. For peak ground acceleration (PGA) and spectral acceleration at 0.2 and 1.0 s (0.2 and 1.0 s Sa), the coefficient of variation (COV) representing the knowledge-based uncertainty in seismic hazard can exceed 0.6 over the New Madrid seismic zone and diminishes to about 0.1 away from areas of seismic activity. Sensitivity analyses show that the largest contributor to PGA, 0.2 and 1.0 s Sa seismic hazard variability is the uncertainty in the location of future 1811-1812 New Madrid-sized earthquakes. This is followed by the variability due to the choice of ground motion attenuation relation, the magnitude for the 1811-1812 New Madrid earthquakes, and the recurrence interval for M>6.5 events. Seismic hazard is not very sensitive to the variability in seismogenic width and length. Published by Elsevier Science B.V.
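
    A logic-tree variability analysis of the sort described above can be sketched as Monte Carlo sampling of weighted discrete branches followed by computation of the coefficient of variation of the resulting hazard. The branch values, weights, and the closed-form "hazard" proxy below are toy placeholders, not the New Madrid consensus logic tree or a real ground-motion relation.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5_000

# Toy logic tree: each epistemic branch has discrete alternatives with weights
magnitude = rng.choice([7.3, 7.7, 8.0], p=[0.3, 0.5, 0.2], size=n)            # characteristic Mw
recurrence = rng.choice([500.0, 750.0, 1000.0], p=[0.25, 0.5, 0.25], size=n)  # recurrence, yr
gmpe_scale = rng.choice([0.8, 1.0, 1.25], p=[0.3, 0.4, 0.3], size=n)          # attenuation choice

# Simplified hazard proxy: ground motion scaled by magnitude, rate, and GMPE choice
rate = 1.0 / recurrence
pga = gmpe_scale * 10 ** (0.3 * magnitude - 2.0) * (rate / 0.002) ** 0.4

cov = pga.std(ddof=1) / pga.mean()
print(f"mean PGA proxy = {pga.mean():.3f} g, COV = {cov:.2f}")
```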

  20. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. Complexity of the physics of the model and uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than using the prior information for the input data, which means that the variation of the uncertain parameter will be decreased and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC

  1. Dose calculation accuracy of the Monte Carlo algorithm for CyberKnife compared with other commercially available dose calculation algorithms.

    PubMed

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  2. Cost-effectiveness analysis of 30-month vs 12-month dual antiplatelet therapy with clopidogrel and aspirin after drug-eluting stents in patients with acute coronary syndrome.

    PubMed

    Jiang, Minghuan; You, Joyce H S

    2017-10-01

    Continuation of dual antiplatelet therapy (DAPT) beyond 1 year reduces late stent thrombosis and ischemic events after drug-eluting stents (DES) but increases risk of bleeding. We hypothesized that extending DAPT from 12 months to 30 months in patients with acute coronary syndrome (ACS) after DES is cost-effective. A lifelong decision-analytic model was designed to simulate 2 antiplatelet strategies in event-free ACS patients who had completed 12-month DAPT after DES: aspirin monotherapy (75-162 mg daily) and continuation of DAPT (clopidogrel 75 mg daily plus aspirin 75-162 mg daily) for 18 months. Clinical event rates, direct medical costs, and quality-adjusted life-years (QALYs) gained were the primary outcomes from the US healthcare provider perspective. Base-case results showed DAPT continuation gained higher QALYs (8.1769 vs 8.1582 QALYs) at lower cost (USD 42,982 vs USD 44,063). One-way sensitivity analysis found that base-case QALYs were sensitive to odds ratio (OR) of cardiovascular death with DAPT continuation and base-case cost was sensitive to OR of nonfatal stroke with DAPT continuation. DAPT continuation remained cost-effective when the ORs of nonfatal stroke and cardiovascular death were below 1.241 and 1.188, respectively. In probabilistic sensitivity analysis, DAPT continuation was the preferred strategy in 74.75% of 10,000 Monte Carlo simulations at a willingness-to-pay threshold of USD 50,000/QALY. Continuation of DAPT appears to be cost-effective in ACS patients who were event-free for 12-month DAPT after DES. The cost-effectiveness of DAPT for 30 months was highly subject to the OR of nonfatal stroke and OR of death with DAPT continuation. © 2017 Wiley Periodicals, Inc.

  3. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
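
    The variance-based global sensitivity analysis mentioned above attributes a share of the output variance to each parameter. A crude way to estimate first-order indices is to bin each parameter and compute the variance of the conditional means, as sketched below with a toy stand-in for the carbon model; the parameter names are borrowed from the abstract but the functional form is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_bins = 50_000, 40

def model(p):
    """Toy stand-in for the ecosystem carbon model: average NEE as a nonlinear
    function of a few normalized parameters (not the real model)."""
    leafall, nue, br_mr, rg_frac = p.T
    return -2.0 * leafall + 1.0 * nue - 0.8 * br_mr * nue + 0.3 * rg_frac ** 2

names = ["LEAFALL", "NUE", "BR_MR", "RG_FRAC"]
x = rng.uniform(0, 1, size=(n, len(names)))
y = model(x)
var_y = y.var()

# First-order sensitivity via binned conditional expectations:
# S_i ~ Var_{x_i}( E[Y | x_i] ) / Var(Y)
for i, name in enumerate(names):
    edges = np.quantile(x[:, i], np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x[:, i], edges) - 1, 0, n_bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(n_bins)])
    print(f"S_{name:<8s} ~ {cond_mean.var() / var_y:.2f}")
```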

  4. Cancer Risk Assessment of Polycyclic Aromatic Hydrocarbons in the Soils and Sediments of India: A Meta-Analysis.

    PubMed

    Tarafdar, Abhrajyoti; Sinha, Alok

    2017-10-01

    A carcinogenic risk assessment of polycyclic aromatic hydrocarbons in soils and sediments was conducted using the probabilistic approach from a national perspective. Published monitoring data of polycyclic aromatic hydrocarbons present in soils and sediments at different study points across India were collected and converted to their corresponding BaP equivalent concentrations. These BaP equivalent concentrations were used to evaluate comprehensive cancer risk for two different age groups. Monte Carlo simulation and sensitivity analysis were applied to quantify uncertainties of risk estimation. The analysis denotes 90% cancer risk value of 1.770E-5 for children and 3.156E-5 for adults at heavily polluted site soils. Overall carcinogenic risks of polycyclic aromatic hydrocarbons in soils of India were mostly in acceptance limits. However, the food ingestion exposure route for sediments leads them to a highly risked zone. The 90% risk values from sediments are 7.863E-05 for children and 3.999E-04 for adults. Sensitivity analysis reveals exposure duration and relative skin adherence factor for soil as the most influential parameter of the assessment, followed by BaP equivalent concentration of polycyclic aromatic hydrocarbons. For sediments, biota to sediment accumulation factor of fish in terms of BaP is most sensitive on the total outcome, followed by BaP equivalent and exposure duration. Individual exposure route analysis showed dermal contact for soils and food ingestion for sediments as the main exposure pathway. Some specific locations such as surrounding areas of Bhavnagar, Raniganj, Sunderban, Raipur, and Delhi demand potential strategies of carcinogenic risk management and reduction. The current study is probably the first attempt to provide information on the carcinogenic risk of polycyclic aromatic hydrocarbons in soil and sediments across India.

  5. Cancer Risk Assessment of Polycyclic Aromatic Hydrocarbons in the Soils and Sediments of India: A Meta-Analysis

    NASA Astrophysics Data System (ADS)

    Tarafdar, Abhrajyoti; Sinha, Alok

    2017-10-01

    A carcinogenic risk assessment of polycyclic aromatic hydrocarbons in soils and sediments was conducted using the probabilistic approach from a national perspective. Published monitoring data of polycyclic aromatic hydrocarbons present in soils and sediments at different study points across India were collected and converted to their corresponding BaP equivalent concentrations. These BaP equivalent concentrations were used to evaluate comprehensive cancer risk for two different age groups. Monte Carlo simulation and sensitivity analysis were applied to quantify uncertainties of risk estimation. The analysis denotes 90% cancer risk value of 1.770E-5 for children and 3.156E-5 for adults at heavily polluted site soils. Overall carcinogenic risks of polycyclic aromatic hydrocarbons in soils of India were mostly in acceptance limits. However, the food ingestion exposure route for sediments leads them to a highly risked zone. The 90% risk values from sediments are 7.863E-05 for children and 3.999E-04 for adults. Sensitivity analysis reveals exposure duration and relative skin adherence factor for soil as the most influential parameter of the assessment, followed by BaP equivalent concentration of polycyclic aromatic hydrocarbons. For sediments, biota to sediment accumulation factor of fish in terms of BaP is most sensitive on the total outcome, followed by BaP equivalent and exposure duration. Individual exposure route analysis showed dermal contact for soils and food ingestion for sediments as the main exposure pathway. Some specific locations such as surrounding areas of Bhavnagar, Raniganj, Sunderban, Raipur, and Delhi demand potential strategies of carcinogenic risk management and reduction. The current study is probably the first attempt to provide information on the carcinogenic risk of polycyclic aromatic hydrocarbons in soil and sediments across India.
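
    The probabilistic risk calculation described in this and the preceding record follows the standard incremental-lifetime-cancer-risk recipe: BaP-equivalent concentration times intake rate, exposure frequency and duration, divided by body weight and averaging time, times a slope factor. The sketch below applies that recipe to a single soil-ingestion pathway with illustrative input distributions; it is not a reconstruction of the paper's pooled multi-route assessment.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Illustrative input distributions for an incidental soil-ingestion pathway
bap_eq = rng.lognormal(np.log(0.5), 1.0, n)   # BaP-equivalent concentration, mg/kg soil
ir_soil = rng.triangular(50, 100, 200, n)     # soil ingestion rate, mg/day
ef = rng.uniform(180, 350, n)                 # exposure frequency, days/yr
ed = rng.triangular(6, 24, 30, n)             # exposure duration, yr
bw = rng.normal(60, 10, n)                    # body weight, kg
at = 70 * 365.0                               # averaging time, days
sf = 7.3                                      # commonly cited oral slope factor for BaP, (mg/kg-day)^-1

# Lifetime average daily dose (mg/kg-day) and incremental lifetime cancer risk
ladd = bap_eq * 1e-6 * ir_soil * ef * ed / (bw * at)
ilcr = ladd * sf

print(f"median ILCR          = {np.median(ilcr):.2e}")
print(f"90th percentile ILCR = {np.percentile(ilcr, 90):.2e}")
print(f"P(ILCR > 1e-6)       = {np.mean(ilcr > 1e-6):.2f}")
```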

  6. Risk Assessment and Prediction of Flyrock Distance by Combined Multiple Regression Analysis and Monte Carlo Simulation of Quarry Blasting

    NASA Astrophysics Data System (ADS)

    Armaghani, Danial Jahed; Mahdiyar, Amir; Hasanipanah, Mahdi; Faradonbeh, Roohollah Shirani; Khandelwal, Manoj; Amnieh, Hassan Bakhshandeh

    2016-09-01

    Flyrock is considered one of the main causes of human injury, fatalities, and structural damage among all undesirable environmental impacts of blasting. Therefore, proper prediction/simulation of flyrock is essential, especially in order to determine the blast safety area. If proper control measures are taken, then the flyrock distance can be controlled, and, in return, the risk of damage can be reduced or eliminated. The first objective of this study was to develop a predictive model for flyrock estimation based on multiple regression (MR) analyses, and after that, using the developed MR model, the flyrock phenomenon was simulated by the Monte Carlo (MC) approach. In order to achieve the objectives of this study, 62 blasting operations were investigated in the Ulu Tiram quarry, Malaysia, and some controllable and uncontrollable factors were carefully recorded/calculated. The obtained results of MC modeling indicated that this approach is capable of simulating flyrock ranges with a good level of accuracy. The mean of simulated flyrock by MC was obtained as 236.3 m, while this value was achieved as 238.6 m for the measured one. Furthermore, a sensitivity analysis was also conducted to investigate the effects of model inputs on the output of the system. The analysis demonstrated that powder factor is the most influential parameter on flyrock among all model inputs. It is noticeable that the proposed MR and MC models should be utilized only in the studied area and their direct use in other conditions is not recommended.
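
    The workflow in this record, a fitted multiple-regression model propagated through Monte Carlo sampling of its inputs, can be sketched as follows. The regression coefficients and input distributions below are hypothetical stand-ins for the Ulu Tiram fit, chosen only to produce flyrock distances of a plausible magnitude.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000

# Hypothetical regression model (the paper fits its MR model to 62 monitored blasts)
def flyrock_mr(burden, spacing, stemming, powder_factor, charge_per_delay):
    return (120.0 - 15.0 * burden - 8.0 * spacing - 12.0 * stemming
            + 180.0 * powder_factor + 0.9 * charge_per_delay)

burden = rng.normal(2.0, 0.2, n)          # m
spacing = rng.normal(2.5, 0.25, n)        # m
stemming = rng.normal(1.8, 0.2, n)        # m
powder_factor = rng.normal(0.6, 0.08, n)  # kg/m^3
charge = rng.normal(90.0, 15.0, n)        # kg per delay

fly = flyrock_mr(burden, spacing, stemming, powder_factor, charge)
print(f"mean simulated flyrock = {fly.mean():.1f} m")
print(f"95th percentile        = {np.percentile(fly, 95):.1f} m  (blast safety planning)")

# Sensitivity: correlation of each input with the simulated flyrock distance
for name, x in [("burden", burden), ("spacing", spacing), ("stemming", stemming),
                ("powder factor", powder_factor), ("charge/delay", charge)]:
    print(f"corr({name:>14s}) = {np.corrcoef(x, fly)[0, 1]:+.2f}")
```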

  7. Performance of a proportion-based approach to meta-analytic moderator estimation: results from Monte Carlo simulations.

    PubMed

    Aguirre-Urreta, Miguel I; Ellis, Michael E; Sun, Wenying

    2012-03-01

    This research investigates the performance of a proportion-based approach to meta-analytic moderator estimation through a series of Monte Carlo simulations. This approach is most useful when the moderating potential of a categorical variable has not been recognized in primary research and thus heterogeneous groups have been pooled together as a single sample. Alternative scenarios representing different distributions of group proportions are examined along with varying numbers of studies, subjects per study, and correlation combinations. Our results suggest that the approach is largely unbiased in its estimation of the magnitude of between-group differences and performs well with regard to statistical power and type I error. In particular, the average percentage bias of the estimated correlation for the reference group is positive and largely negligible, in the 0.5-1.8% range; the average percentage bias of the difference between correlations is also minimal, in the -0.1-1.2% range. Further analysis also suggests both biases decrease as the magnitude of the underlying difference increases, as the number of subjects in each simulated primary study increases, and as the number of simulated studies in each meta-analysis increases. The bias was most evident when the number of subjects and the number of studies were the smallest (80 and 36, respectively). A sensitivity analysis that examines its performance in scenarios down to 12 studies and 40 primary subjects is also included. This research is the first that thoroughly examines the adequacy of the proportion-based approach. Copyright © 2012 John Wiley & Sons, Ltd.

  8. Comprehensive Monte-Carlo simulator for optimization of imaging parameters for high sensitivity detection of skin cancer at the THz

    NASA Astrophysics Data System (ADS)

    Ney, Michael; Abdulhalim, Ibrahim

    2016-03-01

    Skin cancer detection at its early stages has been the focus of a large number of experimental and theoretical studies during the past decades. Among these studies two prominent approaches presenting high potential are reflectometric sensing at the THz wavelengths region and polarimetric imaging techniques in the visible wavelengths. While THz radiation contrast agent and source of sensitivity to cancer related tissue alterations was considered to be mainly the elevated water content in the cancerous tissue, the polarimetric approach has been verified to enable cancerous tissue differentiation based on cancer induced structural alterations to the tissue. Combining THz with the polarimetric approach, which is considered in this study, is examined in order to enable higher detection sensitivity than previously pure reflectometric THz measurements. For this, a comprehensive MC simulation of radiative transfer in a complex skin tissue model fitted for the THz domain that considers the skin's stratified structure, tissue material optical dispersion modeling, surface roughness, scatterers, and substructure organelles has been developed. Additionally, a narrow beam Mueller matrix differential analysis technique is suggested for assessing skin cancer induced changes in the polarimetric image, enabling the tissue model and MC simulation to be utilized for determining the imaging parameters resulting in maximal detection sensitivity.

  9. SU-E-T-391: Assessment and Elimination of the Angular Dependence of the Response of the NanoDot OSLD System in MV Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmann, J; University of Sydney, Sydney; RMIT University, Melbourne

    2014-06-01

    Purpose: Assess the angular dependence of the nanoDot OSLD system in MV X-ray beams at depths and mitigate this dependence for measurements in phantoms. Methods: Measurements for 6 MV photons at 3 cm and 10 cm depth and Monte Carlo simulations were performed. Two special holders were designed which allow a nanoDot dosimeter to be rotated around the center of its sensitive volume (5 mm diameter disk). The first holder positions the dosimeter disk perpendicular to the beam (en-face). It then rotates until the disk is parallel with the beam (edge on). This is referred to as Setup 1. The second holder positions the disk parallel to the beam (edge on) for all angles (Setup 2). Monte Carlo simulations using GEANT4 considered detector and housing in detail based on microCT data. Results: An average drop in response by 1.4±0.7% (measurement) and 2.1±0.3% (Monte Carlo) for the 90° orientation compared to 0° was found for Setup 1. Monte Carlo simulations also showed a strong dependence of the effect on the composition of the sensitive layer. Assuming 100% active material (Al2O3) results in a 7% drop in response for 90° compared to 0°. Assuming the layer to be completely water results in a flat response (within simulation uncertainty of about 1%). For Setup 2, measurements and Monte Carlo simulations found the angular dependence of the dosimeter to be below 1% and within the measurement uncertainty. Conclusion: The nanoDot dosimeter system exhibits a small angular dependence of approximately 2%. Changing the orientation of the dosimeter so that a coplanar beam arrangement always hits the detector material edge on reduces the angular dependence to within the measurement uncertainty of about 1%. This makes the dosimeter more attractive for phantom based clinical measurements and audits with multiple coplanar beams. The Australian Clinical Dosimetry Service is a joint initiative between the Australian Department of Health and the Australian Radiation Protection and Nuclear Safety Agency.

  10. Levofloxacin Penetration into Epithelial Lining Fluid as Determined by Population Pharmacokinetic Modeling and Monte Carlo Simulation

    PubMed Central

    Drusano, G. L.; Preston, S. L.; Gotfried, M. H.; Danziger, L. H.; Rodvold, K. A.

    2002-01-01

    Levofloxacin was administered orally to steady state to volunteers randomly in doses of 500 and 750 mg. Plasma and epithelial lining fluid (ELF) samples were obtained at 4, 12, and 24 h after the final dose. All data were comodeled in a population pharmacokinetic analysis employing BigNPEM. Penetration was evaluated from the population mean parameter vector values and from the results of a 1,000-subject Monte Carlo simulation. Evaluation from the population mean values demonstrated a penetration ratio (ELF/plasma) of 1.16. The Monte Carlo simulation provided a measure of dispersion, demonstrating a mean ratio of 3.18, with a median of 1.43 and a 95% confidence interval of 0.14 to 19.1. Population analysis with Monte Carlo simulation provides the best and least-biased estimate of penetration. It also demonstrates clearly that we can expect differences in penetration between patients. This analysis did not deal with inflammation, as it was performed in volunteers. The influence of lung pathology on penetration needs to be examined. PMID:11796385
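
    The dispersion reported above, with a mean penetration ratio well above the median, is what one expects when a ratio of lognormally distributed exposures is simulated. A minimal sketch, with assumed AUC distributions rather than the fitted population pharmacokinetic parameters, is shown below.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 1_000

# Hypothetical between-subject distributions of plasma and ELF exposure; the study
# instead samples from the fitted population PK parameter distributions.
auc_plasma = rng.lognormal(np.log(72.0), 0.25, n)   # mg*h/L
auc_elf = rng.lognormal(np.log(84.0), 0.60, n)      # mg*h/L, more variable

ratio = auc_elf / auc_plasma                        # penetration ratio (ELF/plasma)
print(f"mean   = {ratio.mean():.2f}")
print(f"median = {np.median(ratio):.2f}")
print(f"95% interval = {np.percentile(ratio, 2.5):.2f} - {np.percentile(ratio, 97.5):.2f}")
```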

  11. Wear-Out Sensitivity Analysis Project Abstract

    NASA Technical Reports Server (NTRS)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal of this was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously. The goal was also to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo Simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
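
    A minimal sketch of the wear-out study described above: model each ORU life as Weibull, where a shape parameter above 1 represents wear-out, and estimate by Monte Carlo the probability that the number of failures over the mission does not exceed the available spares. Fleet size, spares, scale parameter, and the no-repair simplification are all placeholders, not ISS data.

```python
import numpy as np

rng = np.random.default_rng(8)
n_trials = 10_000
fleet_size = 12          # units of one ORU type on orbit (assumed)
spares = 3               # spares available (assumed)
mission_years = 10.0

def prob_sufficiency(shape, scale_years):
    """P(failures over the mission <= spares) for a fleet of identical units
    with Weibull(shape, scale) life; shape > 1 models wear-out."""
    shortfalls = 0
    for _ in range(n_trials):
        lives = scale_years * rng.weibull(shape, size=fleet_size)
        failures = np.sum(lives < mission_years)   # units needing replacement
        if failures > spares:
            shortfalls += 1
    return 1.0 - shortfalls / n_trials

# Vary the wear-out characteristic (Weibull shape) from random failures (1.0)
# toward strong wear-out, holding the scale parameter fixed.
for shape in (1.0, 1.5, 2.0, 3.0, 5.0):
    print(f"shape = {shape:3.1f}  P(sufficiency) = {prob_sufficiency(shape, 25.0):.3f}")
```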

  12. SENSITIVITY OF THE NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION MULTILAYER MODEL TO INSTRUMENT ERROR AND PARAMETERIZATION UNCERTAINTY

    EPA Science Inventory

    The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...

  13. Nonintravenous rescue medications for pediatric status epilepticus: A cost-effectiveness analysis.

    PubMed

    Sánchez Fernández, Iván; Gaínza-Lein, Marina; Loddenkemper, Tobias

    2017-08-01

    To quantify the cost-effectiveness of rescue medications for pediatric status epilepticus: rectal diazepam, nasal midazolam, buccal midazolam, intramuscular midazolam, and nasal lorazepam. Decision analysis model populated with effectiveness data from the literature and cost data from publicly available market prices. The primary outcome was cost per seizure stopped ($/SS). One-way sensitivity analyses and second-order Monte Carlo simulations evaluated the robustness of the results across wide variations of the input parameters. The most cost-effective rescue medication was buccal midazolam (incremental cost-effectiveness ratio [ICER]: $13.16/SS) followed by nasal midazolam (ICER: $38.19/SS). Nasal lorazepam (ICER: -$3.8/SS), intramuscular midazolam (ICER: -$64/SS), and rectal diazepam (ICER: -$2,246.21/SS) are never more cost-effective than the other options at any willingness to pay. One-way sensitivity analysis showed the following: (1) at its current effectiveness, rectal diazepam would become the most cost-effective option only if its cost was $6 or less, and (2) at its current cost, rectal diazepam would become the most cost-effective option only if effectiveness was higher than 0.89 (and only with very high willingness to pay of $2,859/SS to $31,447/SS). Second-order Monte Carlo simulations showed the following: (1) nasal midazolam and intramuscular midazolam were the more effective options; (2) the more cost-effective option was buccal midazolam for a willingness to pay from $14/SS to $41/SS and nasal midazolam for a willingness to pay above $41/SS; (3) cost-effectiveness overlapped for buccal midazolam, nasal lorazepam, intramuscular midazolam, and nasal midazolam; and (4) rectal diazepam was not cost-effective at any willingness to pay, and this conclusion remained extremely robust to wide variations of the input parameters. For pediatric status epilepticus, buccal midazolam and nasal midazolam are the most cost-effective nonintravenous rescue medications in the United States. Rectal diazepam is not a cost-effective alternative, and this conclusion remains extremely robust to wide variations of the input parameters. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  14. SU-E-T-292: Sensitivity of Fractionated Lung IMPT Treatments to Setup Uncertainties and Motion Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowdell, S; Grassberger, C; Paganetti, H

    2014-06-01

    Purpose: Evaluate the sensitivity of intensity-modulated proton therapy (IMPT) lung treatments to systematic and random setup uncertainties combined with motion effects. Methods: Treatment plans with single-field homogeneity restricted to ±20% (IMPT-20%) were compared to plans with no restriction (IMPT-full). 4D Monte Carlo simulations were performed for 10 lung patients using the patient CT geometry with either ±5mm systematic or random setup uncertainties applied over a 35 × 2.5Gy(RBE) fractionated treatment course. Intra-fraction, inter-field and inter-fraction motions were investigated. 50 fractionated treatments with systematic or random setup uncertainties applied to each fraction were generated for both IMPT delivery methods and three energy-dependent spot sizes (big spots - BS σ=18-9mm, intermediate spots - IS σ=11-5mm, small spots - SS σ=4-2mm). These results were compared to a Monte Carlo recalculation of the original treatment plan, with results presented as the difference in EUD (ΔEUD), V95 (ΔV95) and target homogeneity (ΔD1–D99) between the 4D simulations and the Monte Carlo calculation on the planning CT. Results: The standard deviations in the ΔEUD were 1.95±0.47(BS), 1.85±0.66(IS) and 1.31±0.35(SS) times higher in IMPT-full compared to IMPT-20% when ±5mm systematic setup uncertainties were applied. The ΔV95 variations were also 1.53±0.26(BS), 1.60±0.50(IS) and 1.38±0.38(SS) times higher for IMPT-full. For random setup uncertainties, the standard deviations of the ΔEUD from 50 simulated fractionated treatments were 1.94±0.90(BS), 2.13±1.08(IS) and 1.45±0.57(SS) times higher in IMPT-full compared to IMPT-20%. For all spot sizes considered, the ΔD1–D99 coincided within the uncertainty limits for the two IMPT delivery methods, with the mean value always higher for IMPT-full. Statistical analysis showed significant differences between the IMPT-full and IMPT-20% dose distributions for the majority of scenarios studied. Conclusion: Lung IMPT-full treatments are more sensitive to both systematic and random setup uncertainties compared to IMPT-20%. This work was supported by the NIH R01 CA111590.

  15. Monte Carlo simulation methodology for the reliability of aircraft structures under damage tolerance considerations

    NASA Astrophysics Data System (ADS)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprised of three elements to a parallel system comprised of up to six elements. These newly developed expressions will be used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent fastener sequential failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.
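
    The first stage of the methodology, the equal-load-sharing parallel system, is easy to reproduce by direct Monte Carlo and to check against the closed-form capacity expressions mentioned in the abstract. A sketch with an assumed lognormal strength distribution is given below; the lap-joint, crack-growth, and inspection-scheduling stages are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(9)
n_sims, n_elements = 200_000, 6
applied_load = 4.0        # total load, shared equally by the surviving elements

# Element strengths: i.i.d. lognormal (illustrative, not a specific joint or alloy)
strengths = rng.lognormal(mean=0.0, sigma=0.3, size=(n_sims, n_elements))

# Equal-load-sharing capacity: if the k strongest elements survive, each carries
# L/k, so the system capacity is max over k of k * s_(k), with
# s_(1) >= s_(2) >= ... the ordered strengths in each simulation.
s = np.sort(strengths, axis=1)[:, ::-1]
k = np.arange(1, n_elements + 1)
capacities = np.max(k * s, axis=1)

pf = np.mean(capacities < applied_load)
print(f"P(system failure | total load = {applied_load}) = {pf:.4e}")
```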

  16. Theoretical study on sensitivity enhancement in energy-deficit region of chemically amplified resists used for extreme ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Kozawa, Takahiro; Santillan, Julius Joseph; Itani, Toshiro

    2017-10-01

    The role of photons in lithography is to transfer the energy and information required for resist pattern formation. In the information-deficit region, a trade-off relationship is observed between line edge roughness (LER) and sensitivity. However, the sensitivity can be increased without increasing LER in the energy-deficit region. In this study, the sensitivity enhancement limit was investigated, assuming line-and-space patterns with a half-pitch of 11 nm. LER was calculated by a Monte Carlo method. It was unrealistic to increase the sensitivity twofold while keeping the line width roughness (LWR) within 10% critical dimension (CD), whereas the twofold sensitivity enhancement with 20% CD LWR was feasible. The requirements are roughly that the sensitization distance should be less than 2 nm and that the total sensitizer concentration should be higher than 0.3 nm⁻³.

  17. Energy dispersive X-ray fluorescence spectroscopy/Monte Carlo simulation approach for the non-destructive analysis of corrosion patina-bearing alloys in archaeological bronzes: The case of the bowl from the Fareleira 3 site (Vidigueira, South Portugal)

    NASA Astrophysics Data System (ADS)

    Bottaini, C.; Mirão, J.; Figuereido, M.; Candeias, A.; Brunetti, A.; Schiavon, N.

    2015-01-01

    Energy dispersive X-ray fluorescence (EDXRF) is a well-known technique for non-destructive and in situ analysis of archaeological artifacts both in terms of the qualitative and quantitative elemental composition because of its rapidity and non-destructiveness. In this study EDXRF and realistic Monte Carlo simulation using the X-ray Monte Carlo (XRMC) code package have been combined to characterize a Cu-based bowl from the Iron Age burial from Fareleira 3 (Southern Portugal). The artifact displays a multilayered structure made up of three distinct layers: a) alloy substrate; b) green oxidized corrosion patina; and c) brownish carbonate soil-derived crust. To assess the reliability of Monte Carlo simulation in reproducing the composition of the bulk metal of the objects without recurring to potentially damaging patina's and crust's removal, portable EDXRF analysis was performed on cleaned and patina/crust coated areas of the artifact. Patina has been characterized by micro X-ray Diffractometry (μXRD) and Back-Scattered Scanning Electron Microscopy + Energy Dispersive Spectroscopy (BSEM + EDS). Results indicate that the EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + crust surface coating is too thick, X-rays from the alloy substrate are not able to exit the sample.

  18. GLOBAL REFERENCE ATMOSPHERIC MODELS FOR AEROASSIST APPLICATIONS

    NASA Technical Reports Server (NTRS)

    Duvall, Aleta; Justus, C. G.; Keller, Vernon W.

    2005-01-01

    Aeroassist is a broad category of advanced transportation technology encompassing aerocapture, aerobraking, aeroentry, precision landing, hazard detection and avoidance, and aerogravity assist. The eight destinations in the Solar System with sufficient atmosphere to enable aeroassist technology are Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Saturn's moon Titan. Engineering-level atmospheric models for five of these targets - Earth, Mars, Titan, Neptune, and Venus - have been developed at NASA's Marshall Space Flight Center. These models are useful as tools in mission planning and systems analysis studies associated with aeroassist applications. The series of models is collectively named the Global Reference Atmospheric Model or GRAM series. An important capability of all the models in the GRAM series is their ability to simulate quasi-random perturbations for Monte Carlo analysis in developing guidance, navigation and control algorithms, for aerothermal design, and for other applications sensitive to atmospheric variability. Recent example applications are discussed.

  19. An Excel®-based visualization tool of 2-D soil gas concentration profiles in petroleum vapor intrusion

    PubMed Central

    Verginelli, Iason; Yao, Yijun; Suuberg, Eric M.

    2017-01-01

    In this study we present a petroleum vapor intrusion tool implemented in Microsoft® Excel® using Visual Basic for Applications (VBA) and integrated within a graphical interface. The latter helps users easily visualize two-dimensional soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, biodegradation reaction rate constant, soil characteristics and building features. This tool is based on a two-dimensional explicit analytical model that combines steady-state diffusion-dominated vapor transport in a homogeneous soil with a piecewise first-order aerobic biodegradation model, in which rate is limited by oxygen availability. As recommended in the recently released United States Environmental Protection Agency's final Petroleum Vapor Intrusion guidance, a sensitivity analysis and a simplified Monte Carlo uncertainty analysis are also included in the spreadsheet. PMID:28163564

  20. An Excel®-based visualization tool of 2-D soil gas concentration profiles in petroleum vapor intrusion.

    PubMed

    Verginelli, Iason; Yao, Yijun; Suuberg, Eric M

    2016-01-01

    In this study we present a petroleum vapor intrusion tool implemented in Microsoft ® Excel ® using Visual Basic for Applications (VBA) and integrated within a graphical interface. The latter helps users easily visualize two-dimensional soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, biodegradation reaction rate constant, soil characteristics and building features. This tool is based on a two-dimensional explicit analytical model that combines steady-state diffusion-dominated vapor transport in a homogeneous soil with a piecewise first-order aerobic biodegradation model, in which rate is limited by oxygen availability. As recommended in the recently released United States Environmental Protection Agency's final Petroleum Vapor Intrusion guidance, a sensitivity analysis and a simplified Monte Carlo uncertainty analysis are also included in the spreadsheet.

  1. Techniques for detecting effects of urban and rural land-use practices on stream-water chemistry in selected watersheds in Texas, Minnesota,and Illinois

    USGS Publications Warehouse

    Walker, J.F.

    1993-01-01

    Selected statistical techniques were applied to three urban watersheds in Texas and Minnesota and three rural watersheds in Illinois. For the urban watersheds, single- and paired-site data-collection strategies were considered. The paired-site strategy was much more effective than the single-site strategy for detecting changes. Analysis of storm load regression residuals demonstrated the potential utility of regressions for variability reduction. For the rural watersheds, none of the selected techniques were effective at identifying changes, primarily due to a small degree of management-practice implementation, potential errors introduced through the estimation of storm load, and small sample sizes. A Monte Carlo sensitivity analysis was used to determine the percent change in water chemistry that could be detected for each watershed. In most instances, the use of regressions improved the ability to detect changes.

  2. Research on the effects of geometrical and material uncertainties on the band gap of the undulated beam

    NASA Astrophysics Data System (ADS)

    Li, Yi; Xu, Yanlong

    2017-09-01

    Considering uncertain geometrical and material parameters, the lower and upper bounds of the band gap of an undulated beam with periodically arched shape are studied by Monte Carlo Simulation (MCS) and interval analysis based on the Taylor series. Given the random variations of the overall uncertain variables, scatter plots from the MCS are used to analyze the qualitative sensitivities of the band gap with respect to these uncertainties. We find that the influence of uncertainty of the geometrical parameter on the band gap of the undulated beam is stronger than that of the material parameter. This conclusion is also confirmed by the interval analysis based on the Taylor series. Our methodology can give a strategy to reduce the errors between the design and practical values of the band gaps by improving the accuracy of the specially selected uncertain design variables of the periodical structures.
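
    The comparison described above, first-order Taylor (interval) bounds versus Monte Carlo sampling inside the parameter intervals, can be illustrated with any smooth response function. The `bandgap_edge` function and the uncertainty levels below are toy assumptions, not the undulated-beam model.

```python
import numpy as np

rng = np.random.default_rng(11)

def bandgap_edge(E, rho):
    """Toy response standing in for a band-gap edge frequency as a function of
    Young's modulus E and density rho (not the real beam model)."""
    return 0.5 * np.sqrt(E / rho)

# Nominal values and interval half-widths of the uncertain parameters
E0, rho0 = 70e9, 2700.0
dE, drho = 0.05 * E0, 0.02 * rho0        # assumed geometry/material uncertainty levels

# First-order Taylor (interval) bounds: f0 +/- sum_i |df/dx_i| * dx_i
h = 1e-6
dfdE = (bandgap_edge(E0 * (1 + h), rho0) - bandgap_edge(E0 * (1 - h), rho0)) / (2 * h * E0)
dfdr = (bandgap_edge(E0, rho0 * (1 + h)) - bandgap_edge(E0, rho0 * (1 - h))) / (2 * h * rho0)
f0 = bandgap_edge(E0, rho0)
radius = abs(dfdE) * dE + abs(dfdr) * drho
print(f"Taylor interval bounds: [{f0 - radius:.1f}, {f0 + radius:.1f}] Hz")

# Monte Carlo bounds: sample the parameters uniformly inside their intervals
n = 100_000
E = rng.uniform(E0 - dE, E0 + dE, n)
rho = rng.uniform(rho0 - drho, rho0 + drho, n)
f = bandgap_edge(E, rho)
print(f"MCS bounds:             [{f.min():.1f}, {f.max():.1f}] Hz")
```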

  3. Cost-minimization analysis of phenytoin and fosphenytoin in the emergency department.

    PubMed

    Touchette, D R; Rhoney, D H

    2000-08-01

    To determine the value of fosphenytoin compared with phenytoin for treating patients admitted to an emergency department following a seizure. Cost-minimization analysis performed from a hospital perspective. Hospital emergency department. Two hundred fifty-six patients participating in a comparative clinical trial. Estimation of adverse event rates and resource use. In our base case, phenytoin was the preferred option, with an expected total treatment cost of $5.39 compared with $110.14 for fosphenytoin. One-way sensitivity analyses showed that the frequency and cost of treating purple glove syndrome (PGS) possibly could affect the decision. Monte Carlo simulation showed phenytoin to be the preferred option 97.3% of the time. When variable costs of care are used to calculate the value of phenytoin compared with fosphenytoin in the emergency department, phenytoin is preferred. The decision to administer phenytoin was very robust and changed only when both the frequency and cost of PGS was high.

  4. MUSiC - Model-independent search for deviations from Standard Model predictions in CMS

    NASA Astrophysics Data System (ADS)

    Pieta, Holger

    2010-02-01

    We present an approach for a model-independent search in CMS. Systematically scanning the data for deviations from the standard model Monte Carlo expectations, such an analysis can help to understand the detector and tune event generators. By minimizing the theoretical bias, the analysis is furthermore sensitive to a wide range of models for new physics, including the uncounted number of models not yet thought of. After sorting the events into classes defined by their particle content (leptons, photons, jets and missing transverse energy), a minimally prejudiced scan is performed on a number of distributions. Advanced statistical methods are used to determine the significance of the deviating regions, rigorously taking systematic uncertainties into account. A number of benchmark scenarios, including common models of new physics and possible detector effects, have been used to gauge the power of such a method.

  5. Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test

    NASA Astrophysics Data System (ADS)

    Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.

    We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f β power-law noise affected by photon-counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain detection level, e.g., one set by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena probably hidden in high-energy transients.

  6. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
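
    To make the baseline concrete, the sketch below implements the deterministic preconditioned Richardson iteration that the hybrid schemes start from, using a Jacobi-type splitting; the Monte Carlo correction step of the paper is only indicated by a comment, since its estimator details are not given here.

      import numpy as np

      def preconditioned_richardson(a, b, n_iter=200):
          # x_{k+1} = x_k + M^{-1} (b - A x_k) with Jacobi preconditioner M = diag(A).
          m_inv = 1.0 / np.diag(a)
          x = np.zeros_like(b)
          for _ in range(n_iter):
              residual = b - a @ x
              x = x + m_inv * residual
              # A hybrid scheme would add a Monte Carlo estimate of the remaining
              # error here (e.g. via random walks on the iteration matrix).
          return x

      # Small diagonally dominant test system (illustrative only).
      rng = np.random.default_rng(5)
      a = rng.random((50, 50)); a += 50 * np.eye(50)
      b = rng.random(50)
      x = preconditioned_richardson(a, b)
      print("residual norm:", np.linalg.norm(b - a @ x))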

  7. RPC PET: Status and perspectives

    NASA Astrophysics Data System (ADS)

    Couceiro, M.; Blanco, A.; Ferreira, Nuno C.; Ferreira Marques, R.; Fonte, P.; Lopes, L.

    2007-10-01

    The status of the resistive plate chamber (RPC)-PET technology for small animals is briefly reviewed and its sensitivity performance for human PET studied through Monte-Carlo simulations. The cost-effectiveness of these detectors and their very good timing characteristics open the possibility to build affordable Time of Flight (TOF)-PET systems with very large fields of view. Simulations suggest that the sensitivity of such systems for human whole-body screening, under reasonable assumptions, may exceed the present crystal-based PET technology by a factor up to 20.

  8. Parametric Covariance Model for Horizon-Based Optical Navigation

    NASA Technical Reports Server (NTRS)

    Hikes, Jacob; Liounis, Andrew J.; Christian, John A.

    2016-01-01

    This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.

  9. Probing color coherence effects in pp collisions at √s = 7 TeV

    DOE PAGES

    Chatrchyan, Serguei

    2014-06-11

    A study of color coherence effects in pp collisions at a center-of-mass energy of 7 TeV is presented. The data used in the analysis were collected in 2010 with the CMS detector at the LHC and correspond to an integrated luminosity of 36 inverse picobarns. Events are selected that contain at least three jets and where the two jets with the largest transverse momentum exhibit a back-to-back topology. The measured angular correlation between the second- and third-leading jet is shown to be sensitive to color coherence effects, and is compared to the predictions of Monte Carlo models with various implementations of color coherence. None of the models describe the data satisfactorily.

  10. Probing color coherence effects in pp collisions at √s = 7 TeV.

    PubMed

    Chatrchyan, S; Khachatryan, V; Sirunyan, A M; Tumasyan, A; Adam, W; Bergauer, T; Dragicevic, M; Erö, J; Fabjan, C; Friedl, M; Frühwirth, R; Ghete, V M; Hörmann, N; Hrubec, J; Jeitler, M; Kiesenhofer, W; Knünz, V; Krammer, M; Krätschmer, I; Liko, D; Mikulec, I; Rabady, D; Rahbaran, B; Rohringer, C; Rohringer, H; Schöfbeck, R; Strauss, J; Taurok, A; Treberer-Treberspurg, W; Waltenberger, W; Wulz, C-E; Mossolov, V; Shumeiko, N; Gonzalez, J Suarez; Alderweireldt, S; Bansal, M; Bansal, S; Cornelis, T; De Wolf, E A; Janssen, X; Knutsson, A; Luyckx, S; Mucibello, L; Ochesanu, S; Roland, B; Rougny, R; Staykova, Z; Van Haevermaet, H; Van Mechelen, P; Van Remortel, N; Van Spilbeeck, A; Blekman, F; Blyweert, S; D'Hondt, J; Kalogeropoulos, A; Keaveney, J; Lowette, S; Maes, M; Olbrechts, A; Tavernier, S; Van Doninck, W; Van Mulders, P; Van Onsem, G P; Villella, I; Caillol, C; Clerbaux, B; De Lentdecker, G; Favart, L; Gay, A P R; Hreus, T; Léonard, A; Marage, P E; Mohammadi, A; Perniè, L; Reis, T; Seva, T; Thomas, L; Vander Velde, C; Vanlaer, P; Wang, J; Adler, V; Beernaert, K; Benucci, L; Cimmino, A; Costantini, S; Dildick, S; Garcia, G; Klein, B; Lellouch, J; Marinov, A; Mccartin, J; Rios, A A Ocampo; Ryckbosch, D; Sigamani, M; Strobbe, N; Thyssen, F; Tytgat, M; Walsh, S; Yazgan, E; Zaganidis, N; Basegmez, S; Beluffi, C; Bruno, G; Castello, R; Caudron, A; Ceard, L; Da Silveira, G G; Delaere, C; du Pree, T; Favart, D; Forthomme, L; Giammanco, A; Hollar, J; Jez, P; Lemaitre, V; Liao, J; Militaru, O; Nuttens, C; Pagano, D; Pin, A; Piotrzkowski, K; Popov, A; Selvaggi, M; Vidal Marono, M; Garcia, J M Vizan; Beliy, N; Caebergs, T; Daubie, E; Hammad, G H; Alves, G A; Correa Martins Junior, M; Martins, T; Pol, M E; Souza, M H G; Aldá Júnior, W L; Carvalho, W; Chinellato, J; Custódio, A; Da Costa, E M; De Jesus Damiao, D; De Oliveira Martins, C; De Souza, S Fonseca; Malbouisson, H; Malek, M; Figueiredo, D Matos; Mundim, L; Nogima, H; Da Silva, W L Prado; Santoro, A; Sznajder, A; Manganote, E J Tonelli; Pereira, A Vilela; Dias, F A; Tomei, T R Fernandez Perez; Lagana, C; Novaes, S F; Padula, Sandra S; Bernardes, C A; Gregores, E M; Mercadante, P G; Genchev, V; Iaydjiev, P; Piperov, S; Rodozov, M; Sultanov, G; Vutova, M; Dimitrov, A; Hadjiiska, R; Kozhuharov, V; Litov, L; Pavlov, B; Petkov, P; Bian, J G; Chen, G M; Chen, H S; Jiang, C H; Liang, D; Liang, S; Meng, X; Tao, J; Wang, X; Wang, Z; Asawatangtrakuldee, C; Ban, Y; Guo, Y; Li, Q; Li, W; Liu, S; Mao, Y; Qian, S J; Wang, D; Zhang, L; Zou, W; Avila, C; Montoya, C A Carrillo; Sierra, L F Chaparro; Gomez, J P; Moreno, B Gomez; Sanabria, J C; Godinovic, N; Lelas, D; Plestina, R; Polic, D; Puljak, I; Antunovic, Z; Kovac, M; Brigljevic, V; Kadija, K; Luetic, J; Mekterovic, D; Morovic, S; Tikvica, L; Attikis, A; Mavromanolakis, G; Mousa, J; Nicolaou, C; Ptochos, F; Razis, P A; Finger, M; Finger, M; Abdelalim, A A; Assran, Y; Elgammal, S; Kamel, A Ellithi; Mahmoud, M A; Radi, A; Kadastik, M; Müntel, M; Murumaa, M; Raidal, M; Rebane, L; Tiko, A; Eerola, P; Fedi, G; Voutilainen, M; Härkönen, J; Karimäki, V; Kinnunen, R; Kortelainen, M J; Lampén, T; Lassila-Perini, K; Lehti, S; Lindén, T; Luukka, P; Mäenpää, T; Peltola, T; Tuominen, E; Tuominiemi, J; Tuovinen, E; Wendland, L; Tuuva, T; Besancon, M; Couderc, F; Dejardin, M; Denegri, D; Fabbro, B; Faure, J L; Ferri, F; Ganjour, S; Givernaud, A; Gras, P; de Monchenault, G Hamel; Jarry, P; Locci, E; Malcles, J; Millischer, L; Nayak, A; Rander, J; Rosowsky, A; Titov, M; Baffioni, S; Beaudette, F; Benhabib, L; Bluj, M; 
Busson, P; Charlot, C; Daci, N; Dahms, T; Dalchenko, M; Dobrzynski, L; Florent, A; de Cassagnac, R Granier; Haguenauer, M; Miné, P; Mironov, C; Naranjo, I N; Nguyen, M; Ochando, C; Paganini, P; Sabes, D; Salerno, R; Sirois, Y; Veelken, C; Zabi, A; Agram, J-L; Andrea, J; Bloch, D; Brom, J-M; Chabert, E C; Collard, C; Conte, E; Drouhin, F; Fontaine, J-C; Gelé, D; Goerlach, U; Goetzmann, C; Juillot, P; Le Bihan, A-C; Van Hove, P; Gadrat, S; Beauceron, S; Beaupere, N; Boudoul, G; Brochet, S; Chasserat, J; Chierici, R; Contardo, D; Depasse, P; El Mamouni, H; Fan, J; Fay, J; Gascon, S; Gouzevitch, M; Ille, B; Kurca, T; Lethuillier, M; Mirabito, L; Perries, S; Sgandurra, L; Sordini, V; Vander Donckt, M; Verdier, P; Viret, S; Xiao, H; Tsamalaidze, Z; Autermann, C; Beranek, S; Bontenackels, M; Calpas, B; Edelhoff, M; Feld, L; Heracleous, N; Hindrichs, O; Klein, K; Ostapchuk, A; Perieanu, A; Raupach, F; Sammet, J; Schael, S; Sprenger, D; Weber, H; Wittmer, B; Zhukov, V; Ata, M; Caudron, J; Dietz-Laursonn, E; Duchardt, D; Erdmann, M; Fischer, R; Güth, A; Hebbeker, T; Heidemann, C; Hoepfner, K; Klingebiel, D; Knutzen, S; Kreuzer, P; Merschmeyer, M; Meyer, A; Olschewski, M; Padeken, K; Papacz, P; Pieta, H; Reithler, H; Schmitz, S A; Sonnenschein, L; Steggemann, J; Teyssier, D; Thüer, S; Weber, M; Cherepanov, V; Erdogan, Y; Flügge, G; Geenen, H; Geisler, M; Haj Ahmad, W; Hoehle, F; Kargoll, B; Kress, T; Kuessel, Y; Lingemann, J; Nowack, A; Nugent, I M; Perchalla, L; Pooth, O; Stahl, A; Asin, I; Bartosik, N; Behr, J; Behrenhoff, W; Behrens, U; Bell, A J; Bergholz, M; Bethani, A; Borras, K; Burgmeier, A; Cakir, A; Calligaris, L; Campbell, A; Choudhury, S; Costanza, F; Diez Pardos, C; Dooling, S; Dorland, T; Eckerlin, G; Eckstein, D; Flucke, G; Geiser, A; Glushkov, I; Grebenyuk, A; Gunnellini, P; Habib, S; Hauk, J; Hellwig, G; Horton, D; Jung, H; Kasemann, M; Katsas, P; Kleinwort, C; Kluge, H; Krämer, M; Krücker, D; Kuznetsova, E; Lange, W; Leonard, J; Lipka, K; Lohmann, W; Lutz, B; Mankel, R; Marfin, I; Melzer-Pellmann, I-A; Meyer, A B; Mnich, J; Mussgiller, A; Naumann-Emme, S; Novgorodova, O; Nowak, F; Olzem, J; Perrey, H; Petrukhin, A; Pitzl, D; Placakyte, R; Raspereza, A; Cipriano, P M Ribeiro; Riedl, C; Ron, E; Sahin, M Ö; Salfeld-Nebgen, J; Schmidt, R; Schoerner-Sadenius, T; Sen, N; Stein, M; Walsh, R; Wissing, C; Martin, M Aldaya; Blobel, V; Enderle, H; Erfle, J; Garutti, E; Gebbert, U; Görner, M; Gosselink, M; Haller, J; Heine, K; Höing, R S; Kaussen, G; Kirschenmann, H; Klanner, R; Kogler, R; Lange, J; Marchesini, I; Peiffer, T; Pietsch, N; Rathjens, D; Sander, C; Schettler, H; Schleper, P; Schlieckau, E; Schmidt, A; Schröder, M; Schum, T; Seidel, M; Sibille, J; Sola, V; Stadie, H; Steinbrück, G; Thomsen, J; Troendle, D; Usai, E; Vanelderen, L; Barth, C; Baus, C; Berger, J; Böser, C; Butz, E; Chwalek, T; De Boer, W; Descroix, A; Dierlamm, A; Feindt, M; Guthoff, M; Hartmann, F; Hauth, T; Held, H; Hoffmann, K H; Husemann, U; Katkov, I; Komaragiri, J R; Kornmayer, A; Lobelle Pardo, P; Martschei, D; Mozer, M U; Müller, Th; Niegel, M; Nürnberg, A; Oberst, O; Ott, J; Quast, G; Rabbertz, K; Ratnikov, F; Röcker, S; Schilling, F-P; Schott, G; Simonis, H J; Stober, F M; Ulrich, R; Wagner-Kuhr, J; Wayand, S; Weiler, T; Zeise, M; Anagnostou, G; Daskalakis, G; Geralis, T; Kesisoglou, S; Kyriakis, A; Loukas, D; Markou, A; Markou, C; Ntomari, E; Topsis-Giotis, I; Gouskos, L; Panagiotou, A; Saoulidou, N; Stiliaris, E; Aslanoglou, X; Evangelou, I; Flouris, G; Foudas, C; Kokkas, P; Manthos, N; Papadopoulos, I; 
Paradas, E; Bencze, G; Hajdu, C; Hidas, P; Horvath, D; Sikler, F; Veszpremi, V; Vesztergombi, G; Zsigmond, A J; Beni, N; Czellar, S; Molnar, J; Palinkas, J; Szillasi, Z; Karancsi, J; Raics, P; Trocsanyi, Z L; Ujvari, B; Swain, S K; Beri, S B; Bhatnagar, V; Dhingra, N; Gupta, R; Kaur, M; Mehta, M Z; Mittal, M; Nishu, N; Sharma, A; Singh, J B; Kumar, Ashok; Kumar, Arun; Ahuja, S; Bhardwaj, A; Choudhary, B C; Kumar, A; Malhotra, S; Naimuddin, M; Ranjan, K; Saxena, P; Sharma, V; Shivpuri, R K; Banerjee, S; Bhattacharya, S; Chatterjee, K; Dutta, S; Gomber, B; Jain, Sa; Jain, Sh; Khurana, R; Modak, A; Mukherjee, S; Roy, D; Sarkar, S; Sharan, M; Singh, A P; Abdulsalam, A; Dutta, D; Kailas, S; Kumar, V; Mohanty, A K; Pant, L M; Shukla, P; Topkar, A; Aziz, T; Chatterjee, R M; Ganguly, S; Ghosh, S; Guchait, M; Gurtu, A; Kole, G; Kumar, S; Maity, M; Majumder, G; Mazumdar, K; Mohanty, G B; Parida, B; Sudhakar, K; Wickramage, N; Dugad, S; Arfaei, H; Bakhshiansohi, H; Etesami, S M; Fahim, A; Jafari, A; Khakzad, M; Najafabadi, M Mohammadi; Mehdiabadi, S Paktinat; Safarzadeh, B; Zeinali, M; Grunewald, M; Abbrescia, M; Barbone, L; Calabria, C; Chhibra, S S; Colaleo, A; Creanza, D; De Filippis, N; De Palma, M; Fiore, L; Iaselli, G; Maggi, G; Maggi, M; Marangelli, B; My, S; Nuzzo, S; Pacifico, N; Pompili, A; Pugliese, G; Selvaggi, G; Silvestris, L; Singh, G; Venditti, R; Verwilligen, P; Zito, G; Abbiendi, G; Benvenuti, A C; Bonacorsi, D; Braibant-Giacomelli, S; Brigliadori, L; Campanini, R; Capiluppi, P; Castro, A; Cavallo, F R; Codispoti, G; Cuffiani, M; Dallavalle, G M; Fabbri, F; Fanfani, A; Fasanella, D; Giacomelli, P; Grandi, C; Guiducci, L; Marcellini, S; Masetti, G; Meneghelli, M; Montanari, A; Navarria, F L; Odorici, F; Perrotta, A; Primavera, F; Rossi, A M; Rovelli, T; Siroli, G P; Tosi, N; Travaglini, R; Albergo, S; Cappello, G; Chiorboli, M; Costa, S; Giordano, F; Potenza, R; Tricomi, A; Tuve, C; Barbagli, G; Ciulli, V; Civinini, C; D'Alessandro, R; Focardi, E; Frosali, S; Gallo, E; Gonzi, S; Gori, V; Lenzi, P; Meschini, M; Paoletti, S; Sguazzoni, G; Tropiano, A; Benussi, L; Bianco, S; Fabbri, F; Piccolo, D; Fabbricatore, P; Ferretti, R; Ferro, F; Vetere, M Lo; Musenich, R; Robutti, E; Tosi, S; Benaglia, A; Dinardo, M E; Fiorendi, S; Gennai, S; Ghezzi, A; Govoni, P; Lucchini, M T; Malvezzi, S; Manzoni, R A; Martelli, A; Menasce, D; Moroni, L; Paganoni, M; Pedrini, D; Ragazzi, S; Redaelli, N; de Fatis, T Tabarelli; Buontempo, S; Cavallo, N; De Cosa, A; Fabozzi, F; Iorio, A O M; Lista, L; Meola, S; Merola, M; Paolucci, P; Azzi, P; Bacchetta, N; Bellato, M; Bisello, D; Branca, A; Carlin, R; Checchia, P; Dorigo, T; Dosselli, U; Galanti, M; Gasparini, F; Gasparini, U; Giubilato, P; Gozzelino, A; Kanishchev, K; Lacaprara, S; Lazzizzera, I; Margoni, M; Meneguzzo, A T; Pazzini, J; Pozzobon, N; Ronchese, P; Sgaravatto, M; Simonetto, F; Torassa, E; Tosi, M; Triossi, A; Zotto, P; Zucchetta, A; Zumerle, G; Gabusi, M; Ratti, S P; Riccardi, C; Vitulo, P; Biasini, M; Bilei, G M; Fanò, L; Lariccia, P; Mantovani, G; Menichelli, M; Nappi, A; Romeo, F; Saha, A; Santocchia, A; Spiezia, A; Androsov, K; Azzurri, P; Bagliesi, G; Bernardini, J; Boccali, T; Broccolo, G; Castaldi, R; Ciocci, M A; D'Agnolo, R T; Dell'Orso, R; Fiori, F; Foà, L; Giassi, A; Grippo, M T; Kraan, A; Ligabue, F; Lomtadze, T; Martini, L; Messineo, A; Moon, C S; Palla, F; Rizzi, A; Savoy-Navarro, A; Serban, A T; Spagnolo, P; Squillacioti, P; Tenchini, R; Tonelli, G; Venturi, A; Verdini, P G; Vernieri, C; Barone, L; Cavallari, F; Del Re, D; Diemoz, 
M; Grassi, M; Longo, E; Margaroli, F; Meridiani, P; Micheli, F; Nourbakhsh, S; Organtini, G; Paramatti, R; Rahatlou, S; Rovelli, C; Soffi, L; Amapane, N; Arcidiacono, R; Argiro, S; Arneodo, M; Bellan, R; Biino, C; Cartiglia, N; Casasso, S; Costa, M; Degano, A; Demaria, N; Mariotti, C; Maselli, S; Migliore, E; Monaco, V; Musich, M; Obertino, M M; Pastrone, N; Pelliccioni, M; Potenza, A; Romero, A; Ruspa, M; Sacchi, R; Solano, A; Staiano, A; Tamponi, U; Belforte, S; Candelise, V; Casarsa, M; Cossutti, F; Ricca, G Della; Gobbo, B; La Licata, C; Marone, M; Montanino, D; Penzo, A; Schizzi, A; Zanetti, A; Chang, S; Kim, T Y; Nam, S K; Kim, D H; Kim, G N; Kim, J E; Kong, D J; Lee, S; Oh, Y D; Park, H; Son, D C; Kim, J Y; Kim, Zero J; Song, S; Choi, S; Gyun, D; Hong, B; Jo, M; Kim, H; Kim, T J; Lee, K S; Park, S K; Roh, Y; Choi, M; Kim, J H; Park, C; Park, I C; Park, S; Ryu, G; Choi, Y; Choi, Y K; Goh, J; Kim, M S; Kwon, E; Lee, B; Lee, J; Seo, H; Yu, I; Grigelionis, I; Juodagalvis, A; Castilla-Valdez, H; De La Cruz-Burelo, E; Heredia-de La Cruz, I; Lopez-Fernandez, R; Martínez-Ortega, J; Sanchez-Hernandez, A; Villasenor-Cendejas, L M; Moreno, S Carrillo; Valencia, F Vazquez; Ibarguen, H A Salazar; Linares, E Casimiro; Pineda, A Morelos; Reyes-Santos, M A; Krofcheck, D; Butler, P H; Doesburg, R; Reucroft, S; Silverwood, H; Ahmad, M; Asghar, M I; Butt, J; Hoorani, H R; Khalid, S; Khan, W A; Khurshid, T; Qazi, S; Shah, M A; Shoaib, M; Bialkowska, H; Boimska, B; Frueboes, T; Górski, M; Kazana, M; Nawrocki, K; Romanowska-Rybinska, K; Szleper, M; Wrochna, G; Zalewski, P; Brona, G; Bunkowski, K; Cwiok, M; Dominik, W; Doroba, K; Kalinowski, A; Konecki, M; Krolikowski, J; Misiura, M; Wolszczak, W; Almeida, N; Bargassa, P; Da Cruz E Silva, C Beirão; Faccioli, P; Parracho, P G Ferreira; Gallinaro, M; Nguyen, F; Antunes, J Rodrigues; Seixas, J; Varela, J; Vischia, P; Afanasiev, S; Bunin, P; Gavrilenko, M; Golutvin, I; Gorbunov, I; Kamenev, A; Karjavin, V; Konoplyanikov, V; Lanev, A; Malakhov, A; Matveev, V; Moisenz, P; Palichik, V; Perelygin, V; Shmatov, S; Skatchkov, N; Smirnov, V; Zarubin, A; Evstyukhin, S; Golovtsov, V; Ivanov, Y; Kim, V; Levchenko, P; Murzin, V; Oreshkin, V; Smirnov, I; Sulimov, V; Uvarov, L; Vavilov, S; Vorobyev, A; Vorobyev, An; Andreev, Yu; Dermenev, A; Gninenko, S; Golubev, N; Kirsanov, M; Krasnikov, N; Pashenkov, A; Tlisov, D; Toropin, A; Epshteyn, V; Erofeeva, M; Gavrilov, V; Lychkovskaya, N; Popov, V; Safronov, G; Semenov, S; Spiridonov, A; Stolin, V; Vlasov, E; Zhokin, A; Andreev, V; Azarkin, M; Dremin, I; Kirakosyan, M; Leonidov, A; Mesyats, G; Rusakov, S V; Vinogradov, A; Boos, E; Dubinin, M; Dudko, L; Ershov, A; Gribushin, A; Klyukhin, V; Kodolova, O; Lokhtin, I; Markina, A; Obraztsov, S; Petrushanko, S; Savrin, V; Snigirev, A; Azhgirey, I; Bayshev, I; Bitioukov, S; Kachanov, V; Kalinin, A; Konstantinov, D; Krychkine, V; Petrov, V; Ryutin, R; Sobol, A; Tourtchanovitch, L; Troshin, S; Tyurin, N; Uzunian, A; Volkov, A; Adzic, P; Djordjevic, M; Ekmedzic, M; Krpic, D; Milosevic, J; Aguilar-Benitez, M; Maestre, J Alcaraz; Battilana, C; Calvo, E; Cerrada, M; Llatas, M Chamizo; Colino, N; De La Cruz, B; Peris, A Delgado; Vázquez, D Domínguez; Bedoya, C Fernandez; Ramos, J P Fernández; Ferrando, A; Flix, J; Fouz, M C; Garcia-Abia, P; Lopez, O Gonzalez; Lopez, S Goy; Hernandez, J M; Josa, M I; Merino, G; De Martino, E Navarro; Pelayo, J Puerta; Olmeda, A Quintario; Redondo, I; Romero, L; Santaolalla, J; Soares, M S; Willmott, C; Albajar, C; de Trocóniz, J F; Brun, H; Cuevas, J; 
Menendez, J Fernandez; Folgueras, S; Caballero, I Gonzalez; Iglesias, L Lloret; Gomez, J Piedra; Cifuentes, J A Brochero; Cabrillo, I J; Calderon, A; Chuang, S H; Campderros, J Duarte; Fernandez, M; Gomez, G; Sanchez, J Gonzalez; Graziano, A; Jorda, C; Virto, A Lopez; Marco, J; Marco, R; Rivero, C Martinez; Matorras, F; Sanchez, F J Munoz; Rodrigo, T; Rodríguez-Marrero, A Y; Ruiz-Jimeno, A; Scodellaro, L; Vila, I; Cortabitarte, R Vilar; Abbaneo, D; Auffray, E; Auzinger, G; Bachtis, M; Baillon, P; Ball, A H; Barney, D; Bendavid, J; Benitez, J F; Bernet, C; Bianchi, G; Bloch, P; Bocci, A; Bonato, A; Bondu, O; Botta, C; Breuker, H; Camporesi, T; Cerminara, G; Christiansen, T; Perez, J A Coarasa; Colafranceschi, S; D'Alfonso, M; d'Enterria, D; Dabrowski, A; David, A; Guio, F De; De Roeck, A; De Visscher, S; Di Guida, S; Dobson, M; Dupont-Sagorin, N; Elliott-Peisert, A; Eugster, J; Franzoni, G; Funk, W; Georgiou, G; Giffels, M; Gigi, D; Gill, K; Giordano, D; Girone, M; Giunta, M; Glege, F; Garrido, R Gomez-Reino; Gowdy, S; Guida, R; Hammer, J; Hansen, M; Harris, P; Hartl, C; Hinzmann, A; Innocente, V; Janot, P; Karavakis, E; Kousouris, K; Krajczar, K; Lecoq, P; Lee, Y-J; Lourenço, C; Magini, N; Malgeri, L; Mannelli, M; Masetti, L; Meijers, F; Mersi, S; Meschi, E; Moser, R; Mulders, M; Musella, P; Nesvold, E; Orsini, L; Cortezon, E Palencia; Perez, E; Perrozzi, L; Petrilli, A; Pfeiffer, A; Pierini, M; Pimiä, M; Piparo, D; Plagge, M; Quertenmont, L; Racz, A; Reece, W; Rolandi, G; Rovere, M; Sakulin, H; Santanastasio, F; Schäfer, C; Schwick, C; Sekmen, S; Siegrist, P; Silva, P; Simon, M; Sphicas, P; Spiga, D; Stieger, B; Stoye, M; Tsirou, A; Veres, G I; Vlimant, J R; Wöhri, H K; Worm, S D; Zeuner, W D; Bertl, W; Deiters, K; Erdmann, W; Gabathuler, K; Horisberger, R; Ingram, Q; Kaestli, H C; König, S; Kotlinski, D; Langenegger, U; Renker, D; Rohe, T; Bachmair, F; Bäni, L; Bianchini, L; Bortignon, P; Buchmann, M A; Casal, B; Chanon, N; Deisher, A; Dissertori, G; Dittmar, M; Donegà, M; Dünser, M; Eller, P; Freudenreich, K; Grab, C; Hits, D; Lecomte, P; Lustermann, W; Mangano, B; Marini, A C; Del Arbol, P Martinez Ruiz; Meister, D; Mohr, N; Moortgat, F; Nägeli, C; Nef, P; Nessi-Tedaldi, F; Pandolfi, F; Pape, L; Pauss, F; Peruzzi, M; Quittnat, M; Ronga, F J; Rossini, M; Sala, L; Sanchez, A K; Starodumov, A; Takahashi, M; Tauscher, L; Thea, A; Theofilatos, K; Treille, D; Urscheler, C; Wallny, R; Weber, H A; Amsler, C; Chiochia, V; Favaro, C; Rikova, M Ivova; Kilminster, B; Mejias, B Millan; Otiougova, P; Robmann, P; Snoek, H; Taroni, S; Verzetti, M; Yang, Y; Cardaci, M; Chen, K H; Ferro, C; Kuo, C M; Li, S W; Lin, W; Lu, Y J; Volpe, R; Yu, S S; Bartalini, P; Chang, P; Chang, Y H; Chang, Y W; Chao, Y; Chen, K F; Dietz, C; Grundler, U; Hou, W-S; Hsiung, Y; Kao, K Y; Lei, Y J; Lu, R-S; Majumder, D; Petrakou, E; Shi, X; Shiu, J G; Tzeng, Y M; Wang, M; Asavapibhop, B; Suwonjandee, N; Adiguzel, A; Bakirci, M N; Cerci, S; Dozen, C; Dumanoglu, I; Eskut, E; Girgis, S; Gokbulut, G; Gurpinar, E; Hos, I; Kangal, E E; Topaksu, A Kayis; Onengut, G; Ozdemir, K; Ozturk, S; Polatoz, A; Sogut, K; Cerci, D Sunar; Tali, B; Topakli, H; Vergili, M; Akin, I V; Aliev, T; Bilin, B; Bilmis, S; Deniz, M; Gamsizkan, H; Guler, A M; Karapinar, G; Ocalan, K; Ozpineci, A; Serin, M; Sever, R; Surat, U E; Yalvac, M; Zeyrek, M; Gülmez, E; Isildak, B; Kaya, M; Kaya, O; Ozkorucuklu, S; Sonmez, N; Bahtiyar, H; Barlas, E; Cankocak, K; Günaydin, Y O; Vardarlı, F I; Yücel, M; Levchuk, L; Sorokin, P; Brooke, J J; Clement, E; Cussans, D; 
Flacher, H; Frazier, R; Goldstein, J; Grimes, M; Heath, G P; Heath, H F; Kreczko, L; Lucas, C; Meng, Z; Metson, S; Newbold, D M; Nirunpong, K; Paramesvaran, S; Poll, A; Senkin, S; Smith, V J; Williams, T; Bell, K W; Belyaev, A; Brew, C; Brown, R M; Cockerill, D J A; Coughlan, J A; Harder, K; Harper, S; Ilic, J; Olaiya, E; Petyt, D; Radburn-Smith, B C; Shepherd-Themistocleous, C H; Tomalin, I R; Womersley, W J; Bainbridge, R; Buchmuller, O; Burton, D; Colling, D; Cripps, N; Cutajar, M; Dauncey, P; Davies, G; Negra, M Della; Ferguson, W; Fulcher, J; Futyan, D; Gilbert, A; Bryer, A Guneratne; Hall, G; Hatherell, Z; Hays, J; Iles, G; Jarvis, M; Karapostoli, G; Kenzie, M; Lane, R; Lucas, R; Lyons, L; Magnan, A-M; Marrouche, J; Mathias, B; Nandi, R; Nash, J; Nikitenko, A; Pela, J; Pesaresi, M; Petridis, K; Pioppi, M; Raymond, D M; Rogerson, S; Rose, A; Seez, C; Sharp, P; Sparrow, A; Tapper, A; Acosta, M Vazquez; Virdee, T; Wakefield, S; Wardle, N; Chadwick, M; Cole, J E; Hobson, P R; Khan, A; Kyberd, P; Leggat, D; Leslie, D; Martin, W; Reid, I D; Symonds, P; Teodorescu, L; Turner, M; Dittmann, J; Hatakeyama, K; Kasmi, A; Liu, H; Scarborough, T; Charaf, O; Cooper, S I; Henderson, C; Rumerio, P; Avetisyan, A; Bose, T; Fantasia, C; Heister, A; Lawson, P; Lazic, D; Rohlf, J; Sperka, D; St John, J; Sulak, L; Alimena, J; Christopher, G; Cutts, D; Demiragli, Z; Ferapontov, A; Garabedian, A; Heintz, U; Jabeen, S; Kukartsev, G; Laird, E; Landsberg, G; Luk, M; Narain, M; Segala, M; Sinthuprasith, T; Speer, T; Breedon, R; Breto, G; De La Barca Sanchez, M Calderon; Chauhan, S; Chertok, M; Conway, J; Conway, R; Cox, P T; Erbacher, R; Gardner, M; Houtz, R; Ko, W; Kopecky, A; Lander, R; Miceli, T; Pellett, D; Pilot, J; Ricci-Tam, F; Rutherford, B; Searle, M; Shalhout, S; Smith, J; Squires, M; Tripathi, M; Wilbur, S; Yohay, R; Andreev, V; Cline, D; Cousins, R; Erhan, S; Everaerts, P; Farrell, C; Felcini, M; Hauser, J; Ignatenko, M; Jarvis, C; Rakness, G; Schlein, P; Takasugi, E; Traczyk, P; Valuev, V; Babb, J; Clare, R; Ellison, J; Gary, J W; Hanson, G; Heilman, J; Jandir, P; Liu, H; Long, O R; Luthra, A; Malberti, M; Nguyen, H; Shrinivas, A; Sturdy, J; Sumowidagdo, S; Wilken, R; Wimpenny, S; Andrews, W; Branson, J G; Cerati, G B; Cittolin, S; Evans, D; Holzner, A; Kelley, R; Lebourgeois, M; Letts, J; Macneill, I; Padhi, S; Palmer, C; Petrucciani, G; Pieri, M; Sani, M; Simon, S; Sudano, E; Tadel, M; Tu, Y; Vartak, A; Wasserbaech, S; Würthwein, F; Yagil, A; Yoo, J; Barge, D; Campagnari, C; Danielson, T; Flowers, K; Geffert, P; George, C; Golf, F; Incandela, J; Justus, C; Kovalskyi, D; Krutelyov, V; Villalba, R Magaña; Mccoll, N; Pavlunin, V; Richman, J; Rossin, R; Stuart, D; To, W; West, C; Apresyan, A; Bornheim, A; Bunn, J; Chen, Y; Di Marco, E; Duarte, J; Kcira, D; Ma, Y; Mott, A; Newman, H B; Pena, C; Rogan, C; Spiropulu, M; Timciuc, V; Veverka, J; Wilkinson, R; Xie, S; Zhu, R Y; Azzolini, V; Calamba, A; Carroll, R; Ferguson, T; Iiyama, Y; Jang, D W; Liu, Y F; Paulini, M; Russ, J; Vogel, H; Vorobiev, I; Cumalat, J P; Drell, B R; Ford, W T; Gaz, A; Lopez, E Luiggi; Nauenberg, U; Smith, J G; Stenson, K; Ulmer, K A; Wagner, S R; Alexander, J; Chatterjee, A; Eggert, N; Gibbons, L K; Hopkins, W; Khukhunaishvili, A; Kreis, B; Mirman, N; Kaufman, G Nicolas; Patterson, J R; Ryd, A; Salvati, E; Sun, W; Teo, W D; Thom, J; Thompson, J; Tucker, J; Weng, Y; Winstrom, L; Wittich, P; Winn, D; Abdullin, S; Albrow, M; Anderson, J; Apollinari, G; Bauerdick, L A T; Beretvas, A; Berryhill, J; Bhat, P C; Burkett, K; Butler, J N; 
Chetluru, V; Cheung, H W K; Chlebana, F; Cihangir, S; Elvira, V D; Fisk, I; Freeman, J; Gao, Y; Gottschalk, E; Gray, L; Green, D; Gutsche, O; Hare, D; Harris, R M; Hirschauer, J; Hooberman, B; Jindariani, S; Johnson, M; Joshi, U; Kaadze, K; Klima, B; Kunori, S; Kwan, S; Linacre, J; Lincoln, D; Lipton, R; Lykken, J; Maeshima, K; Marraffino, J M; Outschoorn, V I Martinez; Maruyama, S; Mason, D; McBride, P; Mishra, K; Mrenna, S; Musienko, Y; Newman-Holmes, C; O'Dell, V; Prokofyev, O; Ratnikova, N; Sexton-Kennedy, E; Sharma, S; Spalding, W J; Spiegel, L; Taylor, L; Tkaczyk, S; Tran, N V; Uplegger, L; Vaandering, E W; Vidal, R; Whitmore, J; Wu, W; Yang, F; Yun, J C; Acosta, D; Avery, P; Bourilkov, D; Chen, M; Cheng, T; Das, S; De Gruttola, M; Di Giovanni, G P; Dobur, D; Drozdetskiy, A; Field, R D; Fisher, M; Fu, Y; Furic, I K; Hugon, J; Kim, B; Konigsberg, J; Korytov, A; Kropivnitskaya, A; Kypreos, T; Low, J F; Matchev, K; Milenovic, P; Mitselmakher, G; Muniz, L; Remington, R; Rinkevicius, A; Skhirtladze, N; Snowball, M; Yelton, J; Zakaria, M; Gaultney, V; Hewamanage, S; Linn, S; Markowitz, P; Martinez, G; Rodriguez, J L; Adams, T; Askew, A; Bochenek, J; Chen, J; Diamond, B; Haas, J; Hagopian, S; Hagopian, V; Johnson, K F; Prosper, H; Veeraraghavan, V; Weinberg, M; Baarmand, M M; Dorney, B; Hohlmann, M; Kalakhety, H; Yumiceva, F; Adams, M R; Apanasevich, L; Bazterra, V E; Betts, R R; Bucinskaite, I; Callner, J; Cavanaugh, R; Evdokimov, O; Gauthier, L; Gerber, C E; Hofman, D J; Khalatyan, S; Kurt, P; Lacroix, F; Moon, D H; O'Brien, C; Silkworth, C; Strom, D; Turner, P; Varelas, N; Akgun, U; Albayrak, E A; Bilki, B; Clarida, W; Dilsiz, K; Duru, F; Griffiths, S; Merlo, J-P; Mermerkaya, H; Mestvirishvili, A; Moeller, A; Nachtman, J; Newsom, C R; Ogul, H; Onel, Y; Ozok, F; Sen, S; Tan, P; Tiras, E; Wetzel, J; Yetkin, T; Yi, K; Barnett, B A; Blumenfeld, B; Bolognesi, S; Giurgiu, G; Gritsan, A V; Hu, G; Maksimovic, P; Martin, C; Swartz, M; Whitbeck, A; Baringer, P; Bean, A; Benelli, G; Kenny, R P; Murray, M; Noonan, D; Sanders, S; Stringer, R; Wood, J S; Barfuss, A F; Chakaberia, I; Ivanov, A; Khalil, S; Makouski, M; Maravin, Y; Saini, L K; Shrestha, S; Svintradze, I; Gronberg, J; Lange, D; Rebassoo, F; Wright, D; Baden, A; Calvert, B; Eno, S C; Gomez, J A; Hadley, N J; Kellogg, R G; Kolberg, T; Lu, Y; Marionneau, M; Mignerey, A C; Pedro, K; Peterman, A; Skuja, A; Temple, J; Tonjes, M B; Tonwar, S C; Apyan, A; Bauer, G; Busza, W; Cali, I A; Chan, M; Di Matteo, L; Dutta, V; Gomez Ceballos, G; Goncharov, M; Gulhan, D; Kim, Y; Klute, M; Lai, Y S; Levin, A; Luckey, P D; Ma, T; Nahn, S; Paus, C; Ralph, D; Roland, C; Roland, G; Stephans, G S F; Stöckli, F; Sumorok, K; Velicanu, D; Wolf, R; Wyslouch, B; Yang, M; Yilmaz, Y; Yoon, A S; Zanetti, M; Zhukova, V; Dahmes, B; De Benedetti, A; Gude, A; Haupt, J; Kao, S C; Klapoetke, K; Kubota, Y; Mans, J; Pastika, N; Rusack, R; Sasseville, M; Singovsky, A; Tambe, N; Turkewitz, J; Acosta, J G; Cremaldi, L M; Kroeger, R; Oliveros, S; Perera, L; Rahmat, R; Sanders, D A; Summers, D; Avdeeva, E; Bloom, K; Bose, S; Claes, D R; Dominguez, A; Eads, M; Suarez, R Gonzalez; Keller, J; Kravchenko, I; Lazo-Flores, J; Malik, S; Meier, F; Snow, G R; Dolen, J; Godshalk, A; Iashvili, I; Jain, S; Kharchilava, A; Rappoccio, S; Wan, Z; Alverson, G; Barberis, E; Baumgartel, D; Chasco, M; Haley, J; Massironi, A; Nash, D; Orimoto, T; Trocino, D; Wood, D; Zhang, J; Anastassov, A; Hahn, K A; Kubik, A; Lusito, L; Mucia, N; Odell, N; Pollack, B; Pozdnyakov, A; Schmitt, M; Stoynev, S; Sung, K; 
Velasco, M; Won, S; Berry, D; Brinkerhoff, A; Chan, K M; Hildreth, M; Jessop, C; Karmgard, D J; Kolb, J; Lannon, K; Luo, W; Lynch, S; Marinelli, N; Morse, D M; Pearson, T; Planer, M; Ruchti, R; Slaunwhite, J; Valls, N; Wayne, M; Wolf, M; Antonelli, L; Bylsma, B; Durkin, L S; Hill, C; Hughes, R; Kotov, K; Ling, T Y; Puigh, D; Rodenburg, M; Smith, G; Vuosalo, C; Winer, B L; Wolfe, H; Berry, E; Elmer, P; Halyo, V; Hebda, P; Hegeman, J; Hunt, A; Jindal, P; Koay, S A; Lujan, P; Marlow, D; Medvedeva, T; Mooney, M; Olsen, J; Piroué, P; Quan, X; Raval, A; Saka, H; Stickland, D; Tully, C; Werner, J S; Zenz, S C; Zuranski, A; Brownson, E; Lopez, A; Mendez, H; Vargas, J E Ramirez; Alagoz, E; Benedetti, D; Bolla, G; Bortoletto, D; De Mattia, M; Everett, A; Hu, Z; Jones, M; Jung, K; Koybasi, O; Kress, M; Leonardo, N; Pegna, D Lopes; Maroussov, V; Merkel, P; Miller, D H; Neumeister, N; Shipsey, I; Silvers, D; Svyatkovskiy, A; Wang, F; Xie, W; Xu, L; Yoo, H D; Zablocki, J; Zheng, Y; Parashar, N; Adair, A; Akgun, B; Ecklund, K M; Geurts, F J M; Li, W; Michlin, B; Padley, B P; Redjimi, R; Roberts, J; Zabel, J; Betchart, B; Bodek, A; Covarelli, R; de Barbaro, P; Demina, R; Eshaq, Y; Ferbel, T; Garcia-Bellido, A; Goldenzweig, P; Han, J; Harel, A; Miner, D C; Petrillo, G; Vishnevskiy, D; Zielinski, M; Bhatti, A; Ciesielski, R; Demortier, L; Goulianos, K; Lungu, G; Malik, S; Mesropian, C; Arora, S; Barker, A; Chou, J P; Contreras-Campana, C; Contreras-Campana, E; Duggan, D; Ferencek, D; Gershtein, Y; Gray, R; Halkiadakis, E; Hidas, D; Lath, A; Panwalkar, S; Park, M; Patel, R; Rekovic, V; Robles, J; Salur, S; Schnetzer, S; Seitz, C; Somalwar, S; Stone, R; Thomas, S; Thomassen, P; Walker, M; Cerizza, G; Hollingsworth, M; Rose, K; Spanier, S; Yang, Z C; York, A; Bouhali, O; Eusebi, R; Flanagan, W; Gilmore, J; Kamon, T; Khotilovich, V; Montalvo, R; Osipenkov, I; Pakhotin, Y; Perloff, A; Roe, J; Safonov, A; Sakuma, T; Suarez, I; Tatarinov, A; Toback, D; Akchurin, N; Cowden, C; Damgov, J; Dragoiu, C; Dudero, P R; Kovitanggoon, K; Lee, S W; Libeiro, T; Volobouev, I; Appelt, E; Delannoy, A G; Greene, S; Gurrola, A; Johns, W; Maguire, C; Melo, A; Sharma, M; Sheldon, P; Snook, B; Tuo, S; Velkovska, J; Arenton, M W; Boutle, S; Cox, B; Francis, B; Goodell, J; Hirosky, R; Ledovskoy, A; Lin, C; Neu, C; Wood, J; Gollapinni, S; Harr, R; Karchin, P E; Kottachchi Kankanamge Don, C; Lamichhane, P; Sakharov, A; Belknap, D A; Borrello, L; Carlsmith, D; Cepeda, M; Dasu, S; Duric, S; Friis, E; Grothe, M; Hall-Wilton, R; Herndon, M; Hervé, A; Klabbers, P; Klukas, J; Lanaro, A; Loveless, R; Mohapatra, A; Ojalvo, I; Perry, T; Pierro, G A; Polese, G; Ross, I; Sarangi, T; Savin, A; Smith, W H; Swanson, J

    A study of color coherence effects in pp collisions at a center-of-mass energy of 7 TeV is presented. The data used in the analysis were collected in 2010 with the CMS detector at the LHC and correspond to an integrated luminosity of 36 pb⁻¹. Events are selected that contain at least three jets and where the two jets with the largest transverse momentum exhibit a back-to-back topology. The measured angular correlation between the second- and third-leading jet is shown to be sensitive to color coherence effects, and is compared to the predictions of Monte Carlo models with various implementations of color coherence. None of the models describe the data satisfactorily.

  11. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

    Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as a Monte Carlo analysis capability, is included to enable statistical performance evaluations.

  12. Monte Carlo-based interval transformation analysis for multi-criteria decision analysis of groundwater management strategies under uncertain naphthalene concentrations and health risks

    NASA Astrophysics Data System (ADS)

    Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong

    2016-08-01

    A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) reducing computational time through the introduction of a Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each planning duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management period.
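
    A minimal sketch of the interval-sampling idea behind such an analysis: each criterion value of each management alternative is given as an interval, Monte Carlo draws turn the intervals into crisp decision matrices, and a simple weighted score ranks the alternatives in each draw. The alternatives, intervals and weights below are invented for illustration and are not the study's data.

      import numpy as np

      rng = np.random.default_rng(11)

      # Rows: alternatives; columns: criteria given as (low, high) intervals,
      # already oriented so that larger is better (hypothetical numbers).
      lo = np.array([[0.2, 0.5, 0.4, 0.7],
                     [0.6, 0.3, 0.5, 0.4],
                     [0.4, 0.6, 0.3, 0.5]])
      hi = lo + 0.2
      weights = np.array([0.4, 0.3, 0.2, 0.1])

      wins = np.zeros(lo.shape[0])
      for _ in range(10_000):
          crisp = rng.uniform(lo, hi)    # one realization of the decision matrix
          scores = crisp @ weights       # weighted-sum aggregation
          wins[np.argmax(scores)] += 1

      print("fraction of draws each alternative ranks first:", wins / wins.sum())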

  13. Influence of Bulk Chemical Composition on Relative Sensitivity Factors for 55Mn/52Cr by SIMS: Implications for the 53Mn-53Cr Chronometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzel, J; Jacobsen, B; Hutcheon, I D

    2009-09-09

    The 53Mn-53Cr systematics of meteorite samples provide an important high resolution chronometer for early solar system events. Accurate determination of the initial abundance of 53Mn (τ1/2 = 3.7 Ma) by secondary ion mass spectrometry (SIMS) is dependent on properly correcting for differing ion yields between Mn and Cr by use of a relative sensitivity factor (RSF). Ideal standards for SIMS analysis should be compositionally and structurally similar to the sample of interest. However, previously published Mn-Cr studies rely on few standards (e.g., San Carlos olivine, NIST 610 glass) despite significant variations in chemical composition. We investigate a potential correlation between RSF and bulk chemical composition by determining RSFs for 55Mn/52Cr in 11 silicate glass and mineral standards (San Carlos olivine, Mainz glasses KL2-G, ML3B-G, StHs6/80-G, GOR128-G, BM90/21-G, and T1-G, NIST 610 glass, and three LLNL pyroxene-composition glasses). All standards were measured on the Cameca ims-3f ion microprobe at LLNL, and a subset were also measured on the Cameca ims-1270 ion microprobe at the Geological Survey of Japan. The standards cover a range of bulk chemical compositions with SiO2 contents of 40-71 wt.%, FeO contents of 0.05-20 wt.% and Mn/Cr ratios between 0.4 and 58. We obtained RSF values ranging from 0.83 to 1.15. The data obtained on the ims-1270 ion microprobe are within ~10% of the RSF values obtained on the ims-3f ion microprobe, and the RSF determined for San Carlos olivine (0.86) is in good agreement with previously published data. The typical approach to calculating an RSF from multiple standard measurements involves making a linear fit to measured 55Mn/52Cr versus true 55Mn/52Cr. This approach may be satisfactory for materials of similar composition, but fails when compositions vary significantly. This is best illustrated by the ~30% change in RSF we see between glasses with similar Mn/Cr ratios but variable Fe and Na content. We are developing an approach that uses multivariate analysis to evaluate the importance of different chemical components in controlling the RSF and predict the RSF of unknowns when standards of appropriate composition are not available. Our analysis suggests that Fe, Si, and Na are key compositional factors in these silicate standards. The RSF is positively correlated with Fe and Si and negatively correlated with Na. Work is currently underway to extend this analysis to a wider range of chemical compositions and to evaluate the variability of RSF on measurements obtained by NanoSIMS.
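
    The multivariate idea mentioned in the abstract can be illustrated with an ordinary least-squares fit of RSF against major-element contents; the compositions and RSF values below are fabricated for illustration and are not the measured standards.

      import numpy as np

      # Columns: Fe, Si, Na contents (wt.%) for several hypothetical standards.
      composition = np.array([[ 9.0, 23.0, 2.3],
                              [11.0, 24.5, 1.9],
                              [ 0.1, 33.0, 3.5],
                              [ 7.5, 26.0, 2.8],
                              [20.0, 19.0, 0.4],
                              [ 5.0, 28.0, 3.1]])
      rsf = np.array([1.05, 1.10, 0.84, 1.00, 1.15, 0.93])   # invented RSF values

      # Least-squares fit RSF ~ a0 + a1*Fe + a2*Si + a3*Na.
      design = np.column_stack([np.ones(len(rsf)), composition])
      coef, *_ = np.linalg.lstsq(design, rsf, rcond=None)
      print("intercept and Fe, Si, Na coefficients:", coef)

      unknown = np.array([1.0, 8.0, 25.0, 2.5])   # hypothetical unknown sample
      print("predicted RSF for unknown:", unknown @ coef)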

  14. Environmental radionuclides as contaminants of HPGe gamma-ray spectrometers: Monte Carlo simulations for Modane underground laboratory.

    PubMed

    Breier, R; Brudanin, V B; Loaiza, P; Piquemal, F; Povinec, P P; Rukhadze, E; Rukhadze, N; Štekl, I

    2018-05-21

    The main limitation in high-sensitivity HPGe gamma-ray spectrometry has been the detector background, even for detectors placed deep underground. Environmental radionuclides such as 40K and decay products in the 238U and 232Th chains have been identified as the most important radioactive contaminants of the construction parts of HPGe gamma-ray spectrometers. Monte Carlo simulations have shown that the massive inner and outer lead shields are the main contributors to the HPGe-detector background, followed by the aluminum cryostat, copper cold finger, detector holder and the lead ring with FET. The Monte Carlo simulated cosmic-ray background gamma-ray spectrum is about three orders of magnitude lower than the experimental spectrum measured in the Modane underground laboratory (4800 m w.e.), underlining the importance of using radiopure materials for the construction of ultra-low-level HPGe gamma-ray spectrometers.

  15. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) of the rare earth elements (REEs) in beneficiation rare earth waste from the gold processing: case study

    NASA Astrophysics Data System (ADS)

    Bieda, Bogusław; Grzesik, Katarzyna

    2017-11-01

    The study proposes a stochastic approach based on Monte Carlo (MC) simulation for the life cycle assessment (LCA) method, limited to a life cycle inventory (LCI) study, for rare earth element (REE) recovery from secondary materials processing, applied to the New Krankberg Mine in Sweden. The MC method is recognized as an important tool in science and can be considered the most effective quantification approach for uncertainties. The use of a stochastic approach helps to characterize the uncertainties better than a deterministic method. Uncertainty of data can be expressed through a definition of the probability distribution of that data (e.g. through standard deviation or variance). The data used in this study are obtained from: (i) site-specific measured or calculated data, (ii) values based on literature, (iii) the ecoinvent process "rare earth concentrate, 70% REO, from bastnäsite, at beneficiation". Environmental emissions (e.g., particulates, uranium-238, thorium-232), energy and REEs (La, Ce, Nd, Pr, Sm, Dy, Eu, Tb, Y, Sc, Yb, Lu, Tm, Y, Gd) have been inventoried. The study is based on a reference case for the year 2016. The combination of MC analysis with sensitivity analysis is the best solution for quantifying the uncertainty in the LCI/LCA. The reliability of LCA results may be uncertain, to a certain degree, but this uncertainty can be characterized with the help of the MC method.
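
    A hedged illustration of expressing LCI data uncertainty through probability distributions and propagating it by Monte Carlo, as the abstract advocates; the flows, lognormal parameters and impact factors below are invented placeholders, not the ecoinvent or site data.

      import numpy as np

      rng = np.random.default_rng(2016)
      n = 20_000

      # Hypothetical inventory flows per kg REO, each with lognormal uncertainty.
      electricity_kwh = rng.lognormal(np.log(12.0), 0.3, n)
      particulates_kg = rng.lognormal(np.log(0.04), 0.5, n)
      thorium232_bq   = rng.lognormal(np.log(300.0), 0.6, n)

      # Illustrative impact factors (not real characterization factors).
      impact = 0.5 * electricity_kwh + 90.0 * particulates_kg + 0.001 * thorium232_bq

      print("mean impact score  :", impact.mean())
      print("2.5-97.5 percentile:", np.percentile(impact, [2.5, 97.5]))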

  16. Cost-effectiveness of targeted screening for abdominal aortic aneurysm. Monte Carlo-based estimates.

    PubMed

    Pentikäinen, T J; Sipilä, T; Rissanen, P; Soisalon-Soininen, S; Salo, J

    2000-01-01

    This article reports a cost-effectiveness analysis of targeted screening for abdominal aortic aneurysm (AAA). A major emphasis was on the estimation of distributions of costs and effectiveness. We performed a Monte Carlo simulation using the C programming language in a PC environment. Data on survival and costs, and a majority of screening probabilities, were from our own empirical studies. Natural history data were based on the literature. Each screened male gained 0.07 life-years at an incremental cost of FIM 3,300. The expected values differed from zero very significantly. For females, expected gains were 0.02 life-years at an incremental cost of FIM 1,100, which was not statistically significant. Cost-effectiveness ratios and their 95% confidence intervals were FIM 48,000 (27,000-121,000) and 54,000 (22,000-infinity) for males and females, respectively. Sensitivity analysis revealed that the results for males were stable. Individual variation in life-year gains was high. Males seemed to benefit from targeted AAA screening, and the results were stable. As long as the cost-effectiveness ratio is considered acceptable, screening for males seemed to be justified. However, our assumptions about growth and rupture behavior of AAAs might be improved with further clinical and epidemiological studies. As a point estimate, females benefited in a similar manner, but the results were not statistically significant. The evidence of this study did not justify screening of females.
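
    A compact sketch of how Monte Carlo replication yields a distribution, and hence a confidence interval, for a cost-effectiveness ratio; the cost and life-year distributions are arbitrary stand-ins, not the screening study's data (only the two point estimates echo the abstract).

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000

      # Hypothetical per-person incremental costs (FIM) and life-year gains.
      delta_cost = rng.normal(3300.0, 400.0, n)
      delta_lyg  = rng.normal(0.07, 0.015, n)

      ratio = delta_cost / delta_lyg
      print("median cost per life-year:", np.median(ratio))
      print("95% interval             :", np.percentile(ratio, [2.5, 97.5]))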

  17. Analysis of Intervention Strategies for Inhalation Exposure to Polycyclic Aromatic Hydrocarbons and Associated Lung Cancer Risk Based on a Monte Carlo Population Exposure Assessment Model

    PubMed Central

    Zhou, Bin; Zhao, Bin

    2014-01-01

    It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found air pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by population attributable fraction (PAF) and potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had major influence on the modeled PAH inhalation exposure and the rankings of different interventions. The ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and in different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making. PMID:24416436

  18. Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Moser, M.; Reichart, P.; Bergmaier, A.; Greubel, C.; Schiettekatte, F.; Dollinger, G.

    2016-03-01

    Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton-proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their path through the sample. In first order, there is no angular dependence due to elastic scattering. In second order, a path length effect due to different energy loss on the paths of the protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be de-convoluted depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically in first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) in order to calculate the depth of a coincidence event depending on the scattering angle. The code takes individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with an accuracy in depth of about 1% of the sample thickness.

  19. Top Quark Mass Calibration for Monte Carlo Event Generators

    NASA Astrophysics Data System (ADS)

    Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H.; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W.

    2016-12-01

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, m_t^MC. Because of hadronization and parton-shower dynamics, relating m_t^MC to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron-level QCD predictions for observables with kinematic mass sensitivity. Fitting e+e- 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to pythia 8.205, m_t^MC differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_t^MC ≃ m_{t,1 GeV}^MSR.

  20. A hybrid multi-objective imperialist competitive algorithm and Monte Carlo method for robust safety design of a rail vehicle

    NASA Astrophysics Data System (ADS)

    Nejlaoui, Mohamed; Houidi, Ajmi; Affi, Zouhaier; Romdhane, Lotfi

    2017-10-01

    This paper deals with the robust safety design optimization of a rail vehicle system moving on short-radius curved tracks. A combined multi-objective imperialist competitive algorithm and Monte Carlo method is developed and used for the robust multi-objective optimization of the rail vehicle system. This robust optimization of rail vehicle safety considers simultaneously the derailment angle and its standard deviation, where the design parameter uncertainties are taken into account. The obtained results showed that the robust design reduces significantly the sensitivity of the rail vehicle safety to the design parameter uncertainties compared to the deterministic design and to the literature results.

  1. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    EPA Science Inventory

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  2. A Bayesian network meta-analysis for binary outcome: how to do it.

    PubMed

    Greco, Teresa; Landoni, Giovanni; Biondi-Zoccai, Giuseppe; D'Ascenzo, Fabrizio; Zangrillo, Alberto

    2016-10-01

    This study presents an overview of conceptual and practical issues of a network meta-analysis (NMA), particularly focusing on its application to randomised controlled trials with a binary outcome of interest. We start from general considerations on NMA to specifically appraise how to collect study data, structure the analytical network and specify the requirements for different models and parameter interpretations, with the ultimate goal of providing physicians and clinician-investigators a practical tool to understand pros and cons of NMA. Specifically, we outline the key steps, from the literature search to sensitivity analysis, necessary to perform a valid NMA of binomial data, exploiting Markov Chain Monte Carlo approaches. We also apply this analytical approach to a case study on the beneficial effects of volatile agents compared to total intravenous anaesthetics for surgery to further clarify the statistical details of the models, diagnostics and computations. Finally, datasets and models for the freeware WinBUGS package are presented for the anaesthetic agent example.
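
    The Markov Chain Monte Carlo machinery mentioned above can be illustrated on the simplest building block of a binary-outcome meta-analysis: a single log-odds parameter with a binomial likelihood and a vague normal prior, sampled with a random-walk Metropolis step. This is a teaching sketch with made-up arm-level data, not the WinBUGS network model of the paper.

      import numpy as np

      rng = np.random.default_rng(13)
      events, n = 18, 120            # hypothetical arm-level data

      def log_post(theta):
          p = 1.0 / (1.0 + np.exp(-theta))          # inverse-logit
          log_lik = events * np.log(p) + (n - events) * np.log(1.0 - p)
          log_prior = -0.5 * theta**2 / 10.0**2     # Normal(0, 10^2) prior on the log-odds
          return log_lik + log_prior

      theta, chain = 0.0, []
      for _ in range(20_000):
          prop = theta + rng.normal(0.0, 0.3)       # random-walk proposal
          if np.log(rng.random()) < log_post(prop) - log_post(theta):
              theta = prop                          # Metropolis accept
          chain.append(theta)

      post = np.array(chain[5000:])                 # discard burn-in
      print("posterior mean event probability:", np.mean(1 / (1 + np.exp(-post))))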

  3. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    NASA Astrophysics Data System (ADS)

    Hartini, Entin; Andiwijayakusuma, Dinan

    2014-09-01

    This research was carried out to develop a code for uncertainty analysis based on a statistical approach for assessing the uncertainty of input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on the probability density function. The code was developed as a Python script coupled with MCNPX for criticality and burn-up calculations. The simulation is done by modeling the geometry of the PWR core with MCNPX at a power of 54 MW, with UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format. Interfaces were developed for obtaining nuclear data in ACE format from ENDF through a dedicated NJOY calculation for temperature changes over a certain range.

  4. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartini, Entin, E-mail: entin@batan.go.id; Andiwijayakusuma, Dinan, E-mail: entin@batan.go.id

    2014-09-30

    This research was carried out to develop a code for uncertainty analysis based on a statistical approach for assessing the uncertainty of input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on the probability density function. The code was developed as a Python script coupled with MCNPX for criticality and burn-up calculations. The simulation is done by modeling the geometry of the PWR core with MCNPX at a power of 54 MW, with UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format. Interfaces were developed for obtaining nuclear data in ACE format from ENDF through a dedicated NJOY calculation for temperature changes over a certain range.

  5. Status of the Monte Carlo library least-squares (MCLLS) approach for non-linear radiation analyzer problems

    NASA Astrophysics Data System (ADS)

    Gardner, Robin P.; Xu, Libai

    2009-10-01

    The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
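
    The linear library least-squares (LLS) step at the core of the MCLLS approach can be sketched as an ordinary least-squares fit of an unknown pulse-height spectrum to a set of single-element library spectra; the libraries below are synthetic Gaussian peaks rather than Monte Carlo generated ones, so this is only an illustration of the fitting step.

      import numpy as np

      channels = np.arange(512)

      def peak(center, width=8.0):
          return np.exp(-0.5 * ((channels - center) / width) ** 2)

      # Synthetic single-element library spectra (stand-ins for CEARXRF/CEARCPG output).
      libraries = np.column_stack([peak(100), peak(220), peak(350)])

      true_amounts = np.array([2.0, 0.5, 1.3])
      measured = libraries @ true_amounts + np.random.default_rng(8).normal(0, 0.02, channels.size)

      # Linear library least-squares: solve for the elemental amounts.
      amounts, *_ = np.linalg.lstsq(libraries, measured, rcond=None)
      print("fitted elemental amounts:", amounts)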

  6. GIS coupled Multiple Criteria based Decision Support for Classification of Urban Coastal Areas in India

    NASA Astrophysics Data System (ADS)

    Dhiman, R.; Kalbar, P.; Inamdar, A. B.

    2017-12-01

    Coastal area classification in India is a challenge for federal and state government agencies due to a fragile institutional framework, unclear directions in the implementation of coastal regulations, and violations happening at the private and government level. This work is an attempt to improve the objectivity of existing classification methods and to synergize ecological systems with socioeconomic development in coastal cities. We developed a Geographic Information System coupled Multi-Criteria Decision Making (GIS-MCDM) approach to classify urban coastal areas, where utility functions are used to transform the coastal features into quantitative membership values after assessing the sensitivity of the urban coastal ecosystem. Furthermore, these membership values for coastal features are applied in different weighting schemes to derive a Coastal Area Index (CAI), which classifies the coastal areas into four distinct categories, viz. 1) No Development Zone, 2) Highly Sensitive Zone, 3) Moderately Sensitive Zone and 4) Low Sensitive Zone, based on the sensitivity of the urban coastal ecosystem. Mumbai, a coastal megacity in India, is used as a case study for demonstration of the proposed method. Finally, an uncertainty analysis using a Monte Carlo approach to validate the sensitivity of the CAI under multiple specific scenarios is carried out. Results of the CAI method show a clear demarcation of coastal areas in the GIS environment based on ecological sensitivity. The CAI provides better decision support for federal and state level agencies to classify urban coastal areas according to the regional requirements of coastal resources, considering resilience and sustainable development. The CAI method will strengthen the existing institutional framework for decision making in the classification of urban coastal areas where the most effective coastal management options can be proposed.
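
    A toy version of the index aggregation described above: membership values of coastal features are combined with a weighting scheme and the resulting score is mapped to the four sensitivity classes. Feature names, weights and class thresholds are hypothetical, not the study's.

      features = {"mangrove_cover": 0.9, "turtle_nesting": 0.7,
                  "built_up_density": 0.2, "erosion_rate": 0.6}   # membership values in [0, 1]
      weights  = {"mangrove_cover": 0.35, "turtle_nesting": 0.30,
                  "built_up_density": 0.15, "erosion_rate": 0.20}

      cai = sum(features[k] * weights[k] for k in features)

      def classify(score):
          if score >= 0.75: return "No Development Zone"
          if score >= 0.50: return "Highly Sensitive Zone"
          if score >= 0.25: return "Moderately Sensitive Zone"
          return "Low Sensitive Zone"

      print("CAI =", round(cai, 3), "->", classify(cai))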

  7. SU-F-T-619: Dose Evaluation of Specific Patient Plans Based On Monte Carlo Algorithm for a CyberKnife Stereotactic Radiosurgery System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piao, J; PLA 302 Hospital, Beijing; Xu, S

    2016-06-15

    Purpose: This study uses Monte Carlo simulation of the CyberKnife system to develop a third-party tool for verifying the dose of specific patient plans computed by the treatment planning system (TPS). Methods: The treatment head was simulated with the BEAMnrc and DOSXYZnrc codes, and calculated data were compared with measurements to determine the beam parameters. Dose distributions calculated with the Ray-tracing and Monte Carlo algorithms of the TPS (MultiPlan Ver. 4.0.2) and with the in-house Monte Carlo simulation were analyzed for 30 patient plans (10 head, 10 lung and 10 liver cases). A γ analysis with combined 3 mm/3% criteria was used to quantitatively evaluate the differences in accuracy among the three algorithms. Results: After determining the mean energy and FWHM, more than 90% of the global error points were below 2% in the comparison of the PDD and OAR curves, and a relatively ideal Monte Carlo beam model was established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ passing rates of the PTV between the in-house Monte Carlo simulation and the TPS Monte Carlo algorithm were good across the 30 plans (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The passing rates between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good for head and liver plans (95.93±3.12% and 99.84±0.33%, respectively), but the DVH differences for lung plans were pronounced and the corresponding γ passing rate (51.263±38.964%) was poor. Monte Carlo simulation is therefore feasible for verifying the dose distributions of patient plans. Conclusion: The Monte Carlo simulation developed for the CyberKnife system in this study can serve as a third-party reference tool, which plays an important role in the dose verification of patient plans. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). Thanks for the support from Accuray Corp.
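    For readers unfamiliar with the γ analysis referred to above, the following is a simplified one-dimensional sketch of the 3 mm/3% gamma-index comparison between two dose profiles; a clinical implementation works on 3-D dose grids with interpolated searching, which this toy version omits.

```python
# Simplified 1-D sketch of the gamma analysis with 3 mm / 3% (global) criteria.
import numpy as np

def gamma_pass_rate(x_ref, d_ref, x_eval, d_eval, dta=3.0, dd=0.03):
    """Fraction of evaluated points with gamma <= 1 (global dose-difference criterion)."""
    d_max = d_ref.max()
    passes = []
    for xe, de in zip(x_eval, d_eval):
        # Gamma is the minimum combined distance/dose metric over all reference points.
        gamma_sq = ((x_ref - xe) / dta) ** 2 + ((d_ref - de) / (dd * d_max)) ** 2
        passes.append(np.sqrt(gamma_sq.min()) <= 1.0)
    return np.mean(passes)

x = np.linspace(0, 100, 201)                      # positions in mm
ref = np.exp(-((x - 50) / 20) ** 2)               # reference profile (e.g. Monte Carlo)
test = ref * 1.02 + 0.005                         # slightly different algorithm
print(f"gamma passing rate: {gamma_pass_rate(x, ref, x, test):.1%}")
```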

  8. Cost-effectiveness analysis of left atrial appendage occlusion compared with pharmacological strategies for stroke prevention in atrial fibrillation.

    PubMed

    Lee, Vivian Wing-Yan; Tsai, Ronald Bing-Ching; Chow, Ines Hang-Iao; Yan, Bryan Ping-Yen; Kaya, Mehmet Gungor; Park, Jai-Wun; Lam, Yat-Yin

    2016-08-31

    Transcatheter left atrial appendage occlusion (LAAO) is a promising therapy for stroke prophylaxis in non-valvular atrial fibrillation (NVAF), but its cost-effectiveness remains understudied. This study evaluated the cost-effectiveness of LAAO for stroke prophylaxis in NVAF. A Markov decision analytic model was used to compare the cost-effectiveness of LAAO with 7 pharmacological strategies: aspirin alone, clopidogrel plus aspirin, warfarin, dabigatran 110 mg, dabigatran 150 mg, apixaban, and rivaroxaban. Outcome measures included quality-adjusted life years (QALYs), lifetime costs and incremental cost-effectiveness ratios (ICERs). Base-case data were derived from the ACTIVE, RE-LY, ARISTOTLE, ROCKET-AF, PROTECT-AF and PREVAIL trials. One-way sensitivity analyses varied the CHADS2 score, HAS-BLED score, time horizon, and LAAO costs, and a probabilistic sensitivity analysis using 10,000 Monte Carlo simulations was conducted to assess parameter uncertainty. LAAO was considered cost-effective compared with aspirin, clopidogrel plus aspirin, and warfarin, with ICERs of US$5,115, $2,447, and $6,298 per QALY gained, respectively. LAAO was dominant (i.e., less costly and more effective) compared with the other strategies. Sensitivity analyses demonstrated favorable ICERs for LAAO against the other strategies across varied CHADS2 scores, HAS-BLED scores, time horizons (5 to 15 years) and LAAO costs. LAAO was cost-effective in 86.24% of 10,000 simulations using a threshold of US$50,000/QALY. Transcatheter LAAO is therefore cost-effective for prevention of stroke in NVAF compared with the 7 pharmacological strategies: acetylsalicylic acid (ASA) alone, clopidogrel plus ASA, warfarin, dabigatran 110 mg, dabigatran 150 mg, apixaban, and rivaroxaban.
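    A minimal sketch of the probabilistic sensitivity analysis pattern used in such studies is shown below: costs and QALYs for two strategies are sampled, and the fraction of Monte Carlo draws that are cost-effective at a willingness-to-pay threshold is reported. The distributions and numbers are illustrative assumptions, not the trial-derived inputs of this analysis.

```python
# Hedged sketch of a probabilistic sensitivity analysis with Monte Carlo draws.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
wtp = 50_000  # willingness-to-pay threshold, USD per QALY

# Hypothetical parameter distributions (not the published inputs).
cost_laao = rng.gamma(shape=100, scale=220, size=n)   # ~22,000 USD mean lifetime cost
cost_drug = rng.gamma(shape=100, scale=180, size=n)   # ~18,000 USD mean lifetime cost
qaly_laao = rng.normal(8.2, 0.4, size=n)
qaly_drug = rng.normal(7.4, 0.4, size=n)

d_cost = cost_laao - cost_drug
d_qaly = qaly_laao - qaly_drug

# Net monetary benefit avoids division problems when the QALY difference is near zero.
nmb = wtp * d_qaly - d_cost
print(f"probability cost-effective at ${wtp:,}/QALY: {np.mean(nmb > 0):.1%}")
print(f"mean ICER: ${d_cost.mean() / d_qaly.mean():,.0f} per QALY gained")
```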

  9. Cost-utility analysis of 10- and 13-valent pneumococcal conjugate vaccines: Protection at what price in the Thai context?

    PubMed Central

    Kulpeng, Wantanee; Leelahavarong, Pattara; Rattanavipapong, Waranya; Sornsrivichai, Vorasith; Baggett, Henry C.; Meeyai, Aronrag; Punpanich, Warunee; Teerawattananon, Yot

    2015-01-01

    Objective This study aims to evaluate the costs and outcomes of offering the 10-valent pneumococcal conjugate vaccine (PCV10) and the 13-valent pneumococcal conjugate vaccine (PCV13) in Thailand compared with the current situation of no PCV vaccination. Methods Two vaccination schedules were considered: a two-dose primary series plus a booster dose (2 + 1) and a three-dose primary series plus a booster dose (3 + 1). A cost-utility analysis was conducted from a societal perspective. A Markov simulation model was used to estimate the relevant costs and health outcomes over a lifetime horizon. Costs were collected and valued for the year 2010. The results are reported as incremental cost-effectiveness ratios (ICERs) in Thai Baht (THB) per quality-adjusted life year (QALY) gained, with future costs and outcomes discounted at 3% per annum. One-way sensitivity analysis and probabilistic sensitivity analysis using Monte Carlo simulation were performed to assess parameter uncertainty. Results Under the base-case scenario of a 2 + 1 dose schedule and five-year protection, without indirect vaccine effects, the ICERs for PCV10 and PCV13 were THB 1,368,072 and THB 1,490,305 per QALY gained, respectively. With indirect vaccine effects, the ICER of PCV10 was THB 519,399 and that of PCV13 was THB 527,378. The model was sensitive to the discount rate, the duration of vaccine protection and the incidence of pneumonia for all age groups. Conclusions At current prices, PCV10 and PCV13 are not cost-effective in Thailand. Inclusion of indirect vaccine effects substantially reduced the ICERs for both vaccines but did not make them cost-effective. PMID:23588084

  10. Cost-effectiveness analysis of fecal microbiota transplantation for recurrent Clostridium difficile infection.

    PubMed

    Varier, Raghu U; Biltaji, Eman; Smith, Kenneth J; Roberts, Mark S; Kyle Jensen, M; LaFleur, Joanne; Nelson, Richard E

    2015-04-01

    Clostridium difficile infection (CDI) places a high burden on the US healthcare system. Recurrent CDI (RCDI) occurs frequently. Recently proposed guidelines from the American College of Gastroenterology (ACG) and the American Gastroenterology Association (AGA) include fecal microbiota transplantation (FMT) as a therapeutic option for RCDI. The purpose of this study was to estimate the cost-effectiveness of FMT compared with vancomycin for the treatment of RCDI in adults, specifically following guidelines proposed by the ACG and AGA. We constructed a decision-analytic computer simulation using inputs from the published literature to compare the standard approach using tapered vancomycin to FMT for RCDI from the third-party payer perspective. Our effectiveness measure was quality-adjusted life years (QALYs). Because simulated patients were followed for 90 days, discounting was not necessary. One-way and probabilistic sensitivity analyses were performed. Base-case analysis showed that FMT was less costly ($1,669 vs $3,788) and more effective (0.242 QALYs vs 0.235 QALYs) than vancomycin for RCDI. One-way sensitivity analyses showed that FMT was the dominant strategy (both less expensive and more effective) if cure rates for FMT and vancomycin were ≥70% and <91%, respectively, and if the cost of FMT was <$3,206. Probabilistic sensitivity analysis, varying all parameters simultaneously, showed that FMT was the dominant strategy over 10,000 second-order Monte Carlo simulations. Our results suggest that FMT may be a cost-saving intervention in managing RCDI. Implementation of FMT for RCDI may help decrease the economic burden to the healthcare system.

  11. Emission rate modeling and risk assessment at an automobile plant from painting operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, A.; Shrivastava, A.; Kulkarni, A.

    Pollution from painting operations at automobile plants has been addressed in the Clean Air Act Amendments (1990). Pollutant emissions from automobile painting operations have mostly been estimated by approximate procedures rather than by actual calculations. The purpose of this study was to develop a methodology for calculating pollutant emissions from the painting operation in an automobile plant. Five scenarios involving an automobile painting operation located in Columbus (Ohio) were studied for pollutant emissions and the associated risk. A sensitivity analysis of the risk parameters was performed using Crystal Ball®, which is based on the Monte Carlo principle. The most sensitive factor in the risk analysis was the ground-level concentration of the pollutants. All scenarios studied met the safety goal (a risk value of 1 × 10⁻⁶) with different confidence levels. The highest level of confidence in meeting the safety goal was displayed by Scenario 1 (Alpha Industries). The results from the scenarios suggest that risk is associated with the quantity of toxic pollutants released. The sensitivity analysis of the various parameters shows that the average paint spray rate is the most important parameter in estimating pollutant emissions from the painting operations. The entire study is a complete module that can be used by environmental pollution control agencies for estimating pollution levels and the associated risk. The study can be further extended to other operations in the automobile industry or to different industries.
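    The following is a conceptual sketch of this kind of Monte Carlo risk calculation: uncertain inputs are sampled, propagated to a ground-level concentration and a risk estimate, and the confidence of meeting the 1 × 10⁻⁶ safety goal is reported. The distributions are invented for illustration and do not reproduce the Crystal Ball® study.

```python
# Conceptual Monte Carlo risk calculation with invented input distributions.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

spray_rate = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)    # kg paint / h (assumed)
voc_fraction = rng.uniform(0.3, 0.5, size=n)                       # emitted fraction (assumed)
dispersion = rng.lognormal(mean=np.log(1e-5), sigma=0.5, size=n)   # (mg/m^3) per (kg/h) (assumed)
unit_risk = rng.triangular(1e-3, 2e-3, 4e-3, size=n)               # risk per mg/m^3 (assumed)

ground_conc = spray_rate * voc_fraction * dispersion               # ground-level concentration, mg/m^3
risk = ground_conc * unit_risk

print(f"confidence of meeting the 1e-6 safety goal: {np.mean(risk < 1e-6):.1%}")
```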

  12. Improving the quality of pressure ulcer care with prevention: a cost-effectiveness analysis.

    PubMed

    Padula, William V; Mishra, Manish K; Makic, Mary Beth F; Sullivan, Patrick W

    2011-04-01

    In October 2008, the Centers for Medicare and Medicaid Services discontinued reimbursement for hospital-acquired pressure ulcers (HAPUs), thus placing stress on hospitals to prevent the incidence of this costly condition. To evaluate whether prevention methods are cost-effective compared with standard care in the management of HAPUs. A semi-Markov model simulated the admission of patients to an acute care hospital from the time of admission through 1 year using the societal perspective. The model simulated health states that could potentially lead to an HAPU through either the practice of "prevention" or "standard care." Univariate sensitivity analyses, threshold analyses, and Bayesian multivariate probabilistic sensitivity analysis using 10,000 Monte Carlo simulations were conducted. Cost per quality-adjusted life-years (QALYs) gained for the prevention of HAPUs. Prevention was cost saving and resulted in greater expected effectiveness compared with the standard care approach per hospitalization. The expected cost of prevention was $7276.35, and the expected effectiveness was 11.241 QALYs. The expected cost for standard care was $10,053.95, and the expected effectiveness was 9.342 QALYs. The multivariate probabilistic sensitivity analysis showed that prevention resulted in cost savings in 99.99% of the simulations. The threshold cost of prevention was $821.53 per day per person, whereas the cost of prevention was estimated to be $54.66 per day per person. This study suggests that it is more cost-effective to pay for prevention of HAPUs than for standard care. Continuous preventive care of HAPUs in acutely ill patients could potentially reduce incidence and prevalence, as well as lead to lower expenditures.

  13. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    NASA Astrophysics Data System (ADS)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14). The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective of determining the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. The Sobol' sensitivity results suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
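    A compact sketch of the variance-based (Sobol'-style) sensitivity calculation is given below for a toy deposition-velocity model; the model and parameter ranges are invented, and only the first-order pick-and-freeze estimator is shown.

```python
# First-order Sobol' sensitivity indices for a toy deposition-velocity model.
import numpy as np

rng = np.random.default_rng(4)

def vd_model(ustar, dp, rh):
    # Toy stand-in for a dry deposition parameterization (deposition velocity, cm/s).
    return 0.1 * ustar + 0.02 / dp + 0.001 * rh * ustar

names = ["friction velocity", "particle diameter", "relative humidity"]
lo = np.array([0.1, 0.05, 30.0])      # assumed lower bounds
hi = np.array([0.8, 2.50, 95.0])      # assumed upper bounds

n = 20_000
A = lo + (hi - lo) * rng.random((n, 3))
B = lo + (hi - lo) * rng.random((n, 3))
yA, yB = vd_model(*A.T), vd_model(*B.T)
var_y = np.var(np.concatenate([yA, yB]))

# Pick-and-freeze (Saltelli-type) estimator of the first-order indices.
for i, name in enumerate(names):
    AB = A.copy()
    AB[:, i] = B[:, i]                # column i taken from B, the rest from A
    s1 = np.mean(yB * (vd_model(*AB.T) - yA)) / var_y
    print(f"S1[{name}] ~ {s1:.2f}")
```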

  14. Early assessment of the likely cost-effectiveness of a new technology: A Markov model with probabilistic sensitivity analysis of computer-assisted total knee replacement.

    PubMed

    Dong, Hengjin; Buxton, Martin

    2006-01-01

    The objective of this study is to apply a Markov model to compare the cost-effectiveness of total knee replacement (TKR) using computer-assisted surgery (CAS) with that of TKR using a conventional manual method in the absence of formal clinical trial evidence. A structured search was carried out to identify evidence relating to the clinical outcome, cost, and effectiveness of TKR. Nine Markov states were identified based on the progress of the disease after TKR. Effectiveness was expressed in quality-adjusted life years (QALYs). The simulation was carried out initially for 120 monthly cycles, starting with 1,000 TKRs. A discount rate of 3.5 percent was used for both cost and effectiveness in the incremental cost-effectiveness analysis. A probabilistic sensitivity analysis was then carried out using a Monte Carlo approach with 10,000 iterations. Computer-assisted TKR was a cost-effective technology in the long term, but the QALYs gained were small. After the first 2 years, computer-assisted TKR was dominant because it was less costly and produced more QALYs. The incremental cost-effectiveness ratio (ICER) was sensitive to the "effect of CAS," to the CAS extra cost, and to the utility of the state "Normal health after primary TKR," but it was not sensitive to the utilities of the other Markov states. Both probabilistic and deterministic analyses produced similar cumulative serious or minor complication rates and complex or simple revision rates. They also produced similar ICERs. Compared with conventional TKR, computer-assisted TKR is a cost-saving technology in the long term and may offer small additional QALYs. The "effect of CAS" is to reduce revision rates and complications through more accurate and precise alignment, and although the conclusions from the model, even when allowing for a full probabilistic analysis of uncertainty, are clear, the "effect of CAS" on the rate of revisions awaits long-term clinical evidence.
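    The sketch below shows the general shape of such a cohort Markov model with monthly cycles, 3.5% annual discounting, and an ICER comparison of two strategies. The three-state structure, transition probabilities, costs and utilities are hypothetical simplifications, not the nine-state model described above.

```python
# Hypothetical three-state cohort Markov model with monthly cycles and discounting.
import numpy as np

def run_markov(p_revision, cost_surgery, n_cycles=120, annual_discount=0.035):
    # States: 0 = well after TKR, 1 = post-revision, 2 = dead (monthly transitions).
    P = np.array([[1 - p_revision - 0.001, p_revision, 0.001],
                  [0.0,                    0.999,      0.001],
                  [0.0,                    0.0,        1.0  ]])
    monthly_cost = np.array([20.0, 60.0, 0.0])       # follow-up cost per state per month
    annual_utility = np.array([0.85, 0.70, 0.0])     # quality weight per state
    monthly_disc = (1 + annual_discount) ** (1 / 12)

    state = np.array([1.0, 0.0, 0.0])                # whole cohort starts well after surgery
    cost, qaly = cost_surgery, 0.0
    for t in range(n_cycles):
        disc = monthly_disc ** -t
        cost += disc * float(state @ monthly_cost)
        qaly += disc * float(state @ annual_utility) / 12.0
        state = state @ P
    return cost, qaly

# Conventional vs computer-assisted TKR: CAS assumed dearer up front but with fewer revisions.
c_std, q_std = run_markov(p_revision=0.0020, cost_surgery=8000.0)
c_cas, q_cas = run_markov(p_revision=0.0012, cost_surgery=8900.0)
icer = (c_cas - c_std) / (q_cas - q_std)
print(f"incremental cost: {c_cas - c_std:,.0f}; incremental QALYs: {q_cas - q_std:.4f}; ICER: {icer:,.0f}")
```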

  15. The Advantages of Hybrid 4DEnVar in the Context of the Forecast Sensitivity to Initial Conditions

    NASA Astrophysics Data System (ADS)

    Song, Hyo-Jong; Shin, Seoleun; Ha, Ji-Hyun; Lim, Sujeong

    2017-11-01

    Hybrid four-dimensional ensemble variational data assimilation (hybrid 4DEnVar) is a prospective successor to three-dimensional variational data assimilation (3DVar) in operational weather prediction centers that are currently developing a new weather prediction model and those that do not operate adjoint models. In experiments using real observations, hybrid 4DEnVar improved Northern Hemisphere (NH; 20°N-90°N) 500 hPa geopotential height forecasts up to 5 days in a NH summer month compared to 3DVar, with statistical significance. This result is verified against ERA-Interim through a Monte Carlo test. A regression analysis associates the sensitivity of the 5 day forecast with the quality of the initial condition. The increased analysis skill for midtropospheric midlatitude temperature and subtropical moisture has the most apparent effect on forecast skill in the NH, including a typhoon prediction case. By attributing the analysis improvements from hybrid 4DEnVar separately to the ensemble background error covariance (BEC), its four-dimensional (4-D) extension, and the climatological BEC, it is revealed that the ensemble BEC contributes to the subtropical moisture analysis, whereas the 4-D extension contributes to the midtropospheric midlatitude temperature. This result implies that hourly wind-mass correlations within the 6 h analysis window are required to fully exploit the potential of hybrid 4DEnVar for midlatitude temperature analysis. However, the temporal ensemble correlations on an hourly time scale between moisture and other variables are not valid, so they do not improve the hybrid 4DEnVar analysis.

  16. Predicting the sensitivity of the beryllium/scintillator layer neutron detector using Monte Carlo and experimental response functions.

    PubMed

    Styron, J D; Cooper, G W; Ruiz, C L; Hahn, K D; Chandler, G A; Nelson, A J; Torres, J A; McWatters, B R; Carpenter, Ken; Bonura, M A

    2014-11-01

    A methodology for obtaining empirical curves relating absolute measured scintillation light output to beta energy deposited is presented. Output signals were measured from thin plastic scintillator using NIST traceable beta and gamma sources and MCNP5 was used to model the energy deposition from each source. Combining the experimental and calculated results gives the desired empirical relationships. To validate, the sensitivity of a beryllium/scintillator-layer neutron activation detector was predicted and then exposed to a known neutron fluence from a Deuterium-Deuterium fusion plasma (DD). The predicted and the measured sensitivity were in statistical agreement.

  17. The effects of SENSE on PROPELLER imaging.

    PubMed

    Chang, Yuchou; Pipe, James G; Karis, John P; Gibbs, Wende N; Zwart, Nicholas R; Schär, Michael

    2015-12-01

    To study how sensitivity encoding (SENSE) impacts periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) image quality, including signal-to-noise ratio (SNR), robustness to motion, precision of motion estimation, and image quality. Five volunteers were imaged with three sets of scans. A rapid method for generating the g-factor map was proposed and validated via Monte Carlo simulations. Sensitivity maps were extrapolated to increase the area over which SENSE can be performed and therefore enhance the robustness to head motion. The precision of motion estimation of PROPELLER blades that are unfolded with these sensitivity maps was investigated. An interleaved R-factor PROPELLER sequence was used to acquire data with similar amounts of motion with and without SENSE acceleration. Two neuroradiologists independently and blindly compared 214 image pairs. The proposed method of g-factor calculation gave results similar to those provided by the Monte Carlo method. Extrapolation and rotation of the sensitivity maps allowed for continued robustness of SENSE unfolding in the presence of motion. SENSE-widened blades improved the precision of rotation and translation estimation. PROPELLER images with a SENSE factor of 3 outperformed the traditional PROPELLER images when reconstructing the same number of blades. SENSE not only accelerates PROPELLER but can also improve robustness and precision of head motion correction, which improves overall image quality even when SNR is lost due to acceleration. The reduction of SNR, as a penalty of acceleration, is characterized by the proposed g-factor method. © 2014 Wiley Periodicals, Inc.

  18. Computer simulations for bioequivalence trials: Selection of analyte in BCS class II and IV drugs with first-pass metabolism, two metabolic pathways and intestinal efflux transporter.

    PubMed

    Mangas-Sanjuan, Victor; Navarro-Fontestad, Carmen; García-Arieta, Alfredo; Trocóniz, Iñaki F; Bermejo, Marival

    2018-05-30

    A semi-physiological two-compartment pharmacokinetic model with two active metabolites (a primary metabolite (PM) and a secondary metabolite (SM)), a saturable or non-saturable pre-systemic efflux transporter, and intestinal and hepatic metabolism has been developed. The aim of this work is to explore, across several scenarios, which analyte (parent drug or either of the metabolites) is the most sensitive to changes in drug product performance (i.e. differences in in vivo dissolution) and to make recommendations based on the simulation outcomes. A total of 128 scenarios (2 Biopharmaceutics Classification System (BCS) drug types, 2 levels of the efflux transporter Michaelis constant (KM,Pgp), 4 metabolic scenarios, 2 dose levels, and 4 quality levels of the drug product) were simulated for BCS class II and IV drugs. Monte Carlo simulations of all bioequivalence studies were performed in NONMEM 7.3. Results showed that the parent drug (PD) was the most sensitive analyte for bioequivalence trials in all the studied scenarios. The PM and SM showed less than or the same sensitivity to detect differences in pharmaceutical quality as the PD. Another relevant result is that the mean point estimates of Cmax and AUC obtained from the Monte Carlo simulations allow the most sensitive analyte to be selected more accurately than the criterion based on the percentage of failed or successful BE studies, even for metabolites, which frequently show greater variability than the PD. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories

    NASA Technical Reports Server (NTRS)

    Olds, John; Way, David

    2001-01-01

    Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approximately every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results. If the results are unacceptable, the current practice is to use an iterative, trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.
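    As a toy illustration of the forward Monte Carlo dispersion analysis described above, the sketch below disperses a few entry uncertainties through a crude surrogate range model and collects footprint statistics; the dynamics are placeholders, not a planetary-entry simulation.

```python
# Toy Monte Carlo dispersion analysis: sample input uncertainties, propagate a
# surrogate downrange model, and collect statistics on the landing footprint.
import numpy as np

rng = np.random.default_rng(5)
n = 5_000

# Dispersed input uncertainties (all values illustrative).
entry_angle = rng.normal(-12.0, 0.25, size=n)        # deg
entry_speed = rng.normal(5600.0, 20.0, size=n)       # m/s
density_scale = rng.normal(1.0, 0.05, size=n)        # atmosphere density multiplier
cd_scale = rng.normal(1.0, 0.03, size=n)             # drag-coefficient multiplier

# Crude linear surrogate for downrange distance (km) as a function of the inputs.
downrange = (800.0 + 25.0 * (entry_angle + 12.0)
             + 0.05 * (entry_speed - 5600.0)
             - 120.0 * (density_scale - 1.0)
             - 80.0 * (cd_scale - 1.0))

lo, hi = np.percentile(downrange, [0.5, 99.5])
print(f"99% landing footprint: {hi - lo:.1f} km about a {downrange.mean():.0f} km nominal")
```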

  20. Effects of glyphosate formulations on the population dynamics of two freshwater cladoceran species.

    PubMed

    Reno, U; Doyle, S R; Momo, F R; Regaldo, L; Gagneten, A M

    2018-02-05

    The general objective of this work is to experimentally assess the effects of acute glyphosate pollution on two freshwater cladoceran species (Daphnia magna and Ceriodaphnia dubia) and to use this information to predict the population dynamics and the potential for recovery of exposed organisms. Five to six concentrations of four formulations of glyphosate (4-Gly) (Eskoba®, Panzer Gold®, Roundup Ultramax® and Sulfosato Touchdown®) were evaluated in both cladoceran species through acute tests and 15-day recovery tests in order to estimate the population dynamics of the microcrustaceans. The endpoints of the recovery test were survival, growth (number of molts), fecundity, and the intrinsic population growth rate (r). A matrix population model (MPM) was applied to r of the individuals surviving the acute tests, followed by a Monte Carlo simulation study. Among the 4-Gly tested, Sulfosato Touchdown® showed the highest toxicity, and C. dubia was the most sensitive species. The Monte Carlo simulation study showed an average value of λ always <1 for D. magna, indicating that its populations would not be able to survive under natural environmental conditions after an acute Gly exposure between 0.25 and 35 mg a.e. L⁻¹. For C. dubia, the average values of λ after exposure to Roundup Ultramax® were 1.30 and 1.20 for 1.21 and 2.5 mg a.e. L⁻¹, respectively. The combined methodology (recovery tests followed by MPM analysis with a Monte Carlo simulation study) is proposed to integrate key demographic parameters and predict the possible fate of microcrustacean populations after exposure to acute 4-Gly contamination events.
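    The matrix-population-model step can be sketched as follows: a small stage-structured projection matrix is built from survival and fecundity, its dominant eigenvalue gives the population growth rate λ, and a Monte Carlo loop propagates parameter uncertainty. The stage structure and vital rates below are illustrative, not the cladoceran data of the study.

```python
# Matrix population model: dominant eigenvalue as growth rate, with Monte Carlo uncertainty.
import numpy as np

rng = np.random.default_rng(6)

def growth_rate(juvenile_survival, adult_survival, fecundity):
    # Two-stage Lefkovitch matrix: juveniles mature in one time step.
    M = np.array([[0.0,               fecundity],
                  [juvenile_survival, adult_survival]])
    return float(np.max(np.abs(np.linalg.eigvals(M))))

lams = []
for _ in range(10_000):
    # Vital rates after a hypothetical exposure effect, with sampling uncertainty.
    sj = np.clip(rng.normal(0.45, 0.08), 0, 1)
    sa = np.clip(rng.normal(0.55, 0.08), 0, 1)
    f = max(rng.normal(0.8, 0.2), 0.0)
    lams.append(growth_rate(sj, sa, f))

lams = np.array(lams)
print(f"mean lambda = {lams.mean():.2f}; P(lambda < 1) = {np.mean(lams < 1):.1%}")
```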

  1. Monte Carlo source simulation technique for solution of interference reactions in INAA experiments: a preliminary report

    NASA Astrophysics Data System (ADS)

    Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.

    2004-04-01

    A new method for resolving interference reactions in neutron activation analysis experiments, based on Monte Carlo source simulation, has been developed. The neutron spectrum at the sample location has been simulated using the Monte Carlo code MCNP, and the contributions of different elements to a specified gamma line have been determined. The resulting response matrix has been used, together with the measured peak areas, to determine the sample masses of the elements of interest. A number of benchmark experiments have been performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for eliminating interference reactions in neutron activation analysis.

  2. Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission

    NASA Technical Reports Server (NTRS)

    Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  3. Development of a satellite SAR image spectra and altimeter wave height data assimilation system for ERS-1

    NASA Technical Reports Server (NTRS)

    Hasselmann, Klaus; Hasselmann, Susanne; Bauer, Eva; Bruening, Claus; Lehner, Susanne; Graber, Hans; Lionello, Piero

    1988-01-01

    The applicability of ERS-1 wind and wave data for wave models was studied using the WAM third-generation wave model and SEASAT altimeter, scatterometer and SAR data. A series of global wave hindcasts is made using surface stress and surface wind fields obtained by assimilation of scatterometer data for the full 96-day SEASAT period, and also using two wind field analyses for shorter periods obtained by assimilation with the higher-resolution ECMWF T63 model and by subjective analysis methods. It is found that wave models respond very sensitively to inconsistencies in wind field analyses and therefore provide a valuable data validation tool. Comparisons between SEASAT SAR image spectra and theoretical SAR spectra derived from the hindcast wave spectra by Monte Carlo simulations yield good overall agreement for 32 cases representing a wide variety of wave conditions. It is concluded that SAR wave imaging is sufficiently well understood to apply SAR image spectra with confidence for wave studies if supported by realistic wave models and theoretical computations of the strongly nonlinear mapping of the wave spectrum into the SAR image spectrum. A closed nonlinear integral expression for this spectral mapping relation is derived which avoids the inherent statistical errors of Monte Carlo computations and may prove to be more efficient numerically.

  4. OEDIPE: a new graphical user interface for fast construction of numerical phantoms and MCNP calculations.

    PubMed

    Franck, D; de Carlan, L; Pierrat, N; Broggio, D; Lamart, S

    2007-01-01

    Although great efforts have been made to improve the physical phantoms used to calibrate in vivo measurement systems, these phantoms represent a single average counting geometry and usually contain a uniform distribution of the radionuclide over the tissue substitute. As a matter of fact, significant corrections must be made to phantom-based calibration factors in order to obtain absolute calibration efficiencies applicable to a given individual. The importance of these corrections is particularly crucial when considering in vivo measurements of low energy photons emitted by radionuclides deposited in the lung such as actinides. Thus, it was desirable to develop a method for calibrating in vivo measurement systems that is more sensitive to these types of variability. Previous works have demonstrated the possibility of such a calibration using the Monte Carlo technique. Our research programme extended such investigations to the reconstruction of numerical anthropomorphic phantoms based on personal physiological data obtained by computed tomography. New procedures based on a new graphical user interface (GUI) for development of computational phantoms for Monte Carlo calculations and data analysis are being developed to take advantage of recent progress in image-processing codes. This paper presents the principal features of this new GUI. Results of calculations and comparison with experimental data are also presented and discussed in this work.

  5. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perko, Z.; Gilli, L.; Lathouwers, D.

    2013-07-01

    Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years, however, polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is shown to be advantageous from both an accuracy and a computational point of view. As a demonstration, the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)
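    A minimal non-intrusive polynomial chaos sketch is given below: a scalar model of one standard-normal input is expanded in probabilists' Hermite polynomials fitted by least squares, and the propagated mean and standard deviation are compared with plain Monte Carlo sampling. The model is a cheap analytic stand-in, not a reactor transient code, and the sketch omits the adaptive sparse grids and adaptive basis construction of the paper.

```python
# Non-intrusive polynomial chaos expansion (1-D, Hermite basis) vs Monte Carlo sampling.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def model(xi):
    # Placeholder response, e.g. a peak temperature as a function of one normalized input.
    return 900.0 + 40.0 * xi + 6.0 * xi ** 2 + 2.0 * np.sin(xi)

rng = np.random.default_rng(7)
order = 6

# Fit the PC coefficients by least squares from a modest number of training runs.
xi_train = rng.standard_normal(200)
V = hermevander(xi_train, order)               # probabilists' Hermite basis He_0..He_6
coeffs, *_ = np.linalg.lstsq(V, model(xi_train), rcond=None)

# For a standard normal input, E[He_m He_n] = n! delta_mn, so the output moments
# follow directly from the coefficients.
mean_pce = coeffs[0]
var_pce = sum(coeffs[n] ** 2 * math.factorial(n) for n in range(1, order + 1))

# Reference: direct Monte Carlo with many more model evaluations.
y_mc = model(rng.standard_normal(200_000))
print(f"PCE: mean={mean_pce:.2f}, std={math.sqrt(var_pce):.2f}")
print(f"MC : mean={y_mc.mean():.2f}, std={y_mc.std():.2f}")
```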

  6. A Monte Carlo Analysis of Weight Data from UF 6 Cylinder Feed and Withdrawal Stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garner, James R; Whitaker, J Michael

    2015-01-01

    As nuclear facilities handling uranium hexafluoride (UF6) cylinders (e.g., UF6 production, enrichment, and fuel fabrication) increase in number and throughput, more automated safeguards measures will likely be needed to enable the International Atomic Energy Agency (IAEA) to achieve its safeguards objectives in a fiscally constrained environment. Monitoring the process data from the load cells built into the cylinder feed and withdrawal (F/W) stations (i.e., cylinder weight data) can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards task of confirming operations as declared (i.e., no undeclared activities). Researchers at Oak Ridge National Laboratory, Los Alamos National Laboratory, the Joint Research Center (in Ispra, Italy), and the University of Glasgow are investigating how this weight data can be used for IAEA safeguards purposes while fully protecting the operator's proprietary and sensitive information related to operations. A key question that must be resolved is what frequency of recording data from the process F/W stations is necessary to achieve safeguards objectives. This paper summarizes Monte Carlo simulations of typical feed, product, and tails withdrawal cycles and evaluates longer sampling intervals to determine the expected errors caused by low-frequency sampling and their impact on material balance calculations.

  7. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 2: Sensitivity tests and results

    PubMed Central

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    Part 1 of this series presented a Monte Carlo Bayesian method for constraining a complex statistical model of global circulation model (GCM) sub-gridcolumn moisture variability using high-resolution Moderate Resolution Imaging Spectroradiometer (MODIS) cloud data, thereby permitting parameter estimation and cloud data assimilation for large-scale models. This article performs some basic testing of this new approach, verifying that it does indeed reduce mean and standard deviation biases significantly with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud-top pressure and that it also improves the simulated rotational–Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the Ozone Monitoring Instrument (OMI). Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows non-gradient-based jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast, where the background state has a clear swath. This article also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in passive-radiometer-retrieved cloud observables on cloud vertical structure, beyond cloud-top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification from Riishojgaard provides some help in this respect, by better honouring inversion structures in the background state. PMID:29618848

  8. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 2: Sensitivity Tests and Results

    NASA Technical Reports Server (NTRS)

    Norris, Peter M.; da Silva, Arlindo M.

    2016-01-01

    Part 1 of this series presented a Monte Carlo Bayesian method for constraining a complex statistical model of global circulation model (GCM) sub-gridcolumn moisture variability using high-resolution Moderate Resolution Imaging Spectroradiometer (MODIS) cloud data, thereby permitting parameter estimation and cloud data assimilation for large-scale models. This article performs some basic testing of this new approach, verifying that it does indeed reduce mean and standard deviation biases significantly with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud-top pressure and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the Ozone Monitoring Instrument (OMI). Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows non-gradient-based jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast, where the background state has a clear swath. This article also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in passive-radiometer-retrieved cloud observables on cloud vertical structure, beyond cloud-top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification from Riishojgaard provides some help in this respect, by better honouring inversion structures in the background state.

  9. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 2: Sensitivity tests and results.

    PubMed

    Norris, Peter M; da Silva, Arlindo M

    2016-07-01

    Part 1 of this series presented a Monte Carlo Bayesian method for constraining a complex statistical model of global circulation model (GCM) sub-gridcolumn moisture variability using high-resolution Moderate Resolution Imaging Spectroradiometer (MODIS) cloud data, thereby permitting parameter estimation and cloud data assimilation for large-scale models. This article performs some basic testing of this new approach, verifying that it does indeed reduce mean and standard deviation biases significantly with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud-top pressure and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the Ozone Monitoring Instrument (OMI). Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows non-gradient-based jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast, where the background state has a clear swath. This article also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in passive-radiometer-retrieved cloud observables on cloud vertical structure, beyond cloud-top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification from Riishojgaard provides some help in this respect, by better honouring inversion structures in the background state.

  10. An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm

    NASA Astrophysics Data System (ADS)

    Jacques, Robert; McNutt, Todd

    2014-03-01

    Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases accuracy close to that of Monte Carlo was achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e. < 0.1 g/cm3) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo based methods with run times of a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.

  11. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features.
    New capabilities include:
    • ENDF/B-VII.1 nuclear data libraries (CE and MG) with enhanced group structures,
    • Neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
    • Covariance data for fission product yields and decay constants,
    • Stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
    • Parallel calculations with KENO,
    • Problem-dependent temperature corrections for CE calculations,
    • CE shielding and criticality accident alarm system analysis with MAVRIC,
    • CE depletion with TRITON (T5-DEPL/T6-DEPL),
    • CE sensitivity/uncertainty analysis with TSUNAMI-3D,
    • Simplified and efficient LWR lattice physics with Polaris,
    • Large-scale detailed spent fuel characterization with ORIGAMI and ORIGAMI Automator,
    • Advanced fission source convergence acceleration capabilities with Sourcerer,
    • Nuclear data library generation with AMPX, and
    • Integrated user interface with Fulcrum.
    Enhanced capabilities include:
    • Accurate and efficient CE Monte Carlo methods for eigenvalue and fixed-source calculations,
    • Improved MG resonance self-shielding methodologies and data,
    • Resonance self-shielding with modernized and efficient XSProc integrated into most sequences,
    • Accelerated calculations with TRITON/NEWT (generally 4x faster than SCALE 6.1),
    • Spent fuel characterization with 1470 new reactor-specific libraries for ORIGEN,
    • Modernization of ORIGEN (Chebyshev Rational Approximation Method [CRAM] solver, API for high-performance depletion, new keyword input format),
    • Extension of the maximum mixture number from the previous limit of 2147 to approximately 2 billion,
    • Nuclear data formats enabling the use of more than 999 energy groups,
    • Updated standard composition library to provide more accurate use of natural abundances, and
    • Numerous other enhancements for improved usability and stability.

  12. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    NASA Astrophysics Data System (ADS)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which are the input data that most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a neighborhood of the reference parameter values with a second-order approximation. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model resulted less sensitive are the basal sliding coefficient and the mean ice shelves viscosity.
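    The difference between local and global indices discussed above can be illustrated with the toy model below, which computes scaled finite-difference (local) indices at a reference point and first-order variance-based (global) indices by Monte Carlo sampling; the model is an arbitrary nonlinear function and has nothing to do with ice-sheet physics.

```python
# Local (derivative-based) vs global (variance-based) sensitivity indices for a toy model.
import numpy as np

rng = np.random.default_rng(8)

def model(x):
    a, b = x[..., 0], x[..., 1]
    return a + 0.2 * b + 1.5 * b ** 2          # strongly nonlinear in b

x_ref = np.array([1.0, 0.0])                    # reference parameter values
sigma = np.array([0.1, 0.5])                    # assumed parameter uncertainties

# Local indices: central finite-difference derivatives scaled by the uncertainties.
eps = 1e-4
local = []
for i in range(2):
    dx = np.zeros(2); dx[i] = eps
    local.append(abs(model(x_ref + dx) - model(x_ref - dx)) / (2 * eps) * sigma[i])

# Global indices: first-order variance contributions from Monte Carlo sampling.
n = 50_000
A = x_ref + sigma * rng.standard_normal((n, 2))
B = x_ref + sigma * rng.standard_normal((n, 2))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))
glob = []
for i in range(2):
    AB = A.copy(); AB[:, i] = B[:, i]
    glob.append(np.mean(yB * (model(AB) - yA)) / var_y)

print("local (linearized) indices :", np.round(local, 3))
print("global first-order indices :", np.round(glob, 3))
```

    In this toy case the linearized indices rank the two parameters as equally important, while the variance-based indices reveal the dominant nonlinear contribution of the second parameter, which is the kind of discrepancy the comparison of indices in the study is designed to expose.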

  13. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation implemented in the Monte Carlo code MCS is described. This method was applied to the calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows that Monte Carlo calculations obtained in different ways do not agree: the MCS calculations with the given experimental bucklings; the MCS calculations with bucklings evaluated from full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients that take into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the Keff value.

  14. Computer program uses Monte Carlo techniques for statistical system performance analysis

    NASA Technical Reports Server (NTRS)

    Wohl, D. P.

    1967-01-01

    Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.

  15. Design considerations for a C-shaped PET system, dedicated to small animal brain imaging, using GATE Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Efthimiou, N.; Papadimitroulas, P.; Kostou, T.; Loudos, G.

    2015-09-01

    Commercial clinical and preclinical PET scanners rely on a full cylindrical geometry for whole-body scans as well as for dedicated organs. In this study we propose the construction of a low-cost dual-head C-shaped PET system dedicated to small-animal brain imaging. Monte Carlo simulation studies were performed using the GATE toolkit to evaluate the optimum design in terms of sensitivity, distortions in the FOV and spatial resolution. The PET model is based on SiPMs and pixelated BGO arrays. Four different configurations, with C-angles of 0°, 15°, 30° and 45° between the modules, were considered. Geometrical phantoms were used for the evaluation process. STIR software, extended by an efficient multi-threaded ray tracing technique, was used for the image reconstruction. The algorithm automatically adjusts the size of the FOV according to the shape of the detector geometry. The results showed a sensitivity improvement of ∼15% for the 45° C-angle compared to the 0° case, and the spatial resolution was found to be 2 mm for the 45° C-angle.

  16. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow O(N^(-1/2)) convergence rates as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely tau-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
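    The idea can be sketched for a simple immigration-death process: each tau-leap path consumes one coordinate of a scrambled Sobol point per reaction channel per step, and Poisson increments are obtained by inverse transformation. This is an illustration of randomised quasi-Monte Carlo combined with tau-leaping under assumed rate constants, not the estimators analysed in the paper.

```python
# Randomised quasi-Monte Carlo applied to tau-leaping for a birth-death process:
# constant birth rate k1, per-capita death rate k2.
import numpy as np
from scipy.stats import poisson, qmc

k1, k2 = 10.0, 0.1
x0, tau, n_steps = 0, 0.5, 20

def tau_leap_paths(u):
    """One tau-leaping path per row of u; u has 2*n_steps columns of (0, 1) values."""
    u = np.clip(u, 1e-12, 1 - 1e-12)            # keep inverse-CDF arguments inside (0, 1)
    x = np.full(u.shape[0], float(x0))
    for s in range(n_steps):
        births = poisson.ppf(u[:, 2 * s], mu=k1 * tau)
        deaths = poisson.ppf(u[:, 2 * s + 1], mu=k2 * x * tau)
        x = np.maximum(x + births - deaths, 0.0)
    return x

n_paths = 2 ** 12

# Plain Monte Carlo estimate of E[X(T)].
rng = np.random.default_rng(9)
mc_mean = tau_leap_paths(rng.random((n_paths, 2 * n_steps))).mean()

# Randomised QMC: scrambled Sobol points of dimension 2*n_steps (one per Poisson draw).
sobol = qmc.Sobol(d=2 * n_steps, scramble=True, seed=9)
qmc_mean = tau_leap_paths(sobol.random_base2(m=12)).mean()

exact = k1 / k2 * (1 - np.exp(-k2 * tau * n_steps))   # mean of the underlying process
print(f"MC  estimate of E[X(T)]: {mc_mean:.2f}")
print(f"QMC estimate of E[X(T)]: {qmc_mean:.2f}   (exact mean {exact:.2f})")
```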

  17. First Monte Carlo analysis of fragmentation functions from single-inclusive e+e- annihilation

    DOE PAGES

    Sato, Nobuo; Ethier, J. J.; Melnitchouk, W.; ...

    2016-12-02

    Here, we perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive e+e- annihilation into pions and kaons. The IMC method eliminates the potential bias, introduced in traditional single-fit analyses by fixing parameters that are not well constrained by the data, and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific differences between the fragmentation functions obtained with the new IMC methodology and those obtained in previous analyses, especially for light quarks and for strange quark fragmentation to kaons.

  18. Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield

    NASA Astrophysics Data System (ADS)

    Cramer, S. N.; Roussin, R. W.

    1981-11-01

    A Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield is presented. The neutron source energies covered in the analysis range from 15 MeV down to 2 MeV. The multigroup MORSE code was used with the VITAMIN-C 171-neutron-group/36-gamma-ray-group cross-section data set. Both neutron and gamma-ray count rates and unfolded energy spectra are presented and compared with experimental results, showing good general agreement.

  19. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature K_c = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.

  20. Illicit and pharmaceutical drug consumption estimated via wastewater analysis. Part B: placing back-calculations in a formal statistical framework.

    PubMed

    Jones, Hayley E; Hickman, Matthew; Kasprzyk-Hordern, Barbara; Welton, Nicky J; Baker, David R; Ades, A E

    2014-07-15

    Concentrations of metabolites of illicit drugs in sewage water can be measured with great accuracy and precision, thanks to the development of sensitive and robust analytical methods. Based on assumptions about factors including the excretion profile of the parent drug, routes of administration and the number of individuals using the wastewater system, the level of consumption of a drug can be estimated from such measured concentrations. When presenting results from these 'back-calculations', the multiple sources of uncertainty are often discussed, but are not usually explicitly taken into account in the estimation process. In this paper we demonstrate how these calculations can be placed in a more formal statistical framework by assuming a distribution for each parameter involved, based on a review of the evidence underpinning it. Using a Monte Carlo simulation approach, it is then straightforward to propagate uncertainty in each parameter through the back-calculations, producing a distribution for daily or average consumption instead of a single estimate. This can be summarised, for example, by a median and credible interval. To demonstrate this approach, we estimate cocaine consumption in a large urban UK population, using measured concentrations of two of its metabolites, benzoylecgonine and norbenzoylecgonine. We also demonstrate a more sophisticated analysis, implemented within a Bayesian statistical framework using Markov chain Monte Carlo simulation. Our model allows the two metabolites to simultaneously inform estimates of daily cocaine consumption and explicitly allows for variability between days. After accounting for this variability, the resulting credible interval for average daily consumption is appropriately wider, representing additional uncertainty. We discuss possibilities for extensions to the model, and whether analysis of wastewater samples has potential to contribute to a prevalence model for illicit drug use. Copyright © 2014. Published by Elsevier B.V.
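
    The deterministic back-calculation wrapped in a Monte Carlo loop can be sketched as follows; every distribution and value below is a hypothetical placeholder, not a parameter reviewed in the paper:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      conc_ng_l  = rng.normal(800.0, 50.0, n)      # metabolite concentration (ng/L)
      flow_l_day = rng.normal(2.5e8, 2.0e7, n)     # wastewater flow (L/day)
      excretion  = rng.uniform(0.30, 0.45, n)      # excreted fraction of the parent drug
      mol_ratio  = 1.05                            # parent/metabolite molar mass ratio
      population = rng.normal(9.0e5, 5.0e4, n)     # people served by the treatment plant

      # back-calculated consumption in mg/day per 1000 inhabitants
      load_mg_day = conc_ng_l * flow_l_day * 1e-6
      consumption = load_mg_day * mol_ratio / excretion / (population / 1000.0)

      lo, med, hi = np.percentile(consumption, [2.5, 50, 97.5])
      print(f"median {med:.0f} mg/day/1000 inh (95% interval {lo:.0f}-{hi:.0f})")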

  2. Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications

    NASA Astrophysics Data System (ADS)

    Angelone, M.; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villar, R.; Klix, A.; Fischer, U.; Kodeli, I.; Perel, R. L.; Pohorecky, W.

    2017-09-01

    A neutronics benchmark experiment on a pure copper block (dimensions 60 × 70 × 70 cm³) aimed at testing and validating the recent nuclear data libraries for fusion applications was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra and doses were measured using different experimental techniques (e.g. activation foil techniques, an NE213 scintillator and thermoluminescent detectors). This paper first summarizes the analyses of the experiment carried out using the MCNP5 Monte Carlo code and the European JEFF-3.2 library. Large discrepancies between calculation (C) and experiment (E) were found for the reaction rates in both the high and low neutron energy ranges. The analysis was complemented by sensitivity/uncertainty (S/U) analyses using the deterministic SUSD3D and the Monte Carlo MCSEN codes, respectively. The S/U analyses made it possible to identify the cross sections and energy ranges that most affect the calculated responses. The largest discrepancy among the C/E values was observed for the thermal (capture) reactions, indicating severe deficiencies in the 63,65Cu capture and elastic cross sections at low rather than at high energies. Deterministic and MC codes produced similar results. The 14 MeV copper experiment and its analysis thus call for a revision of the JEFF-3.2 copper cross section and covariance data evaluation. A new analysis of the experiment was performed with the MCNP5 code using the revised JEFF-3.3-T2 library released by NEA and a new, not yet distributed, revised JEFF-3.2 Cu evaluation produced by KIT. A noticeable improvement of the C/E results was obtained with both new libraries.

  3. A Cost-Effectiveness Analysis of Clopidogrel for Patients with Non-ST-Segment Elevation Acute Coronary Syndrome in China.

    PubMed

    Cui, Ming; Tu, Chen Chen; Chen, Er Zhen; Wang, Xiao Li; Tan, Seng Chuen; Chen, Can

    2016-09-01

    There are a number of economic evaluation studies of clopidogrel for patients with non-ST-segment elevation acute coronary syndrome (NSTEACS) published from the perspective of multiple countries in recent years. However, relevant research is quite limited in China. We aimed to estimate the long-term cost effectiveness of up to 1 year of treatment with clopidogrel plus acetylsalicylic acid (ASA) versus ASA alone for NSTEACS from the public payer perspective in China. This analysis used a Markov model to simulate a cohort of patients for quality-adjusted life years (QALYs) gained and incremental cost over a lifetime horizon. Based on the primary event rates, adherence rate, and mortality derived from the CURE trial, hazard functions obtained from published literature were used to extrapolate the overall survival to the lifetime horizon. Resource utilization, hospitalization, medication costs, and utility values were estimated from official reports, published literature, and analysis of patient-level insurance data in China. To assess the impact of parameter uncertainty on the cost-effectiveness results, one-way sensitivity analyses were undertaken for key parameters, and probabilistic sensitivity analysis (PSA) was conducted using Monte Carlo simulation. The therapy of clopidogrel plus ASA is a cost-effective option in comparison with ASA alone for the treatment of NSTEACS in China, leading to 0.0548 life years (LYs) and 0.0518 QALYs gained per patient. From the public payer perspective in China, clopidogrel plus ASA is associated with an incremental cost of 43,340 China Yuan (CNY) per QALY gained and 41,030 CNY per LY gained (discounting at 3.5% per year). PSA results demonstrated that 88% of simulations were lower than the cost-effectiveness threshold of 150,721 CNY per QALY gained. Based on the one-way sensitivity analysis, results are most sensitive to the price of clopidogrel, but remain well below this threshold. This analysis suggests that treatment with clopidogrel plus ASA for up to 1 year for patients with NSTEACS is cost effective in the local context of China from a public payer's perspective. Funding: Sanofi China.
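
    A schematic probabilistic sensitivity analysis of this kind takes only a few lines; the distributions below are rough placeholders loosely based on the point estimates quoted above, not the study's actual inputs:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 10_000
      wtp = 150_721.0                          # willingness-to-pay threshold, CNY per QALY

      inc_qaly = rng.normal(0.0518, 0.015, n)  # sampled incremental QALYs
      inc_cost = rng.normal(2245.0, 600.0, n)  # sampled incremental cost (CNY)

      cost_effective = inc_cost <= wtp * inc_qaly   # below the threshold in this draw?
      print(f"{cost_effective.mean():.0%} of simulations below the threshold")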

  4. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx; 2 involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether the particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-averaged contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance. This study highlights the large sensitivity of SOA loadings to the particle-phase transformation of SOA volatility, which is neglected in most previous models.
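
    A toy version of the workflow (quasi-Monte Carlo sampling of a parameter space followed by a main-effects variance attribution) might look like the sketch below; the 'model' is a stand-in function, not the chemical transport model:

      import numpy as np
      from scipy.stats import qmc

      d, n = 7, 256
      sample = qmc.Sobol(d=d, scramble=True, seed=3).random(n)   # 7 tunable parameters in [0, 1]
      noise = np.random.default_rng(0).normal(scale=0.1, size=n)

      def toy_model(x):
          # stand-in for the simulated SOA response
          return 3.0 * x[:, 0] + 0.5 * x[:, 1] + 2.0 * x[:, 0] * x[:, 2]

      y = toy_model(sample) + noise

      # main-effects regression: approximate share of output variance per parameter
      X = np.column_stack([np.ones(n), sample])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      contrib = (coef[1:] ** 2) * sample.var(axis=0)
      print(np.round(contrib / y.var(), 3))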

  5. Monte Carlo analysis for the determination of the conic constant of an aspheric micro lens based on a scanning white light interferometric measurement

    NASA Astrophysics Data System (ADS)

    Gugsa, Solomon A.; Davies, Angela

    2005-08-01

    Characterizing an aspheric micro lens is critical for understanding its performance and providing feedback to the manufacturing process. We describe a method to find the best-fit conic of an aspheric micro lens using a least squares minimization and Monte Carlo analysis. Our analysis is based on scanning white light interferometry measurements, and we compare the standard rapid technique, where a single measurement is taken of the apex of the lens, to the more time-consuming stitching technique, where more surface area is measured. Both are corrected for tip/tilt based on a planar fit to the substrate. Four major parameters and their uncertainties are estimated from the measurement and a chi-square minimization is carried out to determine the best-fit conic constant. The four parameters are the base radius of curvature, the aperture of the lens, the lens center, and the sag of the lens. A probability distribution is chosen for each of the four parameters based on the measurement uncertainties and a Monte Carlo process is used to iterate the minimization. Eleven measurements were taken, and data are also chosen randomly from this group during the Monte Carlo simulation to capture the measurement repeatability. A distribution of best-fit conic constants results, where the mean is a good estimate of the best-fit conic and the distribution width represents the combined measurement uncertainty. We also compare the Monte Carlo process for the stitched and unstitched data. Our analysis allows us to analyze the residual surface error in terms of Zernike polynomials and determine uncertainty estimates for each coefficient.
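
    A simplified sketch of the Monte Carlo fitting loop, using the standard conic sag formula z(r) = c r^2 / (1 + sqrt(1 - (1 + k) c^2 r^2)) with c = 1/R; the synthetic profile and the assumed uncertainties are purely illustrative:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(7)

      def conic_sag(r, k, c):
          return c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2))

      # synthetic "measured" profile of a micro lens (R = 250 um, k = -0.8)
      r = np.linspace(0.0, 100.0, 200)                        # radial coordinate (um)
      z_meas = conic_sag(r, -0.8, 1.0 / 250.0) + rng.normal(0.0, 0.005, r.size)

      k_samples = []
      for _ in range(500):
          R_draw = rng.normal(250.0, 1.0)                     # assumed radius-of-curvature uncertainty
          z_draw = z_meas + rng.normal(0.0, 0.005, r.size)    # measurement repeatability
          k_fit, _ = curve_fit(lambda rr, k: conic_sag(rr, k, 1.0 / R_draw), r, z_draw, p0=[-1.0])
          k_samples.append(k_fit[0])

      print(np.mean(k_samples), np.std(k_samples))            # best-fit conic and its spread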

  6. Monte Carlo uncertainty analysis of dose estimates in radiochromic film dosimetry with single-channel and multichannel algorithms.

    PubMed

    Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio

    2018-03-01

    To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs. They are read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis such as the standard deviation and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Also, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show a Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of Monte Carlo techniques, the uncertainties of dose estimates for single-channel and multichannel algorithms are estimated. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    NASA Astrophysics Data System (ADS)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events, showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions, so that the GA soil hydraulic parameters are expected to be insensitive to the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetographs and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that, despite the parameter variability, the proposed method is an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.

  8. Spatial distribution variation and probabilistic risk assessment of exposure to chromium in ground water supplies; a case study in the east of Iran.

    PubMed

    Fallahzadeh, Reza Ali; Khosravi, Rasoul; Dehdashti, Bahare; Ghahramani, Esmail; Omidi, Fariborz; Adli, Abolfazl; Miri, Mohammad

    2018-05-01

    A high concentration of chromium (VI) in groundwater can threaten the health of consumers. In this study, the concentration of chromium (VI) in 18 drinking water wells in Birjand, Iran, was investigated over a period of two years. Non-carcinogenic risk assessment, sensitivity, and uncertainty analysis, as well as identification of the most important variables in determining the non-carcinogenic risk for three age groups (children, teens, and adults), were performed using the Monte Carlo simulation technique. The northern and southern regions of the study area had the highest and lowest chromium concentrations, respectively. The chromium concentrations in 16.66% of the samples, covering an area of 604.79 km², were higher than the World Health Organization (WHO) guideline (0.05 mg/L). The Moran's index analysis showed that the distribution of contamination is clustered. The Hazard Index (HI) values for the children and teens groups were 1.02 and 2.02, respectively, both greater than 1. A sensitivity analysis indicated that the most important factor in calculating the HQ was the concentration of chromium in the consumed water. HQ values higher than 1 represent a high risk for the children group, which should be controlled by reducing the chromium concentration in the drinking water. Copyright © 2018 Elsevier Ltd. All rights reserved.
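
    The probabilistic risk calculation can be illustrated with the usual chronic-daily-intake form of the hazard quotient, HQ = (C × IR × EF × ED) / (BW × AT × RfD); all exposure inputs below are illustrative assumptions, not the study's data:

      import numpy as np

      rng = np.random.default_rng(11)
      n = 100_000

      C   = rng.lognormal(mean=np.log(0.04), sigma=0.5, size=n)  # chromium concentration (mg/L)
      IR  = rng.normal(2.0, 0.3, n)       # water ingestion rate (L/day)
      EF  = 350.0                         # exposure frequency (days/year)
      ED  = 30.0                          # exposure duration (years)
      BW  = rng.normal(70.0, 10.0, n)     # body weight (kg)
      AT  = ED * 365.0                    # averaging time (days)
      RfD = 0.003                         # assumed oral reference dose (mg/kg/day)

      HQ = (C * IR * EF * ED) / (BW * AT * RfD)
      print(f"P(HQ > 1) = {(HQ > 1).mean():.2%}, median HQ = {np.median(HQ):.2f}")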

  9. Selection of remedial alternatives for mine sites: a multicriteria decision analysis approach.

    PubMed

    Betrie, Getnet D; Sadiq, Rehan; Morin, Kevin A; Tesfamariam, Solomon

    2013-04-15

    The selection of remedial alternatives for mine sites is a complex task because it involves multiple criteria, often with conflicting objectives. However, the existing framework used to select remedial alternatives lacks multicriteria decision analysis (MCDA) aids and does not consider uncertainty in the selection of alternatives. The objective of this paper is to improve the existing framework by introducing deterministic and probabilistic MCDA methods. The Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) methods have been implemented in this study. The MCDA analysis involves preparing the inputs to the PROMETHEE methods, namely identifying the alternatives, defining the criteria, defining the criteria weights using the analytical hierarchy process (AHP), defining the probability distributions of the criteria weights, and conducting Monte Carlo simulation (MCS); running the PROMETHEE methods using these inputs; and conducting a sensitivity analysis. A case study of a mine site was presented to demonstrate the improved framework. The results showed that the improved framework provides a reliable way of selecting remedial alternatives as well as quantifying the impact of different criteria on the selection of alternatives. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Monte Carlo Approach for Reliability Estimations in Generalizability Studies.

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…

  11. Prediction of Breakthrough Curves for Conservative and Reactive Transport from the Structural Parameters of Highly Heterogeneous Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Scott; Haslauer, Claus P.; Cirpka, Olaf A.

    2017-01-05

    The key points of this presentation were to approach the problem of linking breakthrough curve shape (RP-CTRW transition distribution) to structural parameters from a Monte Carlo approach and to use the Monte Carlo analysis to determine any empirical error

  12. Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.

    NASA Astrophysics Data System (ADS)

    Maheras, Steven James

    Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository, to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine-129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed with a range of two orders of magnitude. However, the plutonium-239 results were not lognormally distributed and exhibited a range of less than one order of magnitude. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited a range of less than one order of magnitude. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations. Model output showed little sensitivity to parameters related to cask inspections.
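
    Building a complementary cumulative distribution function from Monte Carlo dose samples is straightforward; the sketch below uses synthetic doses and a hypothetical limit:

      import numpy as np

      rng = np.random.default_rng(5)
      dose = rng.lognormal(mean=np.log(0.1), sigma=1.0, size=10_000)  # hypothetical doses (mrem/yr)

      dose_sorted = np.sort(dose)
      ccdf = 1.0 - np.arange(1, dose.size + 1) / dose.size            # empirical P(dose > d)

      limit = 1.0                                                     # hypothetical dose limit
      print(f"P(dose > {limit} mrem/yr) = {np.interp(limit, dose_sorted, ccdf):.4f}")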

  13. Setting Priorities in Behavioral Interventions: An Application to Reducing Phishing Risk.

    PubMed

    Canfield, Casey Inez; Fischhoff, Baruch

    2018-04-01

    Phishing risk is a growing area of concern for corporations, governments, and individuals. Given the evidence that users vary widely in their vulnerability to phishing attacks, we demonstrate an approach for assessing the benefits and costs of interventions that target the most vulnerable users. Our approach uses Monte Carlo simulation to (1) identify which users were most vulnerable, in signal detection theory terms; (2) assess the proportion of system-level risk attributable to the most vulnerable users; (3) estimate the monetary benefit and cost of behavioral interventions targeting different vulnerability levels; and (4) evaluate the sensitivity of these results to whether the attacks involve random or spear phishing. Using parameter estimates from previous research, we find that the most vulnerable users were less cautious and less able to distinguish between phishing and legitimate emails (positive response bias and low sensitivity, in signal detection theory terms). They also accounted for a large share of phishing risk for both random and spear phishing attacks. Under these conditions, our analysis estimates much greater net benefit for behavioral interventions that target these vulnerable users. Within the range of the model's assumptions, there was generally net benefit even for the least vulnerable users. However, the differences in the return on investment for interventions with users with different degrees of vulnerability indicate the importance of measuring that performance, and letting it guide interventions. This study suggests that interventions to reduce response bias, rather than to increase sensitivity, have greater net benefit. © 2017 Society for Risk Analysis.

  14. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
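
    One commonly used zero-failure sizing argument for the number of runs (not necessarily the exact derivation in the TP's appendices) is shown below:

      import math

      def runs_required(reliability, confidence):
          # With zero failures observed, n runs demonstrate `reliability` at
          # `confidence` when reliability**n <= 1 - confidence.
          return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

      print(runs_required(0.9973, 0.90))   # e.g. ~852 runs for a 3-sigma requirement at 90% confidence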

  15. Low-order modelling of shallow water equations for sensitivity analysis using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Zokagoa, Jean-Marie; Soulaïmani, Azzeddine

    2012-06-01

    This article presents a reduced-order model (ROM) of the shallow water equations (SWEs) for use in sensitivity analyses and Monte-Carlo type applications. Since, in the real world, some of the physical parameters and initial conditions embedded in free-surface flow problems are difficult to calibrate accurately in practice, the results from numerical hydraulic models are almost always corrupted with uncertainties. The main objective of this work is to derive a ROM that ensures appreciable accuracy and a considerable acceleration in the calculations so that it can be used as a surrogate model for stochastic and sensitivity analyses in real free-surface flow problems. The ROM is derived using the proper orthogonal decomposition (POD) method coupled with Galerkin projections of the SWEs, which are discretised through a finite-volume method. The main difficulty of deriving an efficient ROM is the treatment of the nonlinearities involved in SWEs. Suitable approximations that provide rapid online computations of the nonlinear terms are proposed. The proposed ROM is applied to the simulation of hypothetical flood flows in the Bordeaux breakwater, a portion of the 'Rivière des Prairies' located near Laval (a suburb of Montreal, Quebec). A series of sensitivity analyses are performed by varying the Manning roughness coefficient and the inflow discharge. The results are satisfactorily compared to those obtained by the full-order finite volume model.

  16. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
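
    A compact illustration of MVFOSM against a Monte Carlo check, using a generic power-law transport relation with hypothetical exponents and coefficients rather than the two bed load equations analysed in the paper:

      import numpy as np

      # q_s = a * D^(-1.5) * S^1.6 * Q^1.2   (hypothetical power law)
      mu   = np.array([0.5e-3, 0.01, 20.0])        # means of D (m), S (-), Q (m^3/s)
      cv   = np.array([0.10, 0.15, 0.10])          # coefficients of variation
      sig  = cv * mu
      expo = np.array([-1.5, 1.6, 1.2])

      def qs(x):
          return 1.0e-2 * np.prod(x**expo, axis=-1)

      # MVFOSM: Var[q] ~ sum_i (dq/dx_i)^2 Var[x_i], evaluated at the mean point
      grad = qs(mu) * expo / mu                    # analytic gradient of the power law
      var_terms = (grad * sig) ** 2
      print("variance shares:", np.round(var_terms / var_terms.sum(), 3))

      # Monte Carlo check of the total output standard deviation
      rng = np.random.default_rng(2)
      x = rng.normal(mu, sig, size=(200_000, 3))
      print("MVFOSM sd:", np.sqrt(var_terms.sum()), " MC sd:", qs(x).std())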

  17. Monte Carlo Methods in Materials Science Based on FLUKA and ROOT

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor

    2003-01-01

    A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlo codes can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlo codes are particularly suited is the study of secondary radiation produced as albedo in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies of 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collider Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of the status of this project, and a roadmap to its successful completion.

  18. Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Alder, J.; van Griensven, A.; Meixner, T.

    2003-12-01

    Individuals applying hydrologic models need quick, easy-to-use visualization tools to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte Carlo and batch model simulation results. Often a given project will generate several thousand or even hundreds of thousands of simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g. sum of squared errors, sum of absolute differences, etc.), top-ten simulation tables and graphs, graphs of an individual simulation using time step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger), and 2D error surface graphs of the parameter space. IHM is suitable for anything from the simplest bucket model to the largest set of Monte Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility in the sense that they can be anywhere in the world using any operating system. IHM can be a time-saving and money-saving alternative to producing graphs or conducting analysis that may not be informative, or to purchasing expensive and proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.

  19. Top Quark Mass Calibration for Monte Carlo Event Generators.

    PubMed

    Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W

    2016-12-02

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, m_t^MC. Because of hadronization and parton-shower dynamics, relating m_t^MC to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron-level QCD predictions for observables with kinematic mass sensitivity. Fitting e⁺e⁻ 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to pythia 8.205, m_t^MC differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_t^MC ≃ m_{t, 1 GeV}^MSR.

  20. Numerical simulation studies for optical properties of biomaterials

    NASA Astrophysics Data System (ADS)

    Krasnikov, I.; Seteikin, A.

    2016-11-01

    Biophotonics involves understanding how light interacts with biological matter, from molecules and cells to tissues and even whole organisms. Light can be used to probe biomolecular events, such as gene expression and protein-protein interaction, with impressively high sensitivity and specificity. The spatial and temporal distribution of biochemical constituents can also be visualized with light and, thus, so can the corresponding physiological dynamics in living cells, tissues, and organisms in real time. Computer-based Monte Carlo (MC) models of light transport in turbid media take a different approach. In this paper, the optical and structural properties of biomaterials are discussed. We explain the numerical simulation method used for studying the optical properties of biomaterials. Applications of the Monte Carlo method in photodynamic therapy, skin tissue optics, and bioimaging are described.
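
    Two of the standard sampling rules used in such photon-transport codes, an exponential free path and a Henyey-Greenstein scattering angle, can be sketched as follows; the optical properties are illustrative values only:

      import numpy as np

      rng = np.random.default_rng(3)

      mu_a, mu_s, g = 0.1, 10.0, 0.9     # absorption, scattering (1/mm), anisotropy
      mu_t = mu_a + mu_s

      def step_length():
          # path length sampled from the Beer-Lambert attenuation law
          return -np.log(rng.random()) / mu_t

      def scatter_cos_theta():
          # Henyey-Greenstein phase function sampling
          xi = rng.random()
          if g == 0.0:
              return 2.0 * xi - 1.0
          tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
          return (1.0 + g * g - tmp * tmp) / (2.0 * g)

      weight = 1.0
      s = step_length()
      weight *= mu_s / mu_t              # implicit-capture weight reduction at the interaction
      print(s, scatter_cos_theta(), weight)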

  1. Study of the CP-violating effects with the gg → H → τ⁺τ⁻ process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belyaev, N. L., E-mail: nbelyaev@cern.ch; Konoplich, R. V.

    A study of the gg → H → τ⁺τ⁻ process was performed at the Monte Carlo level within the framework of the search for CP-violating effects. The sensitivity of the chosen observables to the CP-parity of the Higgs boson was demonstrated for hadronic 1-prong τ decays (τ± → π±, ρ±). Monte Carlo samples for the gg → H → τ⁺τ⁻ process were generated, including the parton hadronisation to final-state particles. This generation was performed for the Standard Model Higgs boson, the pseudoscalar Higgs boson, the Z → τ⁺τ⁻ background, and mixed CP-states of the Higgs boson.

  2. Cost effectiveness analysis comparing repetitive transcranial magnetic stimulation to antidepressant medications after a first treatment failure for major depressive disorder in newly diagnosed patients - A lifetime analysis.

    PubMed

    Voigt, Jeffrey; Carpenter, Linda; Leuchter, Andrew

    2017-01-01

    Repetitive Transcranial Magnetic Stimulation (rTMS) is commonly used for the treatment of Major Depressive Disorder (MDD) after patients have failed to benefit from trials of multiple antidepressant medications. No analysis to date has examined the cost-effectiveness of rTMS used earlier in the course of treatment and over a patient's lifetime. We used lifetime Markov simulation modeling to compare the direct costs and quality-adjusted life years (QALYs) of rTMS and medication therapy in patients with newly diagnosed MDD (ages 20-59) who had failed to benefit from one pharmacotherapy trial. Patients' life expectancies, rates of response and remission, and quality of life outcomes were derived from the literature, and treatment costs were based upon published Medicare reimbursement data. Baseline costs, aggregate per-year quality of life assessments (QALYs), Monte Carlo simulation, tornado analysis, assessment of dominance, and one-way sensitivity analysis were also performed. The discount rate applied was 3%. Lifetime direct treatment costs and QALYs identified rTMS as the dominant therapy compared to antidepressant medications (i.e., lower costs with better outcomes) in all age ranges, with costs/improved QALYs ranging from $2,952/0.32 (older patients) to $11,140/0.43 (younger patients). One-way sensitivity analysis demonstrated that the model was most sensitive to the input variables of cost per rTMS session, monthly prescription drug cost, and the number of rTMS sessions per year. rTMS was identified as the dominant therapy compared to antidepressant medication trials over the life of the patient across the lifespan of adults with MDD, given current costs of treatment. These models support the use of rTMS after a single failed antidepressant medication trial versus further attempts at medication treatment in adults with MDD.

  4. Stochastic capture zone analysis of an arsenic-contaminated well using the generalized likelihood uncertainty estimator (GLUE) methodology

    NASA Astrophysics Data System (ADS)

    Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro

    2003-06-01

    In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
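
    The GLUE idea of weighting Monte Carlo realizations by a likelihood measure, rather than treating them all equally, can be sketched with a toy groundwater model (the model, parameter ranges and threshold below are hypothetical):

      import numpy as np

      rng = np.random.default_rng(9)
      n = 5_000

      log10_K = rng.uniform(-8.0, -5.0, n)        # hydraulic conductivity exponent
      aniso   = rng.uniform(1.0, 100.0, n)        # vertical anisotropy ratio

      def simulated_heads(log10_k, an):
          # stand-in for the groundwater flow model (returns three "observation" heads)
          base = 100.0 + 4.0 * (log10_k + 6.5) - 0.01 * an
          return np.stack([base, base - 1.0, base - 2.5], axis=-1)

      obs = np.array([100.0, 99.0, 97.5])
      sse = ((simulated_heads(log10_K, aniso) - obs) ** 2).sum(axis=1)

      # inverse-error likelihood measure, with a behavioural threshold applied
      likelihood = 1.0 / (sse + 1e-6)
      likelihood[sse > 5.0] = 0.0
      weights = likelihood / likelihood.sum()

      print("likelihood-weighted mean log10 K:", np.sum(weights * log10_K))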

  5. Monte Carlo design of optimal wire mesh collimator for breast tumor imaging process

    NASA Astrophysics Data System (ADS)

    Saad, W. H. M.; Roslan, R. E.; Mahdi, M. A.; Choong, W.-S.; Saion, E.; Saripan, M. I.

    2011-08-01

    This paper presents the modeling of the breast tumor imaging process using a wire mesh collimator gamma camera. Previous studies showed that the wire mesh collimator has the potential to improve the sensitivity of tumor detection. In this paper, we extend our research significantly to find an optimal configuration of the wire mesh collimator specifically for semi-compressed breast tumor detection, by looking into four major factors: weight, sensitivity, spatial resolution and tumor contrast. The number of layers in the wire mesh collimator is varied to optimize the collimator design. The statistical variations of the results are studied by simulating multiple realizations for each experiment using different starting random numbers. All the simulation environments are modeled using the Monte Carlo N-Particle Code (MCNP). The quality of the detection is measured directly by comparing the sensitivity, spatial resolution and tumor contrast of the images produced by the wire mesh collimator and benchmarking them against a standard multihole collimator. The proposed optimal configuration of the wire mesh collimator is obtained by selecting the number of layers for which the tumor contrast shows a value comparable to that of the multihole collimator when tested with a uniformly semi-compressed breast phantom. The wire mesh collimator showed higher sensitivity because of its loose arrangement, while its spatial resolution does not differ much from that of the multihole collimator. With relatively good tumor contrast and spatial resolution, and increased sensitivity, the proposed wire mesh collimator gives a significant improvement in wire mesh collimator design for the breast cancer imaging process. The proposed collimator configuration reduces the weight to 44.09% of the total multihole collimator weight.

  6. Monte Carlo studies of medium-size telescope designs for the Cherenkov Telescope Array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, M. D.; Jogler, T.; Dumm, J.

    In this paper, we present studies for optimizing the next generation of ground-based imaging atmospheric Cherenkov telescopes (IACTs). Results focus on mid-sized telescopes (MSTs) for CTA, detecting very high energy gamma rays in the energy range from a few hundred GeV to a few tens of TeV. We describe a novel, flexible detector Monte Carlo package, FAST (FAst Simulation for imaging air cherenkov Telescopes), that we use to simulate different array and telescope designs. The simulation is somewhat simplified to allow for efficient exploration over a large telescope design parameter space. We investigate a wide range of telescope performance parameters including optical resolution, camera pixel size, and light collection area. In order to ensure a comparison of the arrays at their maximum sensitivity, we analyze the simulations with the most sensitive techniques used in the field, such as maximum likelihood template reconstruction and boosted decision trees for background rejection. Choosing telescope design parameters representative of the proposed Davies–Cotton (DC) and Schwarzchild–Couder (SC) MST designs, we compare the performance of the arrays by examining the gamma-ray angular resolution and differential point-source sensitivity. We further investigate the array performance under a wide range of conditions, determining the impact of the number of telescopes, telescope separation, night sky background, and geomagnetic field. We find a 30–40% improvement in the gamma-ray angular resolution at all energies when comparing arrays with an equal number of SC and DC telescopes, significantly enhancing point-source sensitivity in the MST energy range. Finally, we attribute the increase in point-source sensitivity to the improved optical point-spread function and smaller pixel size of the SC telescope design.

  8. Lambda polarization feasibility study at BM@N

    NASA Astrophysics Data System (ADS)

    Suvarieva, Dilyna; Gudima, Konstantin; Zinchenko, Alexander

    2017-03-01

    Heavy strange objects (hyperons) could provide essential signatures of excited and compressed baryonic matter. At NICA, it is planned to study hyperons both in the collider mode (MPD detector) and the fixed-target mode (BM@N setup). Measurements of strange hyperon polarization could give additional information on the strong interaction mechanisms. In heavy-ion collisions, such measurements are even more valuable since the polarization is expected to be sensitive to characteristics of the QCD medium (vorticity, hydrodynamic helicity) and to QCD anomalous transport. In this analysis, the possibility of measuring the polarization of the lightest strange hyperon, Λ, at BM@N is studied in Monte Carlo event samples produced with the DCM-QGSM generator. It is shown that the detector will allow the Λ polarization to be measured with the precision required to check the model predictions.

  9. The Rayleigh-Taylor instability in a self-gravitating two-layer viscous sphere

    NASA Astrophysics Data System (ADS)

    Mondal, Puskar; Korenaga, Jun

    2018-03-01

    The dispersion relation of the Rayleigh-Taylor instability in spherical geometry is of profound importance in the context of the Earth's core formation. Here we present a complete derivation of this dispersion relation for a self-gravitating two-layer viscous sphere. Such a relation is, however, obtained through the solution of a complex transcendental equation, and it is difficult to gain physical insight directly from the transcendental equation itself. We thus also derive an empirical formula to compute the growth rate, by combining Monte Carlo sampling of the relevant model parameter space with linear regression. Our analysis indicates that the growth rate of the Rayleigh-Taylor instability is most sensitive to the viscosity of the inner layer in a physical setting that is most relevant to core formation.

  10. Uncertainty characterization and quantification in air pollution models. Application to the CHIMERE model

    NASA Astrophysics Data System (ADS)

    Debry, Edouard; Mallet, Vivien; Garaud, Damien; Malherbe, Laure; Bessagnet, Bertrand; Rouïl, Laurence

    2010-05-01

    Prev'Air is the French operational system for air pollution forecasting. It is developed and maintained by INERIS with financial support from the French Ministry for Environment. On a daily basis it delivers forecasts up to three days ahead for ozone, nitrogene dioxide and particles over France and Europe. Maps of concentration peaks and daily averages are freely available to the general public. More accurate data can be provided to customers and modelers. Prev'Air forecasts are based on the Chemical Transport Model CHIMERE. French authorities rely more and more on this platform to alert the general public in case of high pollution events and to assess the efficiency of regulation measures when such events occur. For example the road speed limit may be reduced in given areas when the ozone level exceeds one regulatory threshold. These operational applications require INERIS to assess the quality of its forecasts and to sensitize end users about the confidence level. Indeed concentrations always remain an approximation of the true concentrations because of the high uncertainty on input data, such as meteorological fields and emissions, because of incomplete or inaccurate representation of physical processes, and because of efficiencies in numerical integration [1]. We would like to present in this communication the uncertainty analysis of the CHIMERE model led in the framework of an INERIS research project aiming, on the one hand, to assess the uncertainty of several deterministic models and, on the other hand, to propose relevant indicators describing air quality forecast and their uncertainty. There exist several methods to assess the uncertainty of one model. Under given assumptions the model may be differentiated into an adjoint model which directly provides the concentrations sensitivity to given parameters. But so far Monte Carlo methods seem to be the most widely and oftenly used [2,3] as they are relatively easy to implement. In this framework one probability density function (PDF) is associated with an input parameter, according to its assumed uncertainty. Then the combined PDFs are propagated into the model, by means of several simulations with randomly perturbed input parameters. One may then obtain an approximation of the PDF of modeled concentrations, provided the Monte Carlo process has reasonably converged. The uncertainty analysis with CHIMERE has been led with a Monte Carlo method on the French domain and on two periods : 13 days during January 2009, with a focus on particles, and 28 days during August 2009, with a focus on ozone. The results show that for the summer period and 500 simulations, the time and space averaged standard deviation for ozone is 16 µg/m3, to be compared with an averaged concentration of 89 µg/m3. It is noteworthy that the space averaged standard deviation for ozone is relatively constant over time (the standard deviation of the timeseries itself is 1.6 µg/m3). The space variation of the ozone standard deviation seems to indicate that emissions have a significant impact, followed by western boundary conditions. Monte Carlo simulations are then post-processed by both ensemble [4] and Bayesian [5] methods in order to assess the quality of the uncertainty estimation. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. 
Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the Paris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R. and Lu, Z. and Frey, H.C. and Wheeler, N. and Vukovich, J. and Arunachalam, S. and Fernau, M. and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Mallet, V., and B. Sportisse (2006), Uncertainty in a chemistry-transport model due to physical parameterizations and numerical approximations: An ensemble approach applied to ozone modeling, J. Geophys. Res., 111, D01302, doi:10.1029/2005JD006149. (5) Romanowicz, R. and Higson, H. and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
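
    The Monte Carlo propagation workflow described above is generic enough to sketch in a few lines. The following Python sketch uses a toy box model as a stand-in for a chemistry-transport model such as CHIMERE, with purely illustrative input PDFs; it only shows the three steps of the method: draw perturbed inputs from their assumed distributions, run the model for each draw, and summarize the ensemble spread.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def toy_ozone_model(emission_scale, boundary_o3, k_phot):
        """Toy stand-in for a chemistry-transport model: returns a 24-hour
        time series of ozone concentrations (ug/m3) for one grid cell."""
        hours = np.arange(24)
        production = emission_scale * k_phot * np.clip(np.sin((hours - 6) * np.pi / 12), 0, None)
        return boundary_o3 + 60.0 * production

    n_runs = 500
    outputs = np.empty((n_runs, 24))
    for i in range(n_runs):
        # Perturb inputs according to assumed PDFs (log-normal for emissions,
        # normal for boundary conditions and the photolysis factor).
        emission_scale = rng.lognormal(mean=0.0, sigma=0.3)
        boundary_o3 = rng.normal(loc=70.0, scale=10.0)
        k_phot = rng.normal(loc=1.0, scale=0.1)
        outputs[i] = toy_ozone_model(emission_scale, boundary_o3, k_phot)

    # Ensemble statistics approximate the PDF of modeled concentrations.
    mean_conc = outputs.mean(axis=0)
    std_conc = outputs.std(axis=0)
    print(f"time-averaged mean: {mean_conc.mean():.1f} ug/m3, "
          f"time-averaged std:  {std_conc.mean():.1f} ug/m3")
    ```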

  11. A Dasymetric-Based Monte Carlo Simulation Approach to the Probabilistic Analysis of Spatial Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Piburn, Jesse O; McManamay, Ryan A

    2017-01-01

    Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
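
    A minimal illustration of the idea, under assumed inputs (a known census-unit total and hypothetical ancillary weights, e.g. derived from land cover): the dasymetric allocation defines per-cell shares, and the uncertain cell counts can then be drawn, here multinomially, for use inside a downstream Monte Carlo simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Known population count for a census unit and ancillary weights for its
    # grid cells (illustrative values only).
    total_population = 12_000
    ancillary_weight = np.array([0.0, 0.4, 1.0, 1.0, 0.2, 3.0])  # per-cell weights

    # Dasymetric allocation: each cell's expected share of the total count.
    shares = ancillary_weight / ancillary_weight.sum()

    # Treat the uncertain cell counts as multinomially distributed around the
    # dasymetric shares and draw Monte Carlo realizations for downstream use.
    n_draws = 10_000
    cell_counts = rng.multinomial(total_population, shares, size=n_draws)

    # Example downstream statistic: per-capita demand in cell 2 (hypothetical).
    demand_per_person = 1.3
    demand_cell2 = demand_per_person * cell_counts[:, 2]
    print("cell-2 demand, 5th-95th percentile:",
          np.percentile(demand_cell2, [5, 95]))
    ```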

  12. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. With a view to improving computational efficiency for practical radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations.

  13. Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bates, Cameron Russell; Mckigney, Edward Allen

The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms, currently implemented in C++ and Python, to easily perform flat-prior Markov Chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
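
    The sampler itself is simple; the sketch below is not the Metis API but a minimal Python implementation of the kind of flat-prior (bounded uniform) pure Metropolis sampling the abstract describes.

    ```python
    import numpy as np

    def metropolis_flat_prior(log_likelihood, x0, step_size, n_samples, bounds, seed=1):
        """Minimal pure-Metropolis sampler with a flat prior on a bounded box."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        logl = log_likelihood(x)
        lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
        chain = np.empty((n_samples, x.size))
        accepted = 0
        for i in range(n_samples):
            proposal = x + rng.normal(scale=step_size, size=x.size)
            # Proposals outside the flat-prior support have zero prior density
            # and are rejected outright.
            if np.all(proposal >= lo) and np.all(proposal <= hi):
                logl_prop = log_likelihood(proposal)
                # Flat prior: the acceptance ratio reduces to the likelihood ratio.
                if np.log(rng.uniform()) < logl_prop - logl:
                    x, logl = proposal, logl_prop
                    accepted += 1
            chain[i] = x
        return chain, accepted / n_samples

    # Example: infer the mean of Gaussian data with known sigma = 1.
    data = np.random.default_rng(2).normal(3.0, 1.0, size=50)
    loglike = lambda theta: -0.5 * np.sum((data - theta[0]) ** 2)
    chain, acc = metropolis_flat_prior(loglike, x0=[0.0], step_size=0.3,
                                       n_samples=20_000, bounds=([-10.0], [10.0]))
    print(f"posterior mean ~ {chain[5000:, 0].mean():.2f}, acceptance = {acc:.2f}")
    ```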

  14. Monte Carlo Analysis as a Trajectory Design Driver for the Transiting Exoplanet Survey Satellite (TESS) Mission

    NASA Technical Reports Server (NTRS)

    Nickel, Craig; Parker, Joel; Dichmann, Don; Lebois, Ryan; Lutz, Stephen

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  15. A new approach to harmonic elimination based on a real-time comparison method

    NASA Astrophysics Data System (ADS)

    Gourisetti, Sri Nikhil Gupta

Undesired harmonics are responsible for noise in a transmission channel, power loss in power electronics and in motor control. Selective Harmonic Elimination (SHE) is a well-known method used to eliminate or suppress the unwanted harmonics between the fundamental and the carrier frequency harmonic/component. However, SHE has the disadvantage that it cannot be used in real-time applications. A novel reference-carrier comparative method has been developed which can be used to generate an SPWM signal for real-time systems. A modified carrier signal is designed and tested for different carrier frequencies based on the FFT of the generated SPWM. The carrier signal may change for different fundamental-to-carrier ratios, which requires solving the equations each time. An analysis is performed to find all possible solutions for a particular carrier frequency and fundamental amplitude. This shows that there is no single global maximum; instead, several local maxima exist for a particular set of conditions, which makes the method less sensitive. Additionally, an attempt is made to find a universal solution that is valid for any carrier signal with a predefined fundamental amplitude. A uniform-distribution Monte Carlo sensitivity analysis is performed to measure the solution window, i.e., the best and worst possible solutions. The simulations are performed using MATLAB and are confirmed with experimental results.

  16. [Uncertainty analysis of ecological risk assessment caused by heavy-metals deposition from MSWI emission].

    PubMed

    Liao, Zhi-Heng; Sun, Jia-Ren; Wu, Dui; Fan, Shao-Jia; Ren, Ming-Zhong; Lü, Jia-Yang

    2014-06-01

The CALPUFF model was applied to simulate the ground-level atmospheric concentrations of Pb and Cd from municipal solid waste incineration (MSWI) plants, and a soil concentration model was used to estimate soil concentration increments after atmospheric deposition based on Monte Carlo simulation; ecological risk assessment was then conducted with the potential ecological risk index method. The results showed that the largest atmospheric concentrations of Pb and Cd were 5.59 x 10(-3) microg x m(-3) and 5.57 x 10(-4) microg x m(-3), respectively, while the maximum median soil concentration increments of Pb and Cd were 2.26 mg x kg(-1) and 0.21 mg x kg(-1), respectively. High risk areas were located next to the incinerators; Cd contributed the most to the ecological risk, and Pb was basically free of pollution risk. A higher ecological hazard level was predicted at the most polluted point in urban areas with a 55.30% probability, while in rural areas the most polluted point was assessed at a moderate ecological hazard level with a 72.92% probability. In addition, a sensitivity analysis of the calculation parameters in the soil concentration model was conducted, which showed that the simulated results for urban and rural areas were most sensitive to the soil mixing depth and the dry deposition rate, respectively.

  17. Controlled pattern imputation for sensitivity analysis of longitudinal binary and ordinal outcomes with nonignorable dropout.

    PubMed

    Tang, Yongqiang

    2018-04-30

    The controlled imputation method refers to a class of pattern mixture models that have been commonly used as sensitivity analyses of longitudinal clinical trials with nonignorable dropout in recent years. These pattern mixture models assume that participants in the experimental arm after dropout have similar response profiles to the control participants or have worse outcomes than otherwise similar participants who remain on the experimental treatment. In spite of its popularity, the controlled imputation has not been formally developed for longitudinal binary and ordinal outcomes partially due to the lack of a natural multivariate distribution for such endpoints. In this paper, we propose 2 approaches for implementing the controlled imputation for binary and ordinal data based respectively on the sequential logistic regression and the multivariate probit model. Efficient Markov chain Monte Carlo algorithms are developed for missing data imputation by using the monotone data augmentation technique for the sequential logistic regression and a parameter-expanded monotone data augmentation scheme for the multivariate probit model. We assess the performance of the proposed procedures by simulation and the analysis of a schizophrenia clinical trial and compare them with the fully conditional specification, last observation carried forward, and baseline observation carried forward imputation methods. Copyright © 2018 John Wiley & Sons, Ltd.

  18. Cancer risk of polycyclic aromatic hydrocarbons (PAHs) in the soils from Jiaozhou Bay wetland.

    PubMed

    Yang, Wei; Lang, Yinhai; Li, Guoliang

    2014-10-01

To estimate the cancer risk from exposure to PAHs in Jiaozhou Bay wetland soils, a probabilistic health risk assessment was conducted based on Monte Carlo simulations. A sensitivity analysis was performed to determine the input variables that contribute most to the cancer risk estimate. Three age groups were selected to estimate the cancer risk via four exposure pathways (soil ingestion, food ingestion, dermal contact and inhalation). The results revealed that the 95th percentile cancer risks for children, teens and adults were 9.11×10(-6), 1.04×10(-5) and 7.08×10(-5), respectively. The cancer risks for the three age groups were within the acceptable range (10(-6)-10(-4)), indicating no significant cancer risk. Among the exposure pathways, food ingestion was the major one. Of the 7 carcinogenic PAHs, the cancer risk caused by BaP was the highest. Sensitivity analysis demonstrated that exposure duration (ED) and the BaPeq-converted sum of the 7 carcinogenic PAH concentrations in soil (CSsoil) contribute most to the total uncertainty. This study provides a comprehensive risk assessment of carcinogenic PAHs in Jiaozhou Bay wetland soils, and might be useful for developing strategies for cancer risk prevention and control. Copyright © 2014 Elsevier Ltd. All rights reserved.
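
    A hedged sketch of the general workflow (illustrative distributions and a single soil-ingestion pathway only, not the study's actual inputs): sample the exposure factors, compute the incremental lifetime cancer risk, report the 95th percentile, and rank input importance with Spearman correlations.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(7)
    n = 100_000

    # Illustrative input distributions (not the study's actual values).
    CSsoil = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=n)   # mg/kg BaP-eq
    IngR   = rng.normal(100, 20, size=n).clip(min=1)              # mg soil/day
    ED     = rng.triangular(10, 24, 40, size=n)                   # years
    EF     = rng.triangular(180, 350, 365, size=n)                # days/year
    BW     = rng.normal(60, 10, size=n).clip(min=20)              # kg
    AT     = 70 * 365                                             # averaging time, days
    SF     = 7.3                                                  # oral slope factor for BaP, (mg/kg/day)^-1

    # Incremental lifetime cancer risk via soil ingestion (one pathway only).
    risk = CSsoil * IngR * 1e-6 * EF * ED * SF / (BW * AT)

    print(f"95th percentile risk: {np.percentile(risk, 95):.2e}")

    # Sensitivity: Spearman rank correlation of each input with the risk output.
    for name, x in [("CSsoil", CSsoil), ("IngR", IngR), ("ED", ED), ("EF", EF), ("BW", BW)]:
        rho, _ = spearmanr(x, risk)
        print(f"{name:7s} rho = {rho:+.2f}")
    ```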

  19. Panel positioning error and support mechanism for a 30-m THz radio telescope

    NASA Astrophysics Data System (ADS)

    Yang, De-Hua; Okoh, Daniel; Zhou, Guo-Hua; Li, Ai-Hua; Li, Guo-Ping; Cheng, Jing-Quan

    2011-06-01

A 30-m TeraHertz (THz) radio telescope is proposed to operate at 200 μm with an active primary surface. This paper presents a sensitivity analysis of active surface panel positioning errors on optical performance in terms of the Strehl ratio. Based on Ruze's surface error theory and using a Monte Carlo simulation, the effects of six rigid panel positioning errors, namely piston, tip, tilt, radial, azimuthal and twist displacements, were directly derived. The optical performance of the telescope was then evaluated using the standard Strehl ratio. We graphically illustrated the various panel error effects by presenting simulations of complete ensembles of full reflector surface errors for the six different rigid panel positioning errors. The panel error sensitivity analysis revealed that the piston and tilt/tip errors are dominant, while the other rigid errors are much less important. Furthermore, as indicated by the results, we conceived of an alternative Master-Slave Concept-based (MSC-based) active surface by implementing a special Series-Parallel Concept-based (SPC-based) hexapod as the active panel support mechanism. A new 30-m active reflector based on the two concepts was demonstrated to achieve correction for all six rigid panel positioning errors in an economically feasible way.
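
    As a simplified illustration of how Ruze's surface-error theory enters such a Monte Carlo study, the sketch below randomizes only panel piston errors (assumed error magnitude, equal-area panels, no tilt/twist terms or inter-panel correlations) and converts the resulting surface rms into a Strehl ratio; the real study derives all six rigid-body error effects from full surface maps.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    wavelength = 200e-6          # 200 um observing wavelength, in metres
    n_panels = 600               # illustrative panel count for a 30-m reflector
    n_trials = 5_000
    piston_rms = 5e-6            # assumed 5 um rms random piston error per panel

    strehl = np.empty(n_trials)
    for k in range(n_trials):
        # Random piston displacement of each rigid panel (toy model).
        piston = rng.normal(0.0, piston_rms, size=n_panels)
        surface_rms = np.sqrt(np.mean(piston ** 2))
        # Ruze's formula: efficiency/Strehl loss from a random surface error
        # of rms value epsilon relative to the wavelength.
        strehl[k] = np.exp(-(4 * np.pi * surface_rms / wavelength) ** 2)

    print(f"mean Strehl ratio = {strehl.mean():.3f} "
          f"(5th-95th pct: {np.percentile(strehl, 5):.3f}-{np.percentile(strehl, 95):.3f})")
    ```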

  20. A power analysis for multivariate tests of temporal trend in species composition.

    PubMed

    Irvine, Kathryn M; Dinger, Eric C; Sarr, Daniel

    2011-10-01

Long-term monitoring programs emphasize power analysis as a tool to determine the sampling effort necessary to effectively document ecologically significant changes in ecosystems. Programs that monitor entire multispecies assemblages require a method for determining the power of multivariate statistical models to detect trend. We provide a method to simulate presence-absence species assemblage data that are consistent with increasing or decreasing directional change in species composition within multiple sites. This step is the foundation for using Monte Carlo methods to approximate the power of any multivariate method for detecting temporal trends. We focus on comparing the power of the Mantel test, permutational multivariate analysis of variance, and constrained analysis of principal coordinates. We find that the power of the various methods we investigate is sensitive to the number of species in the community, univariate species patterns, and the number of sites sampled over time. For increasing directional change scenarios, constrained analysis of principal coordinates was as or more powerful than permutational multivariate analysis of variance, while the Mantel test was the least powerful. However, in our investigation of decreasing directional change, the Mantel test was typically as or more powerful than the other models.

  1. Simultaneous scanning of two mice in a small-animal PET scanner: a simulation-based assessment of the signal degradation

    NASA Astrophysics Data System (ADS)

    Reilhac, Anthonin; Boisson, Frédéric; Wimberley, Catriona; Parmar, Arvind; Zahra, David; Hamze, Hasar; Davis, Emma; Arthur, Andrew; Bouillot, Caroline; Charil, Arnaud; Grégoire, Marie-Claude

    2016-02-01

In PET imaging, research groups have recently proposed different experimental setups allowing multiple animals to be simultaneously imaged in a scanner in order to reduce the costs and increase the throughput. In those studies, the technical feasibility was demonstrated and the signal degradation caused by additional mice in the FOV was characterized; however, the impact of the signal degradation on the outcome of a PET study has not yet been studied. Here we thoroughly investigated, using Monte Carlo simulated [18F]FDG and [11C]Raclopride PET studies, different experimental designs for whole-body and brain acquisitions of two mice and assessed the actual impact on the detection of biological variations as compared to a single-mouse setting. First, we extended the validation of the PET-SORTEO Monte Carlo simulation platform for the simultaneous simulation of two animals. Then, we designed [18F]FDG and [11C]Raclopride input mouse models for the simulation of realistic whole-body and brain PET studies. Simulated studies allowed us to accurately estimate the differences in detection between single- and dual-mode acquisition settings that are purely the result of having two animals in the FOV. Validation results showed that PET-SORTEO accurately reproduced the spatial resolution and noise degradations that were observed with actual dual phantom experiments. The simulated [18F]FDG whole-body study showed that the resolution loss due to the off-center positioning of the mice was the biggest contributing factor in signal degradation at the pixel level; a minimal inter-animal distance and reconstruction methods with resolution modeling should therefore be preferred. Dual mode acquisition did not have a major impact on ROI-based analysis except in situations where uptake values in organs from the same subject were compared. The simulated [11C]Raclopride study, however, showed that dual-mice imaging strongly reduced the sensitivity to variations when mice were positioned side-by-side, while no sensitivity reduction was observed when they were facing each other. This is the first study showing the impact of different experimental designs for whole-body and brain acquisitions of two mice on the quality of the results using Monte Carlo simulated [18F]FDG and [11C]Raclopride PET studies.

  2. Quantum-Noise-Limited Sensitivity-Enhancement of a Passive Optical Cavity by a Fast-Light Medium

    NASA Technical Reports Server (NTRS)

    Smith, David D.; Luckay, H. A.; Chang, Hongrok; Myneni, Krishna

    2016-01-01

We demonstrate that, for a passive optical cavity containing an intracavity dispersive atomic medium, the increase in scale factor near the critical anomalous dispersion is not cancelled by mode broadening or attenuation, resulting in an overall increase in the predicted quantum-noise-limited sensitivity. Enhancements of over two orders of magnitude are measured in the scale factor, which translates to greater than an order-of-magnitude enhancement in the predicted quantum-noise-limited measurement precision, by temperature tuning a low-pressure vapor of non-interacting atoms in a low-finesse cavity close to the critical anomalous dispersion condition. The predicted enhancement in sensitivity is confirmed through Monte Carlo numerical simulations.

  3. Quantum-Noise-Limited Sensitivity Enhancement of a Passive Optical Cavity by a Fast-Light Medium

    NASA Technical Reports Server (NTRS)

    Smith, David D.; Luckay, H. A.; Chang, Hongrok; Myneni, Krishna

    2016-01-01

We demonstrate that, for a passive optical cavity containing a dispersive atomic medium, the increase in scale factor near the critical anomalous dispersion is not cancelled by mode broadening or attenuation, resulting in an overall increase in the predicted quantum-noise-limited sensitivity. Enhancements of over two orders of magnitude are measured in the scale factor, which translates to greater than an order-of-magnitude enhancement in the predicted quantum-noise-limited measurement precision, by temperature tuning a low-pressure vapor of non-interacting atoms in a low-finesse cavity close to the critical anomalous dispersion condition. The predicted enhancement in sensitivity is confirmed through Monte-Carlo numerical simulations.

  4. Explanation of random experiment scheduling and its application to space station analysis

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1970-01-01

The capability of the McDonnell-Douglas Phase B space station concept to complete the Blue Book Experiment program is analyzed, and the Random Experiment Program with Resource Impact (REPRI), which was used to generate the data, is described. The results indicate that station manpower and electrical power are the two resources that will constrain the amount of the Blue Book program the station can complete. The station experiment program and its resource requirements are sensitive to levels of manpower and electrical power of 13.5 men and 11 kilowatts. Continuous artificial gravity experiments have much less impact on the experiment program than experiments using separate artificial gravity periods. Station storage volume presently allocated for the FPEs and their supplies (1600 cu ft) is more than adequate. The REPRI program uses the Monte Carlo technique to generate a set of feasible experiment schedules for a space station. The schedules are statistically analyzed to determine the impact of the station experiment program resource requirements on the station concept. Also, the sensitivity of the station concept to one or more resources is assessed.

  5. Simulation of the regional groundwater-flow system of the Menominee Indian Reservation, Wisconsin

    USGS Publications Warehouse

    Juckem, Paul F.; Dunning, Charles P.

    2015-01-01

    The likely extent of the Neopit wastewater plume was simulated by using the groundwater-flow model and Monte Carlo techniques to evaluate the sensitivity of predictive simulations to a range of model parameter values. Wastewater infiltrated from the currently operating lagoons flows predominantly south toward Tourtillotte Creek. Some of the infiltrated wastewater is simulated as having a low probability of flowing beneath Tourtillotte Creek to the nearby West Branch Wolf River. Results for the probable extent of the wastewater plume are considered to be qualitative because the method only considers advective flow and does not account for processes affecting contaminant transport in porous media. Therefore, results for the probable extent of the wastewater plume are sensitive to the number of particles used to represent flow from the lagoon and the resolution of a synthetic grid used for the analysis. Nonetheless, it is expected that the qualitative results may be of use for identifying potential downgradient areas of concern that can then be evaluated using the quantitative “area contributing recharge to wells” method or traditional contaminant-transport simulations.

  6. Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment

    DOE PAGES

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...

    2016-03-30

Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on model output parameters: the total potential power and the number of potential locations (stream-reach). These parameters are quantified through Monte Carlo Simulation (MCS) linked with a geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. Hydraulic head is more sensitive to output parameters in steep terrain than in flat and mild terrains. Furthermore, mean annual streamflow is more sensitive to output parameters in flat terrain.

  7. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  8. Identification of Thyroid Receptor Ant/Agonists in Water Sources Using Mass Balance Analysis and Monte Carlo Simulation

    PubMed Central

    Shi, Wei; Wei, Si; Hu, Xin-xin; Hu, Guan-jiu; Chen, Cu-lan; Wang, Xin-ru; Giesy, John P.; Yu, Hong-xia

    2013-01-01

Some synthetic chemicals, which have been shown to disrupt thyroid hormone (TH) function, have been detected in surface waters, and people have the potential to be exposed through drinking water. Here, the presence of thyroid-active chemicals and their toxic potential in drinking water sources in the Yangtze River Delta were investigated by use of instrumental analysis combined with a cell-based reporter gene assay. A novel approach using Monte Carlo simulation was developed to evaluate the potential risks of measured concentrations of TH agonists and antagonists and to determine the major contributors to the observed thyroid receptor (TR) antagonist potency. None of the extracts exhibited TR agonist potency, while 12 of 14 water samples exhibited TR antagonist potency. The most probable observed antagonist equivalents ranged from 1.4 to 5.6 µg di-n-butyl phthalate (DNBP)/L, which posed a potential risk in water sources. Based on Monte Carlo simulation-related mass balance analysis, DNBP accounted for 64.4% of the entire observed antagonist toxic unit in water sources, while diisobutyl phthalate (DIBP), di-n-octyl phthalate (DNOP) and di-2-ethylhexyl phthalate (DEHP) also contributed. The most probable observed equivalent and most probable relative potency (REP) derived from Monte Carlo simulation are useful for potency comparison and for screening the responsible chemicals. PMID:24204563

  9. ASSESSING THE IMPACT OF HUMAN PON1 POLYMORPHISMS: SENSITIVITY AND MONTE CARLO ANALYSES USING A PHYSIOLOGICALLY BASED PHARMACOKINETIC/ PHARMACODYNAMIC (PBPK/PD) MODEL FOR CHLORPYRIFOS. (R828608)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  10. An Investigation of the Raudenbush (1988) Test for Studying Variance Heterogeneity.

    ERIC Educational Resources Information Center

    Harwell, Michael

    1997-01-01

    The meta-analytic method proposed by S. W. Raudenbush (1988) for studying variance heterogeneity was studied. Results of a Monte Carlo study indicate that the Type I error rate of the test is sensitive to even modestly platykurtic score distributions and to the ratio of study sample size to the number of studies. (SLD)

  11. A Mixture Rasch Model with a Covariate: A Simulation Study via Bayesian Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Dai, Yunyun

    2013-01-01

    Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…

  12. Nanoshells for photothermal therapy: a Monte-Carlo based numerical study of their design tolerance

    PubMed Central

    Grosges, Thomas; Barchiesi, Dominique; Kessentini, Sameh; Gréhan, Gérard; de la Chapelle, Marc Lamy

    2011-01-01

The optimization of coated metallic nanoparticles and nanoshells is a current challenge for biological applications, especially for cancer photothermal therapy, considering both the continuous improvement of their fabrication and the increasing requirement for efficiency. The efficiency of the coupling between the illumination and such nanostructures for burning purposes depends unevenly on their geometrical parameters (radius, thickness of the shell) and material parameters (permittivities, which depend on the illumination wavelength). Through a Monte-Carlo method, we propose a numerical study of such nanodevices to evaluate tolerances (or uncertainty) on these parameters, given a threshold of efficiency, to facilitate the design of nanoparticles. The results could help to focus on the relevant parameters of the engineering process on which the absorbed energy most depends. The Monte-Carlo method confirms that the best burning efficiencies are obtained for hollow nanospheres and exhibits the sensitivity of the absorbed electromagnetic energy as a function of each parameter. The proposed method is general and could be applied in the design and development of new embedded coated nanomaterials used in biomedical applications. PMID:21698021

  13. How uncertain is the future of electric vehicle market: Results from Monte Carlo simulations using a nested logit model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Changzheng; Oak Ridge National Lab.; Lin, Zhenhong

Plug-in electric vehicles (PEVs) are widely regarded as an important component of the technology portfolio designed to accomplish policy goals in sustainability and energy security. However, the market acceptance of PEVs in the future remains largely uncertain from today's perspective. By integrating a consumer choice model based on nested multinomial logit with Monte Carlo simulation, this study analyzes the uncertainty of PEV market penetration. Results suggest that the future market for PEVs is highly uncertain and that there is a substantial risk of low penetration in the early and midterm market. Top factors contributing to market share variability are price sensitivities, energy cost, range limitation, and charging availability. The results also illustrate the potential effect of public policies in promoting PEVs through investment in battery technology and infrastructure deployment. Here, continued improvement of battery technologies and deployment of charging infrastructure alone do not necessarily reduce the spread of market share distributions, but may shift the distributions toward the right, i.e., increase the probability of great market success.
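
    The structure of such an analysis, a nested logit share model wrapped in a Monte Carlo loop over uncertain preference and cost inputs, can be sketched as follows; all utilities, nesting parameters, and input distributions here are invented for illustration and are not those of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def nested_logit_shares(utilities, nests, lam):
        """Choice probabilities for a two-level nested logit.
        utilities: dict alt -> systematic utility; nests: dict nest -> list of alts;
        lam: dict nest -> dissimilarity parameter (0 < lam <= 1)."""
        inclusive, within = {}, {}
        for nest, alts in nests.items():
            ev = {a: np.exp(utilities[a] / lam[nest]) for a in alts}
            denom = sum(ev.values())
            within[nest] = {a: ev[a] / denom for a in alts}
            inclusive[nest] = lam[nest] * np.log(denom)
        nest_denom = sum(np.exp(iv) for iv in inclusive.values())
        shares = {}
        for nest, alts in nests.items():
            p_nest = np.exp(inclusive[nest]) / nest_denom
            for a in alts:
                shares[a] = p_nest * within[nest][a]
        return shares

    nests = {"conventional": ["gasoline"], "plug-in": ["PHEV", "BEV"]}
    n_draws = 20_000
    bev_share = np.empty(n_draws)
    for i in range(n_draws):
        # Illustrative uncertain inputs: price sensitivity, energy-cost saving,
        # and a range-limitation disutility for BEVs.
        beta_price = rng.normal(-0.04, 0.01)     # utility per $1000 of price
        energy_saving = rng.normal(3.0, 1.0)     # $1000 lifetime fuel saving of PEVs
        range_penalty = rng.uniform(0.0, 2.0)    # BEV range-limitation disutility
        V = {"gasoline": beta_price * 25.0,
             "PHEV": beta_price * (32.0 - energy_saving),
             "BEV":  beta_price * (35.0 - energy_saving) - range_penalty}
        shares = nested_logit_shares(V, nests, lam={"conventional": 1.0, "plug-in": 0.6})
        bev_share[i] = shares["BEV"]

    print(f"BEV share: median {np.median(bev_share):.2%}, "
          f"90% interval {np.percentile(bev_share, 5):.2%}-{np.percentile(bev_share, 95):.2%}")
    ```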

  14. Adjoint acceleration of Monte Carlo simulations using TORT/MCNP coupling approach: a case study on the shielding improvement for the cyclotron room of the Buddhist Tzu Chi General Hospital.

    PubMed

    Sheu, R J; Sheu, R D; Jiang, S H; Kao, C H

    2005-01-01

    Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original inadequate maze design. Variance reduction techniques are indispensable in this study to facilitate the simulations for testing a variety of configurations of shielding modification. The TORT/MCNP manual coupling approach based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology has been used throughout this study. The CADIS utilises the source and transport biasing in a consistent manner. With this method, the computational efficiency was increased significantly by more than two orders of magnitude and the statistical convergence was also improved compared to the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations and the calculation results for the original and improved shielding designs. In order to verify the calculation results and seek additional accelerations, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted.

  15. How uncertain is the future of electric vehicle market: Results from Monte Carlo simulations using a nested logit model

    DOE PAGES

    Liu, Changzheng; Oak Ridge National Lab.; Lin, Zhenhong; ...

    2016-12-08

Plug-in electric vehicles (PEVs) are widely regarded as an important component of the technology portfolio designed to accomplish policy goals in sustainability and energy security. However, the market acceptance of PEVs in the future remains largely uncertain from today's perspective. By integrating a consumer choice model based on nested multinomial logit with Monte Carlo simulation, this study analyzes the uncertainty of PEV market penetration. Results suggest that the future market for PEVs is highly uncertain and that there is a substantial risk of low penetration in the early and midterm market. Top factors contributing to market share variability are price sensitivities, energy cost, range limitation, and charging availability. The results also illustrate the potential effect of public policies in promoting PEVs through investment in battery technology and infrastructure deployment. Here, continued improvement of battery technologies and deployment of charging infrastructure alone do not necessarily reduce the spread of market share distributions, but may shift the distributions toward the right, i.e., increase the probability of great market success.

  16. The cost-effectiveness of telestroke in the treatment of acute ischemic stroke

    PubMed Central

    Nelson, R.E.; Saltzman, G.M.; Skalabrin, E.J.; Demaerschalk, B.M.

    2011-01-01

    Objective: To conduct a cost-effectiveness analysis of telestroke—a 2-way, audiovisual technology that links stroke specialists to remote emergency department physicians and their stroke patients—compared to usual care (i.e., remote emergency departments without telestroke consultation or stroke experts). Methods: A decision-analytic model was developed for both 90-day and lifetime horizons. Model inputs were taken from published literature where available and supplemented with western states' telestroke experiences. Costs were gathered using a societal perspective and converted to 2008 US dollars. Quality-adjusted life-years (QALYs) gained were combined with costs to generate incremental cost-effectiveness ratios (ICERs). In the lifetime horizon model, both costs and QALYs were discounted at 3% annually. Both one-way sensitivity analyses and Monte Carlo simulations were performed. Results: In the base case analysis, compared to usual care, telestroke results in an ICER of $108,363/QALY in the 90-day horizon and $2,449/QALY in the lifetime horizon. For the 90-day and lifetime horizons, 37.5% and 99.7% of 10,000 Monte Carlo simulations yielded ICERs <$50,000/QALY, a ratio commonly considered acceptable in the United States. Conclusion: When a lifetime perspective is taken, telestroke appears cost-effective compared to usual care, since telestroke costs are upfront but benefits of improved stroke care are lifelong. If barriers to use such as low reimbursement rates and high equipment costs are reduced, telestroke has the potential to diminish the striking geographic disparities of acute stroke care in the United States. PMID:21917781
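
    The probabilistic part of such an analysis reduces to repeatedly drawing incremental costs and QALYs and comparing the resulting ICERs with a willingness-to-pay threshold; the sketch below uses invented distributions, not the study's inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_sims = 10_000
    wtp = 50_000  # willingness-to-pay threshold, $/QALY

    # Illustrative parameter distributions (not the study's actual inputs):
    # incremental cost and incremental QALYs of the new strategy vs usual care.
    delta_cost = rng.normal(loc=1_500, scale=600, size=n_sims)   # dollars
    delta_qaly = rng.gamma(shape=4.0, scale=0.15, size=n_sims)   # QALYs gained

    icer = delta_cost / delta_qaly
    # Counting net-monetary-benefit wins avoids sign ambiguities of raw ICERs.
    prob_cost_effective = np.mean(delta_cost < wtp * delta_qaly)

    print(f"median ICER: ${np.median(icer):,.0f}/QALY")
    print(f"P(cost-effective at ${wtp:,}/QALY) = {prob_cost_effective:.1%}")
    ```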

  17. Accelerating Sequences in the Presence of Metal by Exploiting the Spatial Distribution of Off-Resonance

    PubMed Central

    Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.

    2014-01-01

    Purpose To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210

  18. Multiparametric MRI followed by targeted prostate biopsy for men with suspected prostate cancer: a clinical decision analysis

    PubMed Central

    Willis, Sarah R; Ahmed, Hashim U; Moore, Caroline M; Donaldson, Ian; Emberton, Mark; Miners, Alec H; van der Meulen, Jan

    2014-01-01

    Objective To compare the diagnostic outcomes of the current approach of transrectal ultrasound (TRUS)-guided biopsy in men with suspected prostate cancer to an alternative approach using multiparametric MRI (mpMRI), followed by MRI-targeted biopsy if positive. Design Clinical decision analysis was used to synthesise data from recently emerging evidence in a format that is relevant for clinical decision making. Population A hypothetical cohort of 1000 men with suspected prostate cancer. Interventions mpMRI and, if positive, MRI-targeted biopsy compared with TRUS-guided biopsy in all men. Outcome measures We report the number of men expected to undergo a biopsy as well as the numbers of correctly identified patients with or without prostate cancer. A probabilistic sensitivity analysis was carried out using Monte Carlo simulation to explore the impact of statistical uncertainty in the diagnostic parameters. Results In 1000 men, mpMRI followed by MRI-targeted biopsy ‘clinically dominates’ TRUS-guided biopsy as it results in fewer expected biopsies (600 vs 1000), more men being correctly identified as having clinically significant cancer (320 vs 250), and fewer men being falsely identified (20 vs 50). The mpMRI-based strategy dominated TRUS-guided biopsy in 86% of the simulations in the probabilistic sensitivity analysis. Conclusions Our analysis suggests that mpMRI followed by MRI-targeted biopsy is likely to result in fewer and better biopsies than TRUS-guided biopsy. Future research in prostate cancer should focus on providing precise estimates of key diagnostic parameters. PMID:24934207

  19. Potential clinical and economic outcomes of active beta-D-glucan surveillance with preemptive therapy for invasive candidiasis at intensive care units: a decision model analysis.

    PubMed

    Pang, Y-K; Ip, M; You, J H S

    2017-01-01

    Early initiation of antifungal treatment for invasive candidiasis is associated with change in mortality. Beta-D-glucan (BDG) is a fungal cell wall component and a serum diagnostic biomarker of fungal infection. Clinical findings suggested an association between reduced invasive candidiasis incidence in intensive care units (ICUs) and BDG-guided preemptive antifungal therapy. We evaluated the potential cost-effectiveness of active BDG surveillance with preemptive antifungal therapy in patients admitted to adult ICUs from the perspective of Hong Kong healthcare providers. A Markov model was designed to simulate the outcomes of active BDG surveillance with preemptive therapy (surveillance group) and no surveillance (standard care group). Candidiasis-associated outcome measures included mortality rate, quality-adjusted life year (QALY) loss, and direct medical cost. Model inputs were derived from the literature. Sensitivity analyses were conducted to evaluate the robustness of model results. In base-case analysis, the surveillance group was more costly (1387 USD versus 664 USD) (1 USD = 7.8 HKD), with lower candidiasis-associated mortality rate (0.653 versus 1.426 per 100 ICU admissions) and QALY loss (0.116 versus 0.254) than the standard care group. The incremental cost per QALY saved by the surveillance group was 5239 USD/QALY. One-way sensitivity analyses found base-case results to be robust to variations of all model inputs. In probabilistic sensitivity analysis, the surveillance group was cost-effective in 50 % and 100 % of 10,000 Monte Carlo simulations at willingness-to-pay (WTP) thresholds of 7200 USD/QALY and ≥27,800 USD/QALY, respectively. Active BDG surveillance with preemptive therapy appears to be highly cost-effective to reduce the candidiasis-associated mortality rate and save QALYs in the ICU setting.

  20. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization which involve multiple formations and inversions of the global stiffness matrix and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donnelly, H.; Fullwood, R.; Glancy, J.

This is the second volume of a two volume report on the VISA method for evaluating safeguards at fixed-site facilities. This volume contains appendices that support the description of the VISA concept and the initial working version of the method, VISA-1, presented in Volume I. The information is separated into four appendices, each describing details of one of the four analysis modules that comprise the analysis sections of the method. The first appendix discusses Path Analysis methodology, applies it to a Model Fuel Facility, and describes the computer codes that are being used. Introductory material on Path Analysis is given in Chapter 3.2.1 and Chapter 4.2.1 of Volume I. The second appendix deals with Detection Analysis, specifically the schemes used in VISA-1 for classifying adversaries and the methods proposed for evaluating individual detection mechanisms in order to build the data base required for detection analysis. Examples of evaluations on identity-access systems, SNM portal monitors, and intrusion devices are provided. The third appendix describes the Containment Analysis overt-segment path ranking, the Monte Carlo engagement model, the network simulation code, the delay mechanism data base, and the results of a sensitivity analysis. The last appendix presents general equations used in Interruption Analysis for combining covert-overt segments and compares them with equations given in Volume I, Chapter 3.

  2. The worth of data in predicting aquitard continuity in hydrogeological design

    NASA Astrophysics Data System (ADS)

    James, Bruce R.; Freeze, R. Allan

    1993-07-01

    A Bayesian decision framework is developed for addressing questions of hydrogeological data worth associated with engineering design at sites in heterogeneous geological environments. The specific case investigated is one of remedial contaminant containment in an aquifer underlain by an aquitard of uncertain continuity. The framework is used to evaluate the worth of hard and soft data in investigating the aquitard's continuity. The analysis consists of four modules: (1) an aquitard realization generator based on indicator kriging, (2) a procedure for the Bayesian updating of the uncertainty with respect to aquitard windows, (3) a Monte Carlo simulation model for advective contaminant transport, and (4) an economic decision model. A sensitivity analysis for a generic design example involving a design decision between a no-action alternative and a containment alternative indicates that the data worth of a single borehole providing a hard point datum was more sensitive to economic parameters than to hydrogeological or geostatistical parameters. For this case, data worth is very sensitive to the projected cost of containment, the discount rate, and the estimated cost of failure. When it comes to hydrogeological parameters, such as the representative hydraulic conductivity of the aquitard or underlying aquifer, the sensitivity analysis indicates that it is more important to know whether the field value is above or below some threshold value than it is to know its actual numerical value. A good conceptual understanding of the site geology is important in estimating prior uncertainties. The framework was applied in a retrospective fashion to the design of a remediation program for soil contaminated by radioactive waste disposal at the Savannah River site in South Carolina. The cost-effectiveness of different patterns of boreholes was studied. A contour map is presented for the net expected value of sample information (EVSI) for a single borehole. The net EVSI of patterns of precise point measurements is also compared to that of an imprecise seismic survey.

  3. Design and performance evaluation of a new high energy parallel hole collimator for radioiodine planar imaging by gamma cameras: Monte Carlo simulation study.

    PubMed

    Moslemi, Vahid; Ashoor, Mansour

    2017-05-01

In addition to the trade-off between resolution and sensitivity, which is a common problem among all types of parallel hole collimators (PCs), images obtained with high energy PCs (HEPCs) suffer from the hole-pattern artifact (HPA) due to their greater septa thickness. In this study, a new collimator design has been proposed to improve the trade-off between resolution and sensitivity and to eliminate the HPA. A novel PC, namely the high energy extended PC (HEEPC), is proposed and compared to HEPCs. In the new PC, trapezoidal denticles were added upon the septa on the detector side. The performance of the HEEPCs was evaluated and compared to that of HEPCs using a Monte Carlo N-Particle version 5 (MCNP5) simulation. The point spread functions (PSFs) of HEPCs and HEEPCs were obtained, as well as various parameters such as resolution, sensitivity, scattering, and penetration ratios, and the HPA of the collimators was assessed. Furthermore, a Picker phantom study was performed to examine the effects of the collimators on the quality of planar images. It was found that the HEEPC D, with a resolution identical to that of HEPC C, increased sensitivity by 34.7%, improved the trade-off between resolution and sensitivity, and eliminated the HPA. In the Picker phantom study, the HEEPC D showed the hot and cold lesions with higher contrast, lower noise, and a higher contrast-to-noise ratio (CNR). Since the HEEPCs modify the shape of the PSFs, they are able to improve the trade-off between resolution and sensitivity; consequently, planar images can be achieved with higher contrast resolution. Furthermore, because the HEEPCs reduce the HPA and produce images with a higher CNR compared to HEPCs, the images obtained with HEEPCs have a higher quality, which can help physicians to provide better diagnoses.

  4. 92 Years of the Ising Model: A High Resolution Monte Carlo Study

    NASA Astrophysics Data System (ADS)

    Xu, Jiahao; Ferrenberg, Alan M.; Landau, David P.

    2018-04-01

Using extensive Monte Carlo simulations that employ Wolff cluster flipping and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, we obtained the critical inverse temperature K_c = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86), with precision that improves upon previous Monte Carlo estimates.
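
    For readers unfamiliar with the Wolff cluster-flipping update used here, the following sketch implements it for the 2D Ising model (the study itself treats the 3D simple cubic lattice at far larger sizes with histogram reweighting); the lattice size, coupling, and run length are chosen only for illustration.

    ```python
    import numpy as np

    def wolff_ising_2d(L=16, K=0.44, n_steps=5_000, n_equil=1_000, seed=0):
        """Wolff single-cluster Monte Carlo for the 2D Ising model at coupling
        K = J/kT. Returns the mean absolute magnetization per spin."""
        rng = np.random.default_rng(seed)
        spins = rng.choice([-1, 1], size=(L, L))
        p_add = 1.0 - np.exp(-2.0 * K)   # bond-activation probability
        mags = []
        for step in range(n_steps):
            i, j = rng.integers(L, size=2)
            seed_spin = spins[i, j]
            cluster = {(i, j)}
            stack = [(i, j)]
            spins[i, j] = -seed_spin      # flip spins as the cluster grows
            while stack:
                x, y = stack.pop()
                for nx, ny in ((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L):
                    if (nx, ny) not in cluster and spins[nx, ny] == seed_spin \
                            and rng.random() < p_add:
                        cluster.add((nx, ny))
                        spins[nx, ny] = -seed_spin
                        stack.append((nx, ny))
            if step >= n_equil:
                mags.append(abs(spins.sum()) / L**2)
        return np.mean(mags)

    # In the ordered phase (K = 0.50 > K_c ~ 0.4407 for 2D) the mean absolute
    # magnetization per spin should come out around 0.9.
    print(f"<|m|> at K=0.50: {wolff_ising_2d(K=0.50):.3f}")
    ```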

  5. HepSim: A repository with predictions for high-energy physics experiments

    DOE PAGES

    Chekanov, S. V.

    2015-02-03

    A file repository for calculations of cross sections and kinematic distributions using Monte Carlo generators for high-energy collisions is discussed. The repository is used to facilitate effective preservation and archiving of data from theoretical calculations and for comparisons with experimental data. The HepSim data library is publicly accessible and includes a number of Monte Carlo event samples with Standard Model predictions for current and future experiments. The HepSim project includes a software package to automate the process of downloading and viewing online Monte Carlo event samples. Data streaming over a network for end-user analysis is discussed.

  6. Effect of lag time distribution on the lag phase of bacterial growth - a Monte Carlo analysis

    USDA-ARS?s Scientific Manuscript database

    The objective of this study is to use Monte Carlo simulation to evaluate the effect of lag time distribution of individual bacterial cells incubated under isothermal conditions on the development of lag phase. The growth of bacterial cells of the same initial concentration and mean lag phase durati...
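
    A minimal sketch of the kind of simulation described, under stated assumptions (gamma-distributed individual lag times and a fixed specific growth rate): each cell starts growing only after its own lag, and the population curve is the sum of the individual contributions.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def population_curve(n_cells=1_000, mean_lag=2.0, sd_lag=0.8, mu=0.7,
                         t_max=12.0, dt=0.1):
        """Monte Carlo sketch: each cell draws its own lag time (h) from a gamma
        distribution and then grows exponentially at specific rate mu (1/h).
        Returns time points and log10 of the total cell number."""
        lags = rng.gamma(shape=(mean_lag / sd_lag) ** 2,
                         scale=sd_lag ** 2 / mean_lag, size=n_cells)
        t = np.arange(0.0, t_max, dt)
        # Each cell contributes exp(mu * (t - lag)) once its lag has elapsed.
        growth = np.where(t[None, :] > lags[:, None],
                          np.exp(mu * (t[None, :] - lags[:, None])), 1.0)
        return t, np.log10(growth.sum(axis=0))

    t, log_n_narrow = population_curve(sd_lag=0.8)
    t, log_n_wide = population_curve(sd_lag=2.0)
    # Comparing the two curves shows how the spread of individual lag times
    # (same mean) changes the apparent lag phase of the whole population.
    idx = np.searchsorted(t, 4.0)
    print(f"log10 N at t=4 h: narrow lags {log_n_narrow[idx]:.2f}, "
          f"wide lags {log_n_wide[idx]:.2f}")
    ```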

  7. SABRINA: an interactive solid geometry modeling program for Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, J.T.

    SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body geometry, or surface geometry models, and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo Analysis.

  8. First-Order or Second-Order Kinetics? A Monte Carlo Answer

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2005-01-01

    Monte Carlo computational experiments reveal that the ability to discriminate between first- and second-order kinetics from least-squares analysis of time-dependent concentration data is better than implied in earlier discussions of the problem. The problem is rendered as simple as possible by assuming that the order must be either 1 or 2 and that…
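
    The computational experiment can be reproduced in outline as follows (rate constant, noise level, and sampling times are assumed values): generate noisy first-order data, fit both rate laws by least squares, and count how often the correct order gives the smaller residual sum.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)

    def first_order(t, c0, k):
        return c0 * np.exp(-k * t)

    def second_order(t, c0, k):
        return c0 / (1.0 + c0 * k * t)

    t = np.linspace(0, 10, 12)
    c0_true, k_true, sigma = 1.0, 0.25, 0.01
    n_trials = 2_000
    prefer_first = 0

    for _ in range(n_trials):
        # Synthetic data from true first-order decay plus Gaussian noise.
        c_obs = first_order(t, c0_true, k_true) + rng.normal(0.0, sigma, size=t.size)
        ssr = {}
        for name, model in [("first", first_order), ("second", second_order)]:
            popt, _ = curve_fit(model, t, c_obs, p0=[1.0, 0.2], maxfev=10_000)
            ssr[name] = np.sum((c_obs - model(t, *popt)) ** 2)
        # Both models have two parameters, so the smaller residual sum of
        # squares is a fair basis for choosing the reaction order.
        if ssr["first"] < ssr["second"]:
            prefer_first += 1

    print(f"correct order identified in {prefer_first / n_trials:.1%} of trials")
    ```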

  9. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) interpretation with CSS/PCC can be more awkward, because sensitivity and interdependence are considered separately, and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
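
    A small sketch of how CSS and PCC can be computed from a sensitivity (Jacobian) matrix, using the standard definitions (dimensionless scaled sensitivities and the parameter variance-covariance matrix); the Jacobian below is synthetic and only meant to show a strongly intercorrelated parameter pair, not the Root Zone Water Quality Model.

    ```python
    import numpy as np

    def css_and_pcc(jacobian, params, weights=None):
        """Composite scaled sensitivities (CSS) and parameter correlation
        coefficients (PCC) for a weighted least-squares problem.
        jacobian: (n_obs, n_par) array of d(sim)/d(param); params: parameter values."""
        J = np.asarray(jacobian, dtype=float)
        b = np.asarray(params, dtype=float)
        n_obs, _ = J.shape
        w = np.ones(n_obs) if weights is None else np.asarray(weights, dtype=float)
        # Dimensionless scaled sensitivities and their composite (root mean square).
        dss = J * b[None, :] * np.sqrt(w)[:, None]
        css = np.sqrt(np.mean(dss ** 2, axis=0))
        # Parameter variance-covariance matrix (up to the common error variance)
        # and the correlation coefficients derived from it.
        cov = np.linalg.inv(J.T @ (w[:, None] * J))
        d = np.sqrt(np.diag(cov))
        pcc = cov / np.outer(d, d)
        return css, pcc

    # Toy example with two strongly interdependent parameters.
    rng = np.random.default_rng(8)
    x = np.linspace(0, 1, 30)
    b = np.array([2.0, 1.5, 0.5])
    J = np.column_stack([x, 0.98 * x + 0.02 * rng.normal(size=x.size), np.ones_like(x)])
    css, pcc = css_and_pcc(J, b)
    print("CSS:", np.round(css, 3))
    print("PCC(1,2):", round(pcc[0, 1], 3))   # close to +/-1 indicates interdependence
    ```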

  10. Top Quark Mass Calibration for Monte Carlo Event Generators

    DOE PAGES

    Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H.; ...

    2016-11-29

The most precise top quark mass measurements use kinematic reconstruction methods, determining the top quark mass parameter of a Monte Carlo event generator, m_t^MC. Because of hadronization and parton-shower dynamics, relating m_t^MC to a field theory mass is difficult. Here, we present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e+e- 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to PYTHIA 8.205, m_t^MC differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_t^MC ≃ m_{t,1 GeV}^MSR.

  11. Monte Carlo Simulation of THz Radiation Detection in GaN MOSFET n+nn+ Channel with Uncentered Gate in n-region

    NASA Astrophysics Data System (ADS)

    Palermo, C.; Torres, J.; Varani, L.; Gružinskis, V.; Starikov, E.; Shiktorov, P.; Ašmontas, S.; Sužiedelis, A.

    2017-10-01

    Electron transport and drain current noise in the wurtzite GaN MOSFET have been studied by Monte Carlo particle simulation which simultaneously solves the Boltzmann transport and pseudo-2D Poisson equations. A proper design of the GaN MOSFET n+nn+ channel with an uncentered gate in the n-region to reach the maximum detection sensitivity is proposed. It is shown that the main role in the formation of longitudinal transport asymmetry and THz radiation detection is played by the optical phonon emission process. It is found that the detection current at 300 K is maximal in the frequency range from 0.5 to 7 THz. At higher frequencies the detection current rapidly decreases due to the inertia of electron motion.

  12. Modelling and analysis of solar cell efficiency distributions

    NASA Astrophysics Data System (ADS)

    Wasmer, Sven; Greulich, Johannes

    2017-08-01

    We present an approach to model the distribution of solar cell efficiencies achieved in production lines based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful for ramping up production, but can also be applied to enhance established manufacturing.
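
    The abstract does not give the metamodel or its inputs, so the sketch below only illustrates the variance-based (first-order Sobol) sensitivity estimator on a hypothetical stand-in metamodel with three normalized inputs; the function name, coefficients, and sample size are assumptions for demonstration.

```python
import numpy as np

def metamodel(x):
    """Hypothetical stand-in for a cell-efficiency metamodel (not the PERC
    model from the paper): efficiency in % as a smooth function of three
    normalized process parameters in [0, 1]."""
    x1, x2, x3 = x[:, 0], x[:, 1], x[:, 2]
    return 17.0 + 1.2 * x1 + 0.6 * x2**2 + 0.3 * x1 * x3

def first_order_sobol(f, d, n=100_000, seed=1):
    """Monte Carlo estimate of first-order Sobol indices S_i = Var(E[Y|X_i])/Var(Y)."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var_y = np.var(np.concatenate([fA, fB]))
    s1 = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        # Saltelli-style estimator of Var(E[Y | X_i])
        s1[i] = np.mean(fB * (f(ABi) - fA)) / var_y
    return s1

print(np.round(first_order_sobol(metamodel, d=3), 3))
```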

  13. Integrated techno-economic and environmental analysis of butadiene production from biomass.

    PubMed

    Farzad, Somayeh; Mandegari, Mohsen Ali; Görgens, Johann F

    2017-09-01

    In this study, lignocellulose biorefineries annexed to a typical sugar mill were investigated to produce either ethanol (EtOH) or 1,3-butadiene (BD), utilizing bagasse and trash as feedstock. Aspen simulations of the scenarios were developed and evaluated in terms of economic and environmental performance. The minimum selling prices (MSPs) for bio-based BD and EtOH production were 2.9-3.3 and 1.26-1.38-fold higher than market prices, respectively. Based on the sensitivity analysis results, capital investment, internal rate of return and extension of annual operating time had the greatest impact on the MSP. Monte Carlo simulation demonstrated that EtOH and BD production could be profitable if the ten-year historical average price increases by factors of 1.05 and 1.9, respectively. The fossil-based route was found inferior to the bio-based pathway across all investigated environmental impact categories, due to burdens associated with oil extraction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Simulation and Analyses of Stage Separation of Two-Stage Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, Kelly J.; Covell, Peter F.

    2005-01-01

    NASA has initiated the development of methodologies, techniques and tools needed for the analysis and simulation of stage separation of next-generation reusable launch vehicles. As a part of this activity, the ConSep simulation tool is being developed, a MATLAB-based front and back end to the commercially available ADAMS (registered trademark) solver, an industry-standard package for solving multi-body dynamic problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic databases were assembled using data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle, and altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.

  15. Simulation and Analyses of Stage Separation of Two-Stage Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, K. J.; Covell, Peter F.

    2007-01-01

    NASA has initiated the development of methodologies, techniques and tools needed for the analysis and simulation of stage separation of next-generation reusable launch vehicles. As a part of this activity, the ConSep simulation tool is being developed, a MATLAB-based front and back end to the commercially available ADAMS (registered trademark) solver, an industry-standard package for solving multi-body dynamic problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic databases were assembled using data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle, and altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.

  16. Aerocapture Guidance and Performance at Mars for High-Mass Systems

    NASA Technical Reports Server (NTRS)

    Zumwalt, Carlie H.; Sostaric, Ronald r.; Westhelle, Carlos H.; Cianciolo, Alicia Dwyer

    2010-01-01

    The objective of this study is to understand the performance associated with using the aerocapture maneuver to slow high-mass systems from an Earth-approach trajectory into orbit around Mars. This work is done in conjunction with the Mars Entry Descent and Landing Systems Analysis (EDL-SA) task to explore candidate technologies that need to be developed in order to land large-scale payloads on the surface of Mars. The technologies considered include hypersonic inflatable aerodynamic decelerators (HIADs) and rigid mid-lift-to-drag (L/D) aeroshells. Nominal aerocapture trajectories were developed for the mid-L/D aeroshell and two sizes of HIADs, and Monte Carlo analysis was completed to understand sensitivities to dispersions. Additionally, a study was completed to determine the size of the larger of the two HIADs that would maintain design constraints on peak heat rate and diameter. Results show that each of the three aeroshell designs studied is a viable option for landing high-mass payloads, as none of the three exceeds performance requirements.

  17. On the treatment of ill-conditioned cases in the Monte Carlo library least-squares approach for inverse radiation analyzers

    NASA Astrophysics Data System (ADS)

    Meric, Ilker; Johansen, Geir A.; Holstad, Marie B.; Mattingly, John; Gardner, Robin P.

    2012-05-01

    Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data could, on the other hand, be performed either through the single peak analysis or the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. This assumption leads to, through the minimization of the chi-square value, a set of linear equations which has to be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to the ill-conditioning of the covariance matrix. The covariance matrix will become ill-conditioned whenever, in the subsequent calculations, two or more libraries are highly correlated. The ill-conditioning will also be unavoidable whenever the sample contains trace amounts of certain elements or elements with significantly low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately.
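
    To make the least-squares and ill-conditioning discussion concrete, here is a minimal numerical sketch of a library least-squares solve and a truncated-SVD alternative; the library response matrix, multipliers, noise level and truncation tolerance are synthetic placeholders, not PGNAA data or the iterative approach proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic library spectra (columns) and a "measured" composite spectrum.
n_chan, n_lib = 512, 4
L = np.abs(rng.normal(size=(n_chan, n_lib)))
L[:, 3] = L[:, 2] + 1e-3 * np.abs(rng.normal(size=n_chan))   # nearly collinear libraries
x_true = np.array([5.0, 1.0, 0.02, 0.01])                    # library multipliers
y = L @ x_true + rng.normal(scale=0.5, size=n_chan)

# Ordinary least squares becomes unstable when libraries are highly
# correlated (ill-conditioned normal equations / covariance matrix).
x_ls, *_ = np.linalg.lstsq(L, y, rcond=None)

# Truncated SVD: discard singular values below a relative tolerance.
U, s, Vt = np.linalg.svd(L, full_matrices=False)
keep = s > 1e-3 * s[0]
x_tsvd = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

print("condition number:", s[0] / s[-1])
print("LS:  ", np.round(x_ls, 3))
print("TSVD:", np.round(x_tsvd, 3))
```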

  18. Comparative evaluation of 1D and quasi-2D hydraulic models based on benchmark and real-world applications for uncertainty assessment in flood mapping

    NASA Astrophysics Data System (ADS)

    Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas

    2016-03-01

    One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark test with a mixed rectangular-triangular channel cross section. Using a Monte Carlo approach, we perform an extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated with each input variable and compare it to the overall uncertainty. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.

  19. The steady-state and transient electron transport within bulk zinc-blende indium nitride: The impact of crystal temperature and doping concentration variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siddiqua, Poppy; O'Leary, Stephen K., E-mail: stephen.oleary@ubc.ca

    2016-03-07

    Within the framework of a semi-classical three-valley Monte Carlo electron transport simulation approach, we analyze the steady-state and transient aspects of the electron transport within bulk zinc-blende indium nitride, with a focus on the response to variations in the crystal temperature and the doping concentration. We find that while the electron transport associated with zinc-blende InN is highly sensitive to the crystal temperature, it is not very sensitive to the doping concentration selection. The device consequences of these results are then explored.

  20. Comparison of Monte Carlo simulated and measured performance parameters of miniPET scanner

    NASA Astrophysics Data System (ADS)

    Kis, S. A.; Emri, M.; Opposits, G.; Bükki, T.; Valastyán, I.; Hegyesi, Gy.; Imrek, J.; Kalinka, G.; Molnár, J.; Novák, D.; Végh, J.; Kerek, A.; Trón, L.; Balkay, L.

    2007-02-01

    In vivo imaging of small laboratory animals is a valuable tool in the development of new drugs. For this purpose, miniPET, an easy-to-scale, modular small-animal PET camera, has been developed at our institutes. The system has four modules, which makes it possible to rotate the whole detector system around the axis of the field of view. Data collection and image reconstruction are performed using a data acquisition (DAQ) module with an Ethernet communication facility and a computer cluster of commercial PCs. Performance tests were carried out to determine system parameters such as energy resolution, sensitivity and noise equivalent count rate. A modified GEANT4-based GATE Monte Carlo software package was used to simulate PET data analogous to those of the performance measurements. GATE was run on a Linux cluster of 10 processors (64-bit, 3.0 GHz Xeon) controlled by a SUN grid engine. The application of this special computer cluster reduced the time necessary for the simulations by an order of magnitude. The simulated energy spectra, maximum rate of true coincidences and sensitivity of the camera were in good agreement with the measured parameters.

  1. A Monte Carlo risk assessment model for acrylamide formation in French fries.

    PubMed

    Cummins, Enda; Butler, Francis; Gormley, Ronan; Brunton, Nigel

    2009-10-01

    The objective of this study is to estimate the likely human exposure to the group 2a carcinogen acrylamide from French fries consumed by Irish consumers by developing a quantitative risk assessment model using Monte Carlo simulation techniques. Various stages in the French-fry-making process were modeled, from initial potato harvest through storage and processing procedures. The model was developed in Microsoft Excel with the @Risk add-on package and was run for 10,000 iterations using Latin hypercube sampling. The simulated mean acrylamide level in French fries was calculated to be 317 microg/kg. It was found that females are exposed to lower levels of acrylamide than males (mean exposures of 0.20 microg/kg bw/day and 0.27 microg/kg bw/day, respectively). Although the carcinogenic potency of acrylamide is not well known, the simulated probability of exceeding the average chronic human dietary intake of 1 microg/kg bw/day (as suggested by WHO) was 0.054 for males and 0.029 for females. A sensitivity analysis highlighted the importance of selecting cultivars with known low reducing sugar levels for French fry production. Strict control of cooking conditions (correlation coefficients of 0.42 and 0.35 for frying time and temperature, respectively) and blanching procedures (correlation coefficient -0.25) were also found to be important in ensuring minimal acrylamide formation.
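
    A minimal sketch of a Latin hypercube exposure calculation in the spirit described above is given below; the distributions, their parameters, and the exposure formula (concentration × intake / body weight) are illustrative assumptions rather than the @Risk model inputs from the study.

```python
import numpy as np
from scipy.stats import qmc, lognorm, norm

n = 10_000
lhs = qmc.LatinHypercube(d=3, seed=0).random(n)    # stratified uniforms in [0, 1)

# Map the LHS columns to hypothetical input distributions (illustrative only):
acrylamide = lognorm(s=0.6, scale=300).ppf(lhs[:, 0])            # ug/kg in fries
intake_g   = norm(loc=60, scale=20).ppf(lhs[:, 1]).clip(min=0)   # g consumed/day
body_kg    = norm(loc=70, scale=12).ppf(lhs[:, 2]).clip(min=30)  # body weight, kg

exposure = acrylamide * (intake_g / 1000.0) / body_kg            # ug/kg bw/day
print("mean exposure:", exposure.mean().round(3))
print("P(exposure > 1 ug/kg bw/day):", (exposure > 1.0).mean().round(4))
```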

  2. Cost-effectiveness analysis of pneumococcal conjugate vaccines in preventing pneumonia in Peruvian children.

    PubMed

    Mezones-Holguín, Edward; Bolaños-Díaz, Rafael; Fiestas, Víctor; Sanabria, César; Gutiérrez-Aguado, Alfonso; Fiestas, Fabián; Suárez, Víctor J; Rodriguez-Morales, Alfonso J; Hernández, Adrián V

    2014-12-15

    Pneumococcal pneumonia (PP) has a high burden of morbidity and mortality in children. Use of pneumococcal conjugate vaccines (PCVs) is an effective preventive measure. After the withdrawal of the 7-valent PCV (PCV7), the 10-valent (PCV10) and 13-valent (PCV13) vaccines are the alternatives in Peru. This study aimed to evaluate the cost-effectiveness of these vaccines in preventing PP in Peruvian children <5 years old. A cost-effectiveness analysis was developed in three phases: a systematic evidence search for calculating effectiveness; a cost analysis for vaccine strategies and outcome management; and an economic model based on decision tree analysis, including deterministic and probabilistic sensitivity analysis using acceptability curves, a tornado diagram, and Monte Carlo simulation. A hypothetical cohort of 100 vaccinated children per vaccine was built. An incremental cost-effectiveness ratio (ICER) was calculated. The isolation probability for all serotypes in each vaccine was estimated: 38% for PCV7, 41% for PCV10, and 17% for PCV13. Avoided hospitalization was found to be the best effectiveness measure for the model. Estimated costs for the PCV7, PCV10, and PCV13 cohorts were USD 13,761, 11,895, and 12,499, respectively. Costs per avoided hospitalization were USD 718 for PCV7, USD 333 for PCV10, and USD 162 for PCV13. In the ICER analysis, PCV7 was dominated by the other PCVs. Excluding PCV7, PCV13 was more cost-effective than PCV10 (confirmed in the sensitivity analysis). PCV10 and PCV13 are more cost-effective than PCV7 in the prevention of pneumonia in children <5 years old in Peru. PCV13 prevents more hospitalizations and is more cost-effective than PCV10. These results should be considered when making decisions about the Peruvian National Immunization Schedule.
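
    The ICER and probabilistic sensitivity calculations can be sketched generically as below; the cost and effectiveness distributions, the willingness-to-pay value, and the strategy labels are hypothetical and are not the Peruvian model inputs.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
wtp = 500.0   # hypothetical willingness-to-pay per avoided hospitalization (USD)

# Hypothetical cost and effect distributions for two strategies (per cohort):
cost_a = rng.normal(12_500, 600, n)      # "PCV13-like" strategy
cost_b = rng.normal(11_900, 600, n)      # "PCV10-like" strategy
eff_a  = rng.beta(40, 60, n) * 100       # avoided hospitalizations per cohort
eff_b  = rng.beta(30, 70, n) * 100

d_cost, d_eff = cost_a - cost_b, eff_a - eff_b
icer = d_cost.mean() / d_eff.mean()
print("ICER (USD per extra avoided hospitalization):", round(icer, 1))

# Acceptability: share of simulations in which the incremental net
# monetary benefit of strategy A over B is positive at the given WTP.
nmb = wtp * d_eff - d_cost
print("P(A cost-effective at WTP):", (nmb > 0).mean())
```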

  3. A study on the sensitivity of self-powered neutron detectors (SPNDs)

    NASA Astrophysics Data System (ADS)

    Lee, Wanno; Cho, Gyuseong; Kim, Kwanghyun; Kim, Hee Joon; choi, Yuseon; Park, Moon Chu; Kim, Soongpyung

    2001-08-01

    Self-powered neutron detectors (SPNDs) are widely used in reactors to monitor neutron flux. While they have several advantages, such as small size and the relatively simple electronics required for their use, they have some intrinsic problems (a low output current, a slow response time and a rapid change of sensitivity) that make them difficult to use over the long term. Monte Carlo simulation was used to calculate the escape probability as a function of the birth position of the emitted beta particle for the geometry of rhodium-based SPNDs. A simple numerical method calculated the initial generation rate of beta particles and the change of the generation rate due to rhodium burnup. Using the results of the simulation and the simple numerical method, the burnup profile of the rhodium number density and the neutron sensitivity were calculated as functions of burnup time in reactors. The method was verified by comparison with other published results and with initial-sensitivity data from YGN 3,4 (Young Gwang Nuclear plants 3 and 4). In addition, to improve some properties of the rhodium-based SPNDs currently in use, a modified geometry is proposed. The proposed tube-type geometry is able to increase the initial sensitivity by increasing the escape probability. The escape probability was calculated for varying insulator thicknesses, and the solid-type and tube-type geometries were compared at each insulator thickness. The method used here can be applied to the analysis and design of other types of SPNDs.

  4. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards a better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and a literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reducing climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
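
    A minimal sketch of the quasi-Monte Carlo sampling step, using a scrambled Sobol sequence scaled onto uniform prior ranges, is shown below; the parameter names and ranges are placeholders, not the CLM4.5 priors used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical PFT-specific uniform prior ranges (placeholders, not CLM4.5 values).
names    = ["slope_conductance", "sla_top", "leaf_cn", "f_leaf_n_rubisco"]
l_bounds = [4.0, 0.005, 20.0, 0.05]
u_bounds = [10.0, 0.040, 60.0, 0.25]

sampler = qmc.Sobol(d=len(names), scramble=True, seed=42)
unit = sampler.random_base2(m=10)              # 2**10 = 1024 quasi-random samples
samples = qmc.scale(unit, l_bounds, u_bounds)  # map [0,1)^d onto the prior ranges

print(samples.shape)                           # (1024, 4)
print(dict(zip(names, np.round(samples[0], 4))))
```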

  5. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2013-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas of the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphics processing unit and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  6. Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing

    2018-05-01

    The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wide-band sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
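
    A small sketch of the additive-random-sampling idea is given below: with randomized sampling intervals, a sparse tone above the mean-rate Nyquist limit can still be identified, here via a Lomb-Scargle periodogram of the non-uniform samples; the rates, jitter, and tone frequency are illustrative, not the φ-OTDR system parameters or the reconstruction used in the paper.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(4)

mean_rate = 1_000.0                    # mean sampling rate, Hz (uniform Nyquist ~ 500 Hz)
f_vib = 1_800.0                        # sparse vibration tone above that limit, Hz
n = 4_000

# Additive random sampling: each interval is the mean interval plus jitter.
dt = 1.0 / mean_rate + rng.uniform(-0.4, 0.4, n) / mean_rate
t = np.cumsum(dt)
y = np.sin(2 * np.pi * f_vib * t) + 0.2 * rng.standard_normal(n)

freqs = np.linspace(10.0, 3_000.0, 6_000)              # Hz
pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)  # expects angular frequencies

print("estimated tone frequency (Hz):", round(freqs[np.argmax(pgram)], 1))
```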

  7. First Estimate of the Exoplanet Population from Kepler Observations

    NASA Astrophysics Data System (ADS)

    Borucki, William J.; Koch, D. G.; Batalha, N.; Caldwell, D.; Dunham, E. W.; Gautier, T. N., III; Howell, S. B.; Jenkins, J. M.; Marcy, G. W.; Rowe, J.; Charbonneau, D.; Ciardi, D.; Ford, E. B.; Christiansen, J. L.; Kolodziejczak, J.; Prsa, A.

    2011-05-01

    A model was developed to provide a first estimate of the intrinsic frequency of planetary candidates based on the number of detected planetary candidates and the measured noise for each of the 156,000 observed stars. The estimated distributions for the exoplanet frequency are presented with respect to the semi-major axis and the stellar effective temperature and represent values appropriate only to short-period candidates. Improved estimates are expected after a Monte Carlo study of the sensitivity of the data analysis pipeline to transit signals injected at the pixel level is completed.

  8. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B., E-mail: wollaber@lanl.go; Larsen, Edward W., E-mail: edlarsen@umich.ed

    2011-02-20

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  9. Conserved directed percolation: exact quasistationary distribution of small systems and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    César Mansur Filho, Júlio; Dickman, Ronald

    2011-05-01

    We study symmetric sleepy random walkers, a model exhibiting an absorbing-state phase transition in the conserved directed percolation (CDP) universality class. Unlike most examples of this class studied previously, this model possesses a continuously variable control parameter, facilitating analysis of critical properties. We study the model using two complementary approaches: analysis of the numerically exact quasistationary (QS) probability distribution on rings of up to 22 sites, and Monte Carlo simulation of systems of up to 32 000 sites. The resulting estimates for the critical exponents β, β/ν…

  10. Cost-utility analysis of an advanced pressure ulcer management protocol followed by trained wound, ostomy, and continence nurses.

    PubMed

    Kaitani, Toshiko; Nakagami, Gojiro; Iizaka, Shinji; Fukuda, Takashi; Oe, Makoto; Igarashi, Ataru; Mori, Taketoshi; Takemura, Yukie; Mizokami, Yuko; Sugama, Junko; Sanada, Hiromi

    2015-01-01

    The high prevalence of severe pressure ulcers (PUs) is an important issue in Japan. In a previous study, we devised an advanced PU management protocol to enable early detection of and intervention for deep tissue injury and critical colonization. This protocol was effective in preventing more severe PUs. The present study aimed to compare the cost-effectiveness of care provided using the advanced PU management protocol, implemented by trained wound, ostomy, and continence nurses (WOCNs), with that of conventional care provided by a control group of WOCNs, from a medical provider's perspective. A Markov model was constructed for a 1-year time horizon to determine the incremental cost-effectiveness ratio of advanced PU management compared with conventional care. The number of quality-adjusted life-years gained and the cost in Japanese yen (¥) ($US1 = ¥120; 2015) were used as the outcomes. Model inputs for clinical probabilities and related costs were based on our previous clinical trial results. Univariate sensitivity analyses were performed. Furthermore, a Bayesian multivariate probabilistic sensitivity analysis was performed using Monte Carlo simulations for advanced PU management. Two different models were created for the initial cohort distribution. For both models, the expected effectiveness for the intervention group using advanced PU management techniques was high, with a low expected cost. The sensitivity analyses suggested that the results were robust. Intervention by WOCNs using advanced PU management techniques was more effective and cost-effective than conventional care. © 2015 by the Wound Healing Society.

  11. Cost-effectiveness analysis of repeat fine-needle aspiration for thyroid biopsies read as atypia of undetermined significance.

    PubMed

    Heller, Michael; Zanocco, Kyle; Zydowicz, Sara; Elaraj, Dina; Nayar, Ritu; Sturgeon, Cord

    2012-09-01

    The 2007 National Cancer Institute (NCI) conference on Thyroid Fine-Needle Aspiration (FNA) introduced the category atypia of undetermined significance (AUS) or follicular lesion of undetermined significance (FLUS). Repeat FNA in 3 to 6 months was recommended for low-risk patients. Compliance with these recommendations has been suboptimal. We hypothesized that repeat FNA would be more effective than diagnostic lobectomy, with decreased costs and improved rates of cancer detection. Cost-effectiveness analysis was performed in which we compared diagnostic lobectomy with repeat FNA. A Markov model was developed. Outcomes and probabilities were identified from literature review. Third-party payer costs were estimated in 2010 US dollars. Outcomes were weighted by use of the quality-of-life utility factors, yielding quality-adjusted life years (QALYs). Monte Carlo simulation and sensitivity analysis were used to examine the uncertainty of probability, cost, and utility estimates. The diagnostic lobectomy strategy cost $8,057 and produced 23.99 QALYs. Repeat FNA cost $2,462 and produced 24.05 QALYs. Repeat FNA was dominant until the cost of FNA increased to $6,091. Dominance of the repeat FNA strategy was not sensitive to the cost of operation or the complication rate. The NCI recommendations for repeat FNA regarding follow-up of AUS/FLUS results are cost-effective. Improving compliance with these guidelines should lead to less overall costs, greater quality of life, and fewer unnecessary operations. Copyright © 2012 Mosby, Inc. All rights reserved.
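
    For readers unfamiliar with Markov cohort models, the sketch below shows the basic bookkeeping of cycling a cohort through states while accumulating discounted costs and QALYs; the states, transition probabilities, costs, utilities, and discount rate are hypothetical and are not the published model values.

```python
import numpy as np

# Hypothetical 3-state model (illustrative only, not the published model):
# 0 = well after repeat FNA, 1 = surgery/follow-up, 2 = cancer detected.
P = np.array([[0.90, 0.07, 0.03],      # annual transition probabilities
              [0.80, 0.15, 0.05],
              [0.00, 0.00, 1.00]])
cost    = np.array([150.0, 6000.0, 12000.0])   # cost per cycle in each state (USD)
utility = np.array([0.95, 0.80, 0.70])         # QALY weight per cycle in each state

cohort = np.array([1.0, 0.0, 0.0])   # everyone starts in "well"
total_cost = total_qaly = 0.0
discount = 0.03
for cycle in range(30):              # 30 one-year cycles
    df = 1.0 / (1.0 + discount) ** cycle
    total_cost += df * cohort @ cost
    total_qaly += df * cohort @ utility
    cohort = cohort @ P              # advance the cohort one cycle

print(f"expected cost: {total_cost:,.0f}   expected QALYs: {total_qaly:.2f}")
```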

  12. Cost-effectiveness analysis of the Xpert MTB/RIF assay for rapid diagnosis of suspected tuberculosis in an intermediate burden area.

    PubMed

    You, Joyce H S; Lui, Grace; Kam, Kai Man; Lee, Nelson L S

    2015-04-01

    We examined, from a Hong Kong healthcare providers' perspective, the cost-effectiveness of rapid diagnosis with Xpert in patients hospitalized for suspected active pulmonary tuberculosis (PTB). A decision tree was designed to simulate the outcomes of three diagnostic assessment strategies in adult patients hospitalized for suspected active PTB: a conventional approach, sputum smear plus Xpert for acid-fast bacilli (AFB) smear-negative cases, and a single sputum Xpert test. Model inputs were derived from the literature. Outcome measures were direct medical cost, one-year mortality rate, quality-adjusted life-years (QALYs) and incremental cost per QALY (ICER). In the base-case analysis, Xpert was more effective, with higher QALYs gained and a lower mortality rate, than smear plus Xpert, with an ICER of USD 99. A conventional diagnostic approach was the least preferred option, with the highest cost, lowest QALYs gained and highest mortality rate. Sensitivity analysis showed that Xpert would be the most cost-effective option if the sensitivity of sputum AFB smear microscopy was ≤74%. The probabilities of Xpert, smear plus Xpert and a conventional approach being cost-effective were 94.5%, 5.5% and 0%, respectively, in 10,000 Monte Carlo simulations. The Xpert sputum test appears to be a highly cost-effective diagnostic strategy for patients with suspected active PTB in an intermediate burden area such as Hong Kong. Copyright © 2015 The British Infection Association. Published by Elsevier Ltd. All rights reserved.

  13. Spatial analysis and health risk assessment of heavy metals concentration in drinking water resources.

    PubMed

    Fallahzadeh, Reza Ali; Ghaneian, Mohammad Taghi; Miri, Mohammad; Dashti, Mohamad Mehdi

    2017-11-01

    Heavy metals in drinking water can be considered a threat to human health, and the carcinogenic risk of such metals has been shown in several studies. The present study aimed to investigate the concentrations of the heavy metals As, Cd, Cr, Cu, Fe, Hg, Mn, Ni, Pb, and Zn in 39 water supply wells and 5 water reservoirs within the cities of Ardakan, Meibod, Abarkouh, Bafgh, and Bahabad. The spatial distribution of the concentrations was mapped with the software ArcGIS. Non-carcinogenic hazard and lifetime cancer risk simulations were conducted for lead and nickel using the Monte Carlo technique, and a sensitivity analysis was carried out to find the parameters with the greatest effect on the risk assessment. The results indicated that the concentrations of all metals in the 39 wells (except iron in 3 cases) were within the levels given in the EPA, World Health Organization, and Pollution Control Department standards. Based on the spatial distribution results for all studied regions, the highest concentrations were found for iron and zinc. The calculated HQ values for non-carcinogenic hazard indicated a reasonable risk. Average lifetime cancer risks for lead in Ardakan and nickel in Meibod and Bahabad were 1.09 × 10⁻³, 1.67 × 10⁻¹, and 2 × 10⁻¹, respectively, demonstrating high carcinogenic risk compared with similar standards and studies. The sensitivity analysis suggests a high impact of concentration and body weight (BW) on the carcinogenic risk.
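
    A minimal sketch of the Monte Carlo risk calculation is shown below, using the standard chronic-daily-intake, hazard-quotient, and slope-factor formulas; all distributions, exposure factors, the reference dose, and the slope factor are illustrative placeholders rather than the measured well data or the toxicity values used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical inputs (placeholders, not the measured well data):
conc = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n)   # metal in water, mg/L
ir   = rng.normal(2.0, 0.3, n).clip(min=0.5)                 # water intake, L/day
bw   = rng.normal(70.0, 12.0, n).clip(min=30)                # body weight, kg
ef, ed = 350, 30                                             # exposure days/yr, years
at_nc, at_c = 30 * 365, 70 * 365                             # averaging time, days

cdi_nc = conc * ir * ef * ed / (bw * at_nc)   # chronic daily intake, mg/kg/day
cdi_c  = conc * ir * ef * ed / (bw * at_c)

rfd, sf = 0.0035, 0.0085   # illustrative oral reference dose and slope factor
hq   = cdi_nc / rfd        # hazard quotient (> 1 suggests non-carcinogenic concern)
risk = cdi_c * sf          # incremental lifetime cancer risk

print("mean HQ:", hq.mean().round(3), "  P(HQ > 1):", (hq > 1).mean())
print("mean cancer risk:", f"{risk.mean():.2e}")
```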

  14. The Impact of Misspecifying the Within-Subject Covariance Structure in Multiwave Longitudinal Multilevel Models: A Monte Carlo Study

    ERIC Educational Resources Information Center

    Kwok, Oi-man; West, Stephen G.; Green, Samuel B.

    2007-01-01

    This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general misspecification of the Σ matrix usually resulted in overestimation of the variances of the random…

  15. Direct Simulation Monte Carlo Calculations in Support of the Columbia Shuttle Orbiter Accident Investigation

    NASA Technical Reports Server (NTRS)

    Gallis, Michael A.; LeBeau, Gerald J.; Boyles, Katie A.

    2003-01-01

    The Direct Simulation Monte Carlo method was used to provide 3-D simulations of the early entry phase of the Shuttle Orbiter. Undamaged and damaged scenarios were modeled to provide calibration points for engineering "bridging function" types of analysis. Currently the simulation technology (software and hardware) is mature enough to allow realistic simulations of three-dimensional vehicles.

  16. Exact and Monte carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.

    PubMed

    Berry, K J; Mielke, P W

    2000-12-01

    Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
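
    A minimal Python sketch of the Monte Carlo resampling idea (not the FORTRAN programs described) is shown below: it computes a two-sided permutation p-value for the rank-sum statistic, with ties handled by midranks; the example data are arbitrary.

```python
import numpy as np
from scipy.stats import rankdata

def mc_rank_sum_test(x, y, n_resamples=50_000, seed=0):
    """Two-sided Monte Carlo permutation p-value for the rank-sum statistic.
    Ties are handled with midranks (rankdata's default 'average' method)."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n_x = len(x)
    ranks = rankdata(pooled)                  # midranks compensate for ties
    w_obs = ranks[:n_x].sum()
    w_exp = n_x * (len(pooled) + 1) / 2.0     # null expectation of the rank sum

    count = 0
    for _ in range(n_resamples):
        perm = rng.permutation(ranks)         # random relabeling of the groups
        if abs(perm[:n_x].sum() - w_exp) >= abs(w_obs - w_exp):
            count += 1
    return count / n_resamples

x = [1.1, 2.3, 2.3, 3.0, 4.5]
y = [2.3, 3.8, 4.1, 5.0, 6.2, 6.2]
print("Monte Carlo p-value:", mc_rank_sum_test(x, y))
```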

  17. Stochastic Analysis of Orbital Lifetimes of Spacecraft

    NASA Technical Reports Server (NTRS)

    Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David

    2008-01-01

    A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC)--a Fortran computer program, consisting of a previously developed long-term orbit-propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for the number of user-specified cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.

  18. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic power flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random power injections (such as wind power) brings great difficulties to PPF calculation. Monte Carlo simulation (MCS) and analytical methods are the two commonly used approaches to solve PPF. MCS has high accuracy but is very time consuming. An analytical method such as the cumulants method (CM) has high computing efficiency, but calculating the cumulants is not convenient when the wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model the different wind power outputs. This method combines the advantages of both MCS and the analytical method: it not only has high computing efficiency, but also provides solutions with sufficient accuracy, which makes it very suitable for on-line analysis.
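
    The paper models correlation through a joint empirical distribution; the sketch below shows one common alternative way to draw correlated wind injections for an MCS-based PPF, a Gaussian copula with empirical marginals, using synthetic "historical" records and an assumed correlation of 0.7.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

# Synthetic "historical" wind power records for two farms (placeholders, MW).
hist = np.column_stack([rng.weibull(2.0, 5000) * 40,
                        rng.weibull(2.2, 5000) * 35])

def empirical_ppf(data, u):
    """Inverse empirical CDF: map uniforms u in (0, 1) to the data quantiles."""
    return np.quantile(data, u)

# Target correlation between the two injections and its Cholesky factor.
R = np.array([[1.0, 0.7],
              [0.7, 1.0]])
L = np.linalg.cholesky(R)

n = 10_000
z = rng.standard_normal((n, 2)) @ L.T        # correlated standard normals
u = norm.cdf(z)                              # Gaussian copula uniforms
wind = np.column_stack([empirical_ppf(hist[:, 0], u[:, 0]),
                        empirical_ppf(hist[:, 1], u[:, 1])])

print("sample correlation:", np.corrcoef(wind.T)[0, 1].round(3))
# Each row of `wind` would feed one deterministic power-flow solve in the MCS.
```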

  19. Monte Carlo investigation of transient acoustic fields in partially or completely bounded medium. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thanedar, B. D.

    1972-01-01

    A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical method, applicable to a bounded medium, that supplements analytical methods of solution, which are only valid when the boundaries have simple shapes. For the analysis, a suitable model was created from which an algorithm was developed for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.

  20. Electrosorption of a modified electrode in the vicinity of phase transition: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Gavilán Arriazu, E. M.; Pinto, O. A.

    2018-03-01

    We present a Monte Carlo study of the electrosorption of an electroactive species on a modified electrode. The surface of the electrode is modified by the irreversible adsorption of a non-electroactive species which is able to block a percentage of the adsorption sites. This generates an electrode with variable-connectivity sites. A second, electroactive species is adsorbed in the surface vacancies and can interact repulsively with itself. In particular, we are interested in the analysis of the effect of the non-electroactive species near the critical regime, where the c(2 × 2) structure is formed. Lattice-gas models and Monte Carlo simulations in the grand canonical ensemble are used. The analysis is based on the study of voltammograms, order parameters, isotherms, and configurational entropy per site at several values of the energies and coverage degrees of the non-electroactive species.
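
    A minimal grand-canonical lattice-gas Monte Carlo sketch in the spirit described above is given below; the lattice size, interaction energy, chemical potential, and blocked-site fraction are illustrative, and the sketch only demonstrates the sampling machinery, not the critical-regime analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
L, beta, eps, mu = 32, 2.0, 1.0, 2.0   # lattice size, 1/kT, NN repulsion, chemical potential
block_frac = 0.10                      # fraction of sites blocked by the inert species

blocked = rng.random((L, L)) < block_frac
occ = np.zeros((L, L), dtype=int)      # occupancy of the electroactive species

def nn_sum(occ, i, j):
    """Number of occupied nearest neighbours with periodic boundaries."""
    return (occ[(i + 1) % L, j] + occ[(i - 1) % L, j] +
            occ[i, (j + 1) % L] + occ[i, (j - 1) % L])

for step in range(200_000):
    i, j = rng.integers(L, size=2)
    if blocked[i, j]:
        continue                       # blocked sites never adsorb
    d_occ = 1 - 2 * occ[i, j]          # +1 insertion, -1 removal
    # Energy change of the grand-canonical lattice gas:
    # H = eps * sum_<kl> n_k n_l - mu * sum_k n_k
    dE = d_occ * (eps * nn_sum(occ, i, j) - mu)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        occ[i, j] += d_occ             # Metropolis acceptance

coverage = occ.sum() / (~blocked).sum()
print("coverage of available sites:", round(coverage, 3))
```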

  1. Acoustic localization of triggered lightning

    NASA Astrophysics Data System (ADS)

    Arechiga, Rene O.; Johnson, Jeffrey B.; Edens, Harald E.; Thomas, Ronald J.; Rison, William

    2011-05-01

    We use acoustic (3.3-500 Hz) arrays to locate local (<20 km) thunder produced by triggered lightning in the Magdalena Mountains of central New Mexico. The locations of the thunder sources are determined by the array back azimuth and the elapsed time since discharge of the lightning flash. We compare the acoustic source locations with those obtained by the Lightning Mapping Array (LMA) from Langmuir Laboratory, which is capable of accurately locating the lightning channels. To estimate the location accuracy of the acoustic array we performed Monte Carlo simulations and measured the distance (nearest neighbors) between acoustic and LMA sources. For close sources (<5 km) the mean nearest-neighbors distance was 185 m compared to 100 m predicted by the Monte Carlo analysis. For far distances (>6 km) the error increases to 800 m for the nearest neighbors and 650 m for the Monte Carlo analysis. This work shows that thunder sources can be accurately located using acoustic signals.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudhyadhom, A; McGuinness, C; Descovich, M

    Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match with beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom versus a Ray-Tracing calculation on a single beam collimator-by-collimator calculation. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with film insert. Separately, plans were delivered in an in-house created lung phantom with a PinPoint chamber insert within a lung simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small size collimators (10, 12.5, and 15mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2 × 2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom show high correspondence of over 95% gamma at the 2%/2mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10mm collimator resulted in a dose difference of ∼8% for both Monte-Carlo and Ray-Tracing indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10mm fields and smaller.

  3. Cost analysis of open radical cystectomy versus robot-assisted radical cystectomy.

    PubMed

    Bansal, Sukhchain S; Dogra, Tara; Smith, Peter W; Amran, Maisarah; Auluck, Ishna; Bhambra, Maninder; Sura, Manraj S; Rowe, Edward; Koupparis, Anthony

    2018-03-01

    To perform a cost analysis comparing the cost of robot-assisted radical cystectomy (RARC) with open RC (ORC) in a UK tertiary referral centre and to identify the key cost drivers. Data on hospital length of stay (LOS), operative time (OT), transfusion rate and volume, and complication rate were obtained from a prospectively updated institutional database for patients undergoing RARC or ORC. A cost decision tree model was created. Sensitivity analysis was performed to find key drivers of overall cost and to find breakeven points with ORC. Monte Carlo analysis was performed to quantify the variability in the dataset. One RARC procedure costs £12 449.87, or £12 106.12 if the robot was donated via charitable funds. In comparison, one ORC procedure costs £10 474.54. RARC is 18.9% more expensive than ORC. The key cost drivers were OT, LOS, and the number of cases performed per annum. High ongoing equipment costs remain a large barrier to the cost of RARC falling. However, minimal improvements in patient quality of life would be required to offset this difference. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.

  4. Iterative Monte Carlo analysis of spin-dependent parton distributions

    DOE PAGES

    Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...

    2016-04-05

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  5. The structure of liquid water by polarized neutron diffraction and reverse Monte Carlo modelling.

    PubMed

    Temleitner, László; Pusztai, László; Schweika, Werner

    2007-08-22

    The coherent static structure factor of water has been investigated by polarized neutron diffraction. Polarization analysis allows us to separate the huge incoherent scattering background from hydrogen and to obtain high quality data of the coherent scattering from four different mixtures of liquid H2O and D2O. The information obtained by the variation of the scattering contrast confines the configurational space of water and is used by the reverse Monte Carlo technique to model the total structure factors. Structural characteristics have been calculated directly from the resulting sets of particle coordinates. Consistency with existing partial pair correlation functions, derived without the application of polarized neutrons, was checked by incorporating them into our reverse Monte Carlo calculations. We also performed Monte Carlo simulations of a hard sphere system, which provides an accurate estimate of the information content of the measured data. It is shown that the present combination of polarized neutron scattering and reverse Monte Carlo structural modelling is a promising approach towards a detailed understanding of the microscopic structure of water.

  6. Monte Carlo simulation of near-infrared light propagation in realistic adult head models with hair follicles

    NASA Astrophysics Data System (ADS)

    Pan, Boan; Fang, Xiang; Liu, Weichao; Li, Nanxi; Zhao, Ke; Li, Ting

    2018-02-01

    Near-infrared spectroscopy (NIRS) and diffuse correlation spectroscopy (DCS) have been used to measure brain activation, which is clinically important. Monte Carlo simulation has been applied to model near-infrared light propagation in biological tissue and to predict light diffusion and brain activation. However, previous studies have rarely considered hair and hair follicles as a contributing factor. Here, we attempt to use MCVM (Monte Carlo simulation based on 3D voxelized media) to examine light transmission, absorption, fluence, spatial sensitivity distribution (SSD) and brain activation judgement in the presence or absence of hair follicles. The dataset in this study is a series of high-resolution cryosectional color photographs of a standing Chinese male adult. We found that the number of photons transmitted under the scalp decreases dramatically, and the number of photons exported to the detector also decreases, as the density of hair follicles increases. Without hair follicles, these quantities reach their maximum values. Meanwhile, the light distribution and brain activation judgement change steadily with the hair follicle density. The findings indicate that hair follicles influence the light distribution and brain activation judgement in NIRS.
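
    As a rough illustration of photon-packet Monte Carlo (much simpler than the voxelized MCVM head model), the sketch below follows packets through a homogeneous slab with absorption and isotropic scattering; the optical properties and slab thickness are placeholders, and boundary crossings are counted with the full remaining packet weight as a simplification.

```python
import numpy as np

rng = np.random.default_rng(8)
mu_a, mu_s, thickness = 0.1, 10.0, 2.0   # absorption, scattering (1/mm), slab depth (mm)
mu_t = mu_a + mu_s
albedo = mu_s / mu_t
n_photons = 5_000

transmitted = reflected = absorbed = 0.0
for _ in range(n_photons):
    z, uz, w = 0.0, 1.0, 1.0                 # depth, direction cosine, packet weight
    while True:
        z += uz * (-np.log(rng.random()) / mu_t)   # sample free path to next collision
        if z < 0.0:
            reflected += w; break            # escaped back through the surface
        if z > thickness:
            transmitted += w; break          # escaped through the far side
        absorbed += w * (1.0 - albedo)       # deposit the absorbed fraction
        w *= albedo
        if w < 1e-3:                         # simple termination (no Russian roulette)
            break
        uz = 2.0 * rng.random() - 1.0        # isotropic scattering: new cos(theta)

print("transmittance:", round(transmitted / n_photons, 4))
print("diffuse reflectance:", round(reflected / n_photons, 4))
```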

  7. Efficient Simulation of Secondary Fluorescence Via NIST DTSA-II Monte Carlo.

    PubMed

    Ritchie, Nicholas W M

    2017-06-01

    Secondary fluorescence, the final term in the familiar matrix correction triumvirate Z·A·F, is the most challenging for Monte Carlo models to simulate. In fact, only two implementations of Monte Carlo models commonly used to simulate electron probe X-ray spectra can calculate secondary fluorescence: PENEPMA and NIST DTSA-II (DTSA-II is discussed herein). These two models share many physical models, but there are some important differences in the way each implements X-ray emission, including secondary fluorescence. PENEPMA is based on PENELOPE, a general purpose software package for simulation of both relativistic and subrelativistic electron/positron interactions with matter. On the other hand, NIST DTSA-II was designed exclusively for simulation of X-ray spectra generated by subrelativistic electrons. NIST DTSA-II uses variance reduction techniques unsuited to general purpose code. These optimizations help NIST DTSA-II to be orders of magnitude more computationally efficient while retaining detector position sensitivity. Simulations execute in minutes rather than hours and can model differences that result from detector position. Both PENEPMA and NIST DTSA-II are capable of handling complex sample geometries, and we demonstrate that both are of similar accuracy when modeling experimental secondary fluorescence data from the literature.

  8. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.

  9. A cost-utility analysis of dabigatran, enoxaparin, and usual care for venous thromboprophylaxis after hip or knee replacement surgery in Thailand.

    PubMed

    Kotirum, Surachai; Chongmelaxme, Bunchai; Chaiyakunapruk, Nathorn

    2017-02-01

    To analyze the cost-utility of oral dabigatran etexilate, enoxaparin sodium injection, and no intervention for venous thromboembolism (VTE) prophylaxis after total hip or knee replacement (THR/TKR) surgery among Thai patients. A cost-utility analysis using a decision tree model was conducted from the societal and healthcare payers' perspectives to simulate relevant costs and health outcomes covering a 3-month time horizon. Costs were adjusted to year 2014. A willingness-to-pay threshold of THB 160,000 (USD 4,926) was used. One-way sensitivity and probabilistic sensitivity analyses using a Monte Carlo simulation were performed. Compared with no VTE prophylaxis, dabigatran and enoxaparin after THR and TKR surgery incurred higher costs and increased quality-adjusted life-years (QALYs); however, their incremental cost-effectiveness ratios were well above the willingness-to-pay threshold. Compared with enoxaparin, dabigatran for THR/TKR lowered VTE complications but increased bleeding cases; dabigatran was cost-saving, reducing costs (by THB 3,809.96 (USD 117.30) for THR) and producing more QALYs (by 0.00013 for THR). Dabigatran (vs. enoxaparin) had a 98% likelihood of being cost-effective. Dabigatran is cost-saving compared with enoxaparin for VTE prophylaxis after THR or TKR in the Thai context. However, both medications are not cost-effective compared with no thromboprophylaxis.

  10. An INCA model for pathogens in rivers and catchments: Model structure, sensitivity analysis and application to the River Thames catchment, UK.

    PubMed

    Whitehead, P G; Leckie, H; Rankinen, K; Butterfield, D; Futter, M N; Bussi, G

    2016-12-01

    Pathogens are an ongoing issue for catchment water management and quantifying their transport, loss and potential impacts at key locations, such as water abstractions for public supply and bathing sites, is an important aspect of catchment and coastal management. The Integrated Catchment Model (INCA) has been adapted to model the sources and sinks of pathogens and to capture the dominant dynamics and processes controlling pathogens in catchments. The model simulates the stores of pathogens in soils, sediments, rivers and groundwaters and can account for diffuse inputs of pathogens from agriculture, urban areas or atmospheric deposition. The model also allows for point source discharges from intensive livestock units or from sewage treatment works or any industrial input to river systems. Model equations are presented and the new pathogens model has been applied to the River Thames in order to assess total coliform (TC) responses under current and projected future land use. A Monte Carlo sensitivity analysis indicates that the input coliform estimates from agricultural sources and decay rates are the crucial parameters controlling pathogen behaviour. Whilst there are a number of uncertainties associated with the model that should be accounted for, INCA-Pathogens potentially provides a useful tool to inform policy decisions and manage pathogen loading in river systems. Copyright © 2016. Published by Elsevier B.V.

  11. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
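
    A fission matrix tallied by Monte Carlo can be turned into an eigenvalue and a fundamental-mode source by simple power iteration. The sketch below illustrates only that linear-algebra step, with a made-up 3x3 matrix standing in for a tallied fission matrix.

        # Power iteration on a made-up fission matrix F, where F[i, j] is the expected
        # number of fission neutrons born in region i per fission neutron born in region j.
        import numpy as np

        F = np.array([[0.50, 0.20, 0.05],
                      [0.20, 0.55, 0.20],
                      [0.05, 0.20, 0.50]])

        s = np.ones(F.shape[0]) / F.shape[0]   # initial guess for the fission source
        for _ in range(200):
            s_new = F @ s
            k_eff = s_new.sum() / s.sum()      # dominant-eigenvalue estimate
            s = s_new / s_new.sum()            # normalized fundamental-mode source

        print("k_eff =", round(k_eff, 5))
        print("source distribution =", np.round(s, 4))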

  12. Monte Carlo Simulations for the Detection of Buried Objects Using Single Sided Backscattered Radiation.

    PubMed

    Yip, Mary; Saripan, M Iqbal; Wells, Kevin; Bradley, David A

    2015-01-01

    Detection of buried improvised explosive devices (IEDs) is a delicate task, leading to a need to develop sensitive stand-off detection technology. The shape, composition and size of the IEDs can be expected to be revised over time in an effort to overcome increasingly sophisticated detection methods. As an example, for the most part, landmines are found through metal detection, which has led to the increasing use of non-ferrous materials such as wood or plastic containers for chemical-based explosives. Monte Carlo simulations have been undertaken considering three different commercially available detector materials (hyperpure-Ge (HPGe), lanthanum(III) bromide (LaBr) and thallium-activated sodium iodide (NaI(Tl))), applied at a stand-off distance of 50 cm from the surface and burial depths of 0, 5 and 10 cm, with sand as the obfuscating medium. Target materials representing medium density wood and mild steel have been considered. Each detector has been modelled as a 10 cm thick cylinder with a 20 cm diameter. It appears that HPGe represents the most promising detector for this application. Although it was not the highest density material studied, its excellent energy resolving capability leads to the highest quality spectra from which detection decisions can be inferred. The simulation work undertaken here suggests that a vehicle-borne threat detection system could be envisaged using a single betatron and a series of detectors operating in parallel, observing the space directly in front of the vehicle path. Furthermore, results show that non-ferrous materials such as wood can be effectively discerned in such a remote-operated detection system, with the potential to apply a signature-analysis template matching technique for real-time analysis of such data.

  13. Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model

    NASA Astrophysics Data System (ADS)

    Prakash, Shashi; Kumar, Nitish; Kumar, Subrata

    2016-09-01

    CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly(methyl methacrylate)). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and acceptable surface finish on microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. There are a few analytical models available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. There are a number of variants of transparent PMMA available in the market with different values of thermophysical properties. Therefore, for applying such analytical models, the values of these thermophysical properties are required to be known exactly. Although the values of laser beam parameters are readily available, extensive experiments are required to determine the values of the thermophysical properties of PMMA. The unavailability of exact values of these properties restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty with different power and scanning speed has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
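
    The following sketch shows one way such a Monte Carlo uncertainty propagation can be organized: thermophysical properties are sampled from assumed distributions and pushed through a depth relation. The energy-balance expression and all numerical values below are hypothetical stand-ins, not the model or property values used in the paper.

        # Monte Carlo propagation of thermophysical-property uncertainty into predicted
        # microchannel depth. The depth relation d = P / (w * v * rho * (Cp*(Tv - T0) + Lv))
        # is a hypothetical energy-balance form standing in for the actual analytical model.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 50_000

        P, v, w = 20.0, 0.10, 200e-6        # laser power (W), scan speed (m/s), beam width (m)
        T0, Tv = 300.0, 650.0               # ambient and vaporization temperatures (K)

        # Thermophysical properties of PMMA, sampled with an assumed 5% relative uncertainty.
        rho = rng.normal(1180.0, 0.05 * 1180.0, n)     # density (kg/m^3)
        Cp  = rng.normal(1466.0, 0.05 * 1466.0, n)     # specific heat (J/kg/K)
        Lv  = rng.normal(1.0e6, 0.05 * 1.0e6, n)       # latent heat of vaporization (J/kg)

        depth = P / (w * v * rho * (Cp * (Tv - T0) + Lv))   # metres
        print("mean depth = %.1f um, std = %.1f um" % (1e6 * depth.mean(), 1e6 * depth.std()))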

  14. Cost-effectiveness of a statewide falls prevention program in Pennsylvania: Healthy Steps for Older Adults.

    PubMed

    Albert, Steven M; Raviotta, Jonathan; Lin, Chyongchiou J; Edelstein, Offer; Smith, Kenneth J

    2016-10-01

    Pennsylvania's Department of Aging has offered a falls prevention program, "Healthy Steps for Older Adults" (HSOA), since 2005, with about 40,000 older adults screened for falls risk. In 2010 to 2011, older adults 50 years or older who completed HSOA (n = 814) had an 18% reduction in falls incidence compared with a comparison group that attended the same senior centers (n = 1019). We examined the effect of HSOA on hospitalization and emergency department (ED) treatment, and estimated the potential cost savings. Decision-tree analysis. The following were included in a decision-tree model based on a prior longitudinal cohort study: costs of the intervention, number of falls, frequency and costs of ED visits and hospitalizations, and self-reported quality of life of individuals in each outcome condition. A Monte Carlo probabilistic sensitivity analysis assigned appropriate distributions to all input parameters and evaluated model results over 500 iterations. The model included all ED and hospitalization episodes rather than just episodes linked to falls. Over 12 months of follow-up, 11.3% of the HSOA arm and 14.8% of the comparison group experienced 1 or more hospitalizations (P = .04). HSOA participants had less hospital care when matched for falls status. Observed values suggest expected costs per participant of $3013 in the HSOA arm and $3853 in the comparison condition, an average savings of $840 per person. Results were confirmed in Monte Carlo simulations ($3164 vs $3882, savings of $718). The savings of $718 to $840 per person is comparable to reports from other falls prevention economic evaluations. The advantages of HSOA include its statewide reach and integration with county aging services.

  15. Search for (W/Z → jets) + γ Events in Proton-Antiproton Collisions at the Fermilab Tevatron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bocci, Andrea

    We present a study of the p¯p → W(Z)γ → γq¯q process at the center-of-mass energy √s = 1.96 TeV using data collected by the Collider Detector at Fermilab. The analysis is based on the selection of low transverse momentum photons produced in association with at least two jets. A modification of an existing photon trigger was studied and implemented in the data acquisition system to enhance the sensitivity of this analysis. The data presented are from approximately 184 pb⁻¹ of integrated luminosity collected by this new trigger. A preliminary event sample is obtained requiring a central photon with E_T > 12 GeV and two jets with E_T > 15 GeV. The corresponding efficiency is studied using a Monte Carlo simulation of the W(Z)γ → γq¯q process based on Standard Model predictions. Monte Carlo estimation of the background is not necessary as it is measured from the data. A more advanced selection based on a Neural Network method improves the signal-to-noise ratio from 1/333 to 1/71, and further optimization of the dijet mass search region increases the ratio to its final value of 1/41. No evidence of a W/Z → q¯q peak in the dijet mass distribution is visible when the background contribution is subtracted. Using a fully Bayesian approach, the 95% confidence level upper limit on σ(p¯p → Wγ) x Β(W → q¯q) + σ(p¯p → Zγ) x Β(Z → q¯q) is calculated to be 54 pb, which is consistent with the Standard Model prediction of 20.5 pb.
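
    For illustration, the sketch below shows a generic Bayesian 95% CL upper-limit construction for a Poisson counting experiment with known background and a flat prior on the signal yield; the observed count, background, and sensitivity factor are invented and are not the values of this analysis.

        # Bayesian 95% CL upper limit on a Poisson signal with known background and a
        # flat prior on the signal yield (illustrative numbers only).
        import numpy as np
        from scipy.stats import poisson

        n_obs = 25              # observed events in the search window (illustrative)
        b = 22.0                # expected background (illustrative)
        events_per_pb = 0.5     # signal events per pb of cross section (illustrative)

        s_grid = np.linspace(0.0, 60.0, 6001)        # candidate signal yields
        ds = s_grid[1] - s_grid[0]
        posterior = poisson.pmf(n_obs, b + s_grid)   # likelihood times flat prior
        posterior /= posterior.sum() * ds            # normalize to a density in s

        cdf = np.cumsum(posterior) * ds
        s_up = s_grid[np.searchsorted(cdf, 0.95)]    # 95% credible upper bound on the yield
        print("95%% CL upper limit: %.1f events, i.e. %.1f pb" % (s_up, s_up / events_per_pb))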

  16. Cost-effectiveness analysis of influenza and pneumococcal vaccination for Hong Kong elderly in long-term care facilities.

    PubMed

    You, J H S; Wong, W C W; Ip, M; Lee, N L S; Ho, S C

    2009-11-01

    To compare cost and quality-adjusted life-years (QALYs) gained by influenza vaccination with or without pneumococcal vaccination in the elderly living in long-term care facilities (LTCFs). Cost-effectiveness analysis based on Markov modelling over 5 years, from a Hong Kong public health provider's perspective, on a hypothetical cohort of LTCF residents aged ≥65 years. Benefit-cost ratio (BCR) and net present value (NPV) of two vaccination strategies versus no vaccination were estimated. The cost and QALYs gained by two vaccination strategies were compared by Student's t-test in probabilistic sensitivity analysis (10,000 Monte Carlo simulations). Both vaccination strategies had high BCRs and NPVs (6.39 and US$334 for influenza vaccination; 5.10 and US$332 for influenza plus pneumococcal vaccination). In base case analysis, the two vaccination strategies were expected to cost less and gain higher QALYs than no vaccination. In probabilistic sensitivity analysis, the cost of combined vaccination and influenza vaccination was significantly lower (p<0.001) than the cost of no vaccination. Both vaccination strategies gained significantly higher (p<0.001) QALYs than no vaccination. The QALYs gained by combined vaccination were significantly higher (p = 0.030) than those gained by influenza vaccination alone. The total cost of combined vaccination was significantly lower (p = 0.011) than that of influenza vaccination. Influenza vaccination with or without pneumococcal vaccination appears to be less costly with higher QALYs gained than no vaccination, over a 5-year period, for elderly people living in LTCFs from the perspective of a Hong Kong public health organisation. Combined vaccination was more likely to gain higher QALYs with lower total cost than influenza vaccination alone.
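
    A Markov cohort model of the kind used here advances a state-occupancy vector through a transition matrix each cycle and accumulates discounted costs and QALYs. The sketch below uses three invented health states and hypothetical transition probabilities, costs, and utilities; it is not the study's model.

        # Cohort Markov model sketch: states Well, Influenza, Dead, run over 5 annual
        # cycles with hypothetical transition probabilities, costs, and utilities.
        import numpy as np

        P = np.array([[0.90, 0.08, 0.02],    # from Well
                      [0.70, 0.20, 0.10],    # from Influenza
                      [0.00, 0.00, 1.00]])   # Dead is absorbing

        cost    = np.array([100.0, 2000.0, 0.0])   # annual cost per state (USD, hypothetical)
        utility = np.array([0.80, 0.50, 0.0])      # annual utility per state (hypothetical)
        discount = 0.03

        state = np.array([1.0, 0.0, 0.0])          # whole cohort starts in Well
        total_cost = total_qaly = 0.0
        for year in range(5):
            df = 1.0 / (1.0 + discount) ** year
            total_cost += df * state @ cost
            total_qaly += df * state @ utility
            state = state @ P                      # advance the cohort one cycle

        print("expected 5-year cost per person: $%.0f" % total_cost)
        print("expected 5-year QALYs per person: %.3f" % total_qaly)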

  17. An uncertainty and sensitivity analysis approach for GIS-based multicriteria landslide susceptibility mapping.

    PubMed

    Feizizadeh, Bakhtiar; Blaschke, Thomas

    2014-03-04

    GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation yielded poor results.
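
    One of the steps described, propagating weight uncertainty through a weighted linear combination (WLC), can be sketched as follows; the criterion layers and weight distributions are random stand-ins rather than the Urmia basin data.

        # Weighted linear combination (WLC) with Monte Carlo weight uncertainty.
        import numpy as np

        rng = np.random.default_rng(1)
        n_cells, n_criteria = 10_000, 4
        criteria = rng.random((n_cells, n_criteria))        # standardized criterion scores in [0, 1]

        w_mean = np.array([0.40, 0.30, 0.20, 0.10])         # nominal criterion weights
        w_sd   = 0.05 * np.ones(n_criteria)

        n_runs = 500
        susceptibility = np.empty((n_runs, n_cells))
        for r in range(n_runs):
            w = np.clip(rng.normal(w_mean, w_sd), 0.0, None)
            w /= w.sum()                                    # weights must sum to 1
            susceptibility[r] = criteria @ w                # WLC score per cell

        mean_map = susceptibility.mean(axis=0)              # expected susceptibility
        uncert_map = susceptibility.std(axis=0)             # per-cell uncertainty from the weights
        print("mean score range: %.3f-%.3f" % (mean_map.min(), mean_map.max()))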

  18. A Multi-scale Approach for CO2 Accounting and Risk Analysis in CO2 Enhanced Oil Recovery Sites

    NASA Astrophysics Data System (ADS)

    Dai, Z.; Viswanathan, H. S.; Middleton, R. S.; Pan, F.; Ampomah, W.; Yang, C.; Jia, W.; Lee, S. Y.; McPherson, B. J. O. L.; Grigg, R.; White, M. D.

    2015-12-01

    Using carbon dioxide in enhanced oil recovery (CO2-EOR) is a promising technology for emissions management because CO2-EOR can dramatically reduce carbon sequestration costs in the absence of greenhouse gas emissions policies that include incentives for carbon capture and storage. This study develops a multi-scale approach to perform CO2 accounting and risk analysis for understanding CO2 storage potential within an EOR environment at the Farnsworth Unit of the Anadarko Basin in northern Texas. A set of geostatistical-based Monte Carlo simulations of CO2-oil-water flow and transport in the Marrow formation are conducted for global sensitivity and statistical analysis of the major risk metrics: CO2 injection rate, CO2 first breakthrough time, CO2 production rate, cumulative net CO2 storage, cumulative oil and CH4 production, and water injection and production rates. A global sensitivity analysis indicates that reservoir permeability, porosity, and thickness are the major intrinsic reservoir parameters that control net CO2 injection/storage and oil/CH4 recovery rates. The well spacing (the distance between the injection and production wells) and the sequence of alternating CO2 and water injection are the major operational parameters for designing an effective five-spot CO2-EOR pattern. The response surface analysis shows that net CO2 injection rate increases with the increasing reservoir thickness, permeability, and porosity. The oil/CH4 production rates are positively correlated to reservoir permeability, porosity and thickness, but negatively correlated to the initial water saturation. The mean and confidence intervals are estimated for quantifying the uncertainty ranges of the risk metrics. The results from this study provide useful insights for understanding the CO2 storage potential and the corresponding risks of commercial-scale CO2-EOR fields.

  19. Development of a Feedstock-to-Product Chain Model for Densified Biomass Pellets

    NASA Astrophysics Data System (ADS)

    McPherrin, Daniel

    The Q’Pellet is a spherical, torrefied biomass pellet currently under development. It aims to improve on the shortcomings of commercially available cylindrical white and torrefied pellets. A spreadsheet-based model was developed to allow for techno-economic analysis and simplified life cycle analysis of Q’Pellets, torrefied pellets and white pellets. A case study was developed to compare white pellet, torrefied pellet and Q’Pellet production based on their internal rates of return and life cycle greenhouse gas emissions. The case study was based on a commercial-scale plant built in Williams Lake, BC, with product delivery in Rotterdam, Netherlands. Q’Pellets had the highest modelled internal rate of return, at 12.7%, with white pellets at 11.1% and torrefied pellets at 8.0%. The simplified life cycle analysis showed that Q’Pellets had the lowest life cycle greenhouse gas emissions of the three products, 6.96 kgCO2eq/GJ, compared to 21.50 kgCO2eq/GJ for white pellets and 10.08 kgCO2eq/GJ for torrefied pellets. At these levels of life cycle greenhouse gas emissions, white pellets are above the maximum life cycle emissions to be considered sustainable under EU regulations. Sensitivity analysis was performed on the model by modifying input variables, and showed that white pellets are more sensitive to uncontrollable market variables, especially pellet sale prices, raw biomass prices and transportation costs. Monte Carlo analysis was also performed, which showed that white pellet production is less predictable and more likely to lead to a negative internal rate of return than Q’Pellet production.

  20. E Pluribus Analysis: Applying a Superforecasting Methodology to the Detection of Homegrown Violence

    DTIC Science & Technology

    2018-03-01

    actor violence and a set of predefined decision-making protocols. This research included running four simulations using the Monte Carlo technique.

  1. A probabilistic approach to emissions from transportation sector in the coming decades

    NASA Astrophysics Data System (ADS)

    Yan, F.; Winijkul, E.; Bond, T. C.; Streets, D. G.

    2010-12-01

    Future emission estimates are necessary for understanding climate change, designing national and international strategies for air quality control, and evaluating mitigation policies. Emission inventories are uncertain and future projections even more so. Most current emission projection models are deterministic; in other words, there is only a single answer for each scenario. As a result, uncertainties have not been included in the estimation of climate forcing or other environmental effects, but it is important to quantify the uncertainty inherent in emission projections. We explore uncertainties of emission projections from the transportation sector in the coming decades by sensitivity analysis and Monte Carlo simulations. These projections are based on a technology-driven model, the Speciated Pollutants Emission Wizard (SPEW)-Trend, which responds to socioeconomic conditions in different economic and mitigation scenarios. The model contains detail about technology stock, including consumption growth rates, retirement rates, timing of emission standards, deterioration rates and transition rates from normal vehicles to vehicles with extremely high emission factors (termed “superemitters”). However, understanding of these parameters, as well as their relationships with socioeconomic conditions, is uncertain. We project emissions from the transportation sector under four different IPCC scenarios (A1B, A2, B1, and B2). Due to the later implementation of advanced emission standards, Africa has the highest annual growth rate (1.2-3.1%) from 2010 to 2050. Superemitters begin producing more than 50% of global emissions around year 2020. We estimate uncertainties from the relationships between technological change and socioeconomic conditions and examine their impact on future emissions. Sensitivities to parameters governing retirement rates are highest, causing changes in global emissions from -26% to +55% on average from 2010 to 2050. We perform Monte Carlo simulations to examine how these uncertainties affect total emissions when each uncertain input parameter is replaced by a probability distribution and all parameters vary simultaneously; the 95% confidence interval of the global emission annual growth rate is -1.9% to +0.2% per year.

  2. Estimated medical cost reductions for paliperidone palmitate vs placebo in a randomized, double-blind relapse-prevention trial of patients with schizoaffective disorder.

    PubMed

    Joshi, K; Lin, J; Lingohr-Smith, M; Fu, D J

    2015-01-01

    The objective of this economic model was to estimate the difference in medical costs among patients treated with paliperidone palmitate once-monthly injectable antipsychotic (PP1M) vs placebo, based on clinical event rates reported in the 15-month randomized, double-blind, placebo-controlled, parallel-group study of paliperidone palmitate evaluating time to relapse in subjects with schizoaffective disorder. Rates of psychotic, depressive, and/or manic relapses and serious and non-serious treatment-emergent adverse events (TEAEs) were obtained from the long-term paliperidone palmitate vs placebo relapse prevention study. The total annual medical cost for a relapse from a US payer perspective was obtained from published literature and the costs for serious and non-serious TEAEs were based on Common Procedure Terminology codes. Total annual medical cost differences for patients treated with PP1M vs placebo were then estimated. Additionally, one-way and Monte Carlo sensitivity analyses were conducted. Lower rates of relapse (-18.3%) and serious TEAEs (-3.9%) were associated with use of PP1M vs placebo as reported in the long-term paliperidone palmitate vs placebo relapse prevention study. As a result of the reduction in these clinical event rates, the total annual medical cost was reduced by $7140 per patient treated with PP1M vs placebo. One-way sensitivity analysis showed that variations in relapse rates had the greatest impact on the estimated medical cost differences (range: -$9786, -$4670). Of the 10,000 random cycles of Monte Carlo simulations, 100% showed a medical cost difference <$0 (reduction) for patients using PP1M vs placebo. The average total annual medical cost differences per patient were -$8321 for PP1M monotherapy and -$6031 for PP1M adjunctive therapy. Use of PP1M for treatment of patients with schizoaffective disorder was associated with a significantly lower rate of relapse and a reduction in medical costs compared to placebo. Further evaluation in the real-world setting is warranted.

  3. TH-A-19A-11: Validation of GPU-Based Monte Carlo Code (gPMC) Versus Fully Implemented Monte Carlo Code (TOPAS) for Proton Radiation Therapy: Clinical Cases Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giantsoudi, D; Schuemann, J; Dowdell, S

    Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to that of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) versus a fully implemented proton therapy MCS code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases, due to anatomical geometrical complexity (air cavities and density heterogeneities) making dose calculation very challenging, and prostate cases, due to the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS were used to calculate 3-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed by less than 2% between TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed a passing rate of the gamma index for the target of more than 99%, the fifth having a gamma index passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and a fully implemented proton therapy MC code for a group of dosimetrically challenging patient cases.
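
    The gamma index used for these comparisons combines a dose tolerance and a distance-to-agreement tolerance. The sketch below computes a simplified 1-D global gamma (2%/2 mm) on synthetic profiles purely to illustrate the metric; the clinical analysis is fully 3-D.

        # Simplified 1-D global gamma index (2%/2 mm) on synthetic dose profiles.
        import numpy as np

        x = np.linspace(0.0, 100.0, 401)                      # position (mm), 0.25 mm grid
        reference = np.exp(-((x - 50.0) / 15.0) ** 2)         # reference dose profile
        evaluated = 1.02 * np.exp(-((x - 50.5) / 15.0) ** 2)  # evaluated profile, shifted and scaled

        dose_tol, dist_tol = 0.02 * reference.max(), 2.0      # 2% global, 2 mm

        gamma = np.empty_like(reference)
        for i, (xi, di) in enumerate(zip(x, reference)):
            # Gamma at a reference point: minimum combined dose/distance disagreement
            # over all evaluated points.
            gamma[i] = np.sqrt(((x - xi) / dist_tol) ** 2 +
                               ((evaluated - di) / dose_tol) ** 2).min()

        passing = (gamma[reference > 0.10 * reference.max()] <= 1.0).mean()
        print("gamma passing rate (10%% threshold): %.1f%%" % (100 * passing))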

  4. Cross-platform validation and analysis environment for particle physics

    NASA Astrophysics Data System (ADS)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    2017-11-01

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  5. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
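
    The idea can be illustrated in a few lines: simulate data from a simple mediation model, test the indirect effect with a percentile bootstrap, and estimate power as the rejection rate across replications. This stripped-down Python sketch (with hypothetical path coefficients and small replication counts) only mirrors the general approach; it is not the authors' bmem R package.

        # Monte Carlo power estimate for the indirect effect a*b in a simple mediation
        # model, using a percentile bootstrap test.
        import numpy as np

        rng = np.random.default_rng(7)
        a, b, cp = 0.30, 0.30, 0.10      # population path coefficients (hypothetical)
        n, n_rep, n_boot = 100, 200, 200 # small counts so the sketch runs quickly

        def indirect(x, m, y):
            a_hat = np.polyfit(x, m, 1)[0]   # slope of M on X
            b_hat = np.linalg.lstsq(np.column_stack([m, x, np.ones(len(x))]),
                                    y, rcond=None)[0][0]  # slope of Y on M, controlling X
            return a_hat * b_hat

        rejections = 0
        for _ in range(n_rep):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)
            y = b * m + cp * x + rng.normal(size=n)
            boot = np.empty(n_boot)
            for k in range(n_boot):
                idx = rng.integers(0, n, n)          # resample cases with replacement
                boot[k] = indirect(x[idx], m[idx], y[idx])
            lo, hi = np.percentile(boot, [2.5, 97.5])
            rejections += (lo > 0) or (hi < 0)       # CI excludes zero -> significant

        print("estimated power:", rejections / n_rep)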

  6. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
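
    The first-order moment approximation referred to here propagates input variances through sensitivity derivatives, sigma_f^2 ≈ sum_i (df/dx_i)^2 sigma_i^2, and can be checked against Monte Carlo. The sketch below does this for a toy two-variable function standing in for the CFD output.

        # First-order moment propagation via sensitivity derivatives, checked against
        # Monte Carlo, for a toy nonlinear function.
        import numpy as np

        rng = np.random.default_rng(3)

        def f(x):                      # toy "CFD output" as a function of two inputs
            return x[0] ** 2 * np.sin(x[1]) + 3.0 * x[1]

        mu = np.array([1.0, 0.5])      # input means
        sigma = np.array([0.05, 0.02]) # input standard deviations (independent, normal)

        # Analytic first derivatives evaluated at the mean.
        grad = np.array([2.0 * mu[0] * np.sin(mu[1]),
                         mu[0] ** 2 * np.cos(mu[1]) + 3.0])

        var_first_order = np.sum(grad ** 2 * sigma ** 2)

        # Monte Carlo reference.
        samples = rng.normal(mu, sigma, size=(200_000, 2))
        var_mc = f(samples.T).var()

        print("first-order std: %.5f   Monte Carlo std: %.5f"
              % (np.sqrt(var_first_order), np.sqrt(var_mc)))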

  7. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time-consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be performed efficiently. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further saving computing effort. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
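
    The general pattern of surrogate-accelerated Bayesian inversion (fit a cheap emulator to a modest number of expensive runs, then run MCMC on the emulator) can be sketched as follows. The "expensive model", the polynomial surrogate standing in for BMARS, and the synthetic observation are all invented for illustration.

        # Surrogate-accelerated random-walk Metropolis sketch.
        import numpy as np

        rng = np.random.default_rng(11)

        def expensive_model(k):                       # stand-in for a MODFLOW run
            return 10.0 + 5.0 * np.log(k)             # "simulated head" at one well

        # 1) Training set from a modest number of expensive runs.
        k_train = rng.uniform(0.5, 5.0, 30)
        h_train = expensive_model(k_train)
        coeffs = np.polyfit(k_train, h_train, 3)      # cheap surrogate of the response
        surrogate = lambda k: np.polyval(coeffs, k)

        # 2) Metropolis-Hastings on the surrogate.
        h_obs, obs_sd = 13.0, 0.2                     # one observed head with noise
        def log_post(k):
            if not (0.5 <= k <= 5.0):                 # uniform prior bounds
                return -np.inf
            return -0.5 * ((surrogate(k) - h_obs) / obs_sd) ** 2

        chain = []
        k_cur = 2.0
        lp_cur = log_post(k_cur)
        for _ in range(20_000):
            k_prop = k_cur + rng.normal(0.0, 0.2)     # random-walk proposal
            lp_prop = log_post(k_prop)
            if np.log(rng.random()) < lp_prop - lp_cur:
                k_cur, lp_cur = k_prop, lp_prop
            chain.append(k_cur)

        chain = np.array(chain[5_000:])               # discard burn-in
        print("posterior mean k = %.3f, 95%% CI = %s"
              % (chain.mean(), np.round(np.percentile(chain, [2.5, 97.5]), 3)))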

  8. Comparative analysis of the secondary electron yield from carbon nanoparticles and pure water medium

    NASA Astrophysics Data System (ADS)

    Verkhovtsev, Alexey; McKinnon, Sally; de Vera, Pablo; Surdutovich, Eugene; Guatelli, Susanna; Korol, Andrei V.; Rosenfeld, Anatoly; Solov'yov, Andrey V.

    2015-04-01

    The production of secondary electrons generated by carbon nanoparticles and pure water medium irradiated by fast protons is studied by means of model approaches and Monte Carlo simulations. It is demonstrated that due to a prominent collective response to an external field, the nanoparticles embedded in the medium enhance the yield of low-energy electrons. The maximal enhancement is observed for electrons in the energy range where plasmons, which are excited in the nanoparticles, play the dominant role. Electron yield from a solid carbon nanoparticle composed of fullerite, a crystalline form of C60 fullerene, is demonstrated to be several times higher than that from liquid water. Decay of plasmon excitations in carbon-based nanosystems thus represents a mechanism of increase of the low-energy electron yield, similar to the case of sensitizing metal nanoparticles. This observation gives a hint for investigation of novel types of sensitizers to be composed of metallic and organic parts. Contribution to the Topical Issue "COST Action Nano-IBCT: Nano-scale Processes Behind Ion-Beam Cancer Therapy", edited by Andrey V. Solov'yov, Nigel Mason, Gustavo García and Eugene Surdutovich.

  9. Supplementary data of “Impacts of mesic and xeric urban vegetation on outdoor thermal comfort and microclimate in Phoenix, AZ”

    PubMed Central

    Song, Jiyun; Wang, Zhi-Hua

    2015-01-01

    An advanced Markov Chain Monte Carlo approach called Subset Simulation, described in Au and Beck (2001) [1], was used to quantify parameter uncertainty and model sensitivity of the urban land-atmospheric framework, viz. the coupled urban canopy model-single column model (UCM-SCM). The results show that the atmospheric dynamics are sensitive to land surface conditions. The most sensitive parameters are dimensional parameters, i.e. roof width, aspect ratio, and roughness length of heat and momentum, since these parameters control the magnitude of the sensible heat flux. The relatively insensitive parameters are hydrological parameters, since the lawns or green roofs in urban areas are regularly irrigated so that the water availability for evaporation is never constrained. PMID:26702421

  10. An empirical approach to estimate near-infra-red photon propagation and optically induced drug release in brain tissues

    NASA Astrophysics Data System (ADS)

    Prabhu Verleker, Akshay; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M.

    2015-03-01

    The purpose of this study is to develop an alternative empirical approach to estimate near-infra-red (NIR) photon propagation and quantify optically induced drug release in brain metastasis, without relying on computationally expensive Monte Carlo techniques (the gold standard). Targeted drug delivery with optically induced drug release is a noninvasive means to treat cancers and metastasis. This study is part of a larger project to treat brain metastasis by delivering lapatinib-drug-nanocomplexes and activating NIR-induced drug release. The empirical model was developed using a weighted approach to estimate photon scattering in tissues and calibrated using a GPU-based 3D Monte Carlo code. The empirical model was developed and tested against Monte Carlo in optical brain phantoms for pencil beams (width 1 mm) and broad beams (width 10 mm). The empirical algorithm was tested against the Monte Carlo for different albedos, along with the diffusion equation, in simulated brain phantoms resembling white matter (μs'=8.25 mm⁻¹, μa=0.005 mm⁻¹) and gray matter (μs'=2.45 mm⁻¹, μa=0.035 mm⁻¹) at a wavelength of 800 nm. The goodness of fit between the two models was determined using the coefficient of determination (R-squared analysis). Preliminary results show the empirical algorithm matches Monte Carlo simulated fluence over a wide range of albedo (0.7 to 0.99), while the diffusion equation fails for lower albedo. The photon fluence generated by the empirical code matched the Monte Carlo in homogeneous phantoms (R²=0.99). While the GPU-based Monte Carlo achieved 300X acceleration compared to earlier CPU-based models, the empirical code is 700X faster than the Monte Carlo for a typical super-Gaussian laser beam.

  11. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.

  12. Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems.

    PubMed

    Wolf, Elizabeth Skubak; Anderson, David F

    2015-01-21

    Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.

  13. An analysis on the theory of pulse oximetry by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Fan, Shangchun; Cai, Rui; Xing, Weiwei; Liu, Changting; Chen, Guangfei; Wang, Junfeng

    2008-10-01

    The pulse oximeter is a kind of electronic instrument that measures the oxygen saturation of arterial blood and the pulse rate by non-invasive techniques. It enables prompt recognition of hypoxemia. In a conventional transmittance-type pulse oximeter, the absorption of light by oxygenated and reduced hemoglobin is measured at two wavelengths, 660 nm and 940 nm. But the accuracy and measuring range of the pulse oximeter cannot meet the requirements of clinical application. There are limitations in the theory of pulse oximetry, which is demonstrated by the Monte Carlo method. The mean paths are calculated in the Monte Carlo simulation. The results show that the mean paths are not the same at the different wavelengths.

  14. Three-Dimensional Dosimetric Validation of a Magnetic Resonance Guided Intensity Modulated Radiation Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rankine, Leith J., E-mail: Leith_Rankine@med.unc.edu; Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina; Mein, Stewart

    Purpose: To validate the dosimetric accuracy of a commercially available magnetic resonance guided intensity modulated radiation therapy (MRgIMRT) system using a hybrid approach: 3-dimensional (3D) measurements and Monte Carlo calculations. Methods and Materials: We used PRESAGE radiochromic plastic dosimeters with remote optical computed tomography readout to perform 3D high-resolution measurements, following a novel remote dosimetry protocol. We followed the intensity modulated radiation therapy commissioning recommendations of American Association of Physicists in Medicine Task Group 119, adapted to incorporate 3D data. Preliminary tests (“AP” and “3D-Bands”) were delivered to 9.5-cm usable diameter cylindrical PRESAGE dosimeters to validate the treatment planning system (TPS) for nonmodulated deliveries; assess the sensitivity, uniformity, and rotational symmetry of the PRESAGE dosimeters; and test the robustness of the remote dosimetry protocol. Following this, 4 clinical MRgIMRT plans (“MultiTarget,” “Prostate,” “Head/Neck,” and “C-Shape”) were measured using 13-cm usable diameter PRESAGE dosimeters. For all plans, 3D-γ (3% or 3 mm global, 10% threshold) passing rates were calculated and 3D-γ maps were examined. Point doses were measured with an IBA-CC01 ionization chamber for validation of absolute dose. Finally, by use of an in-house-developed, GPU-accelerated Monte Carlo algorithm (gPENELOPE), we independently calculated dose for all 6 Task Group 119 plans and compared against the TPS. Results: For PRESAGE measurements, 3D-γ analysis yielded passing rates of 98.7%, 99.2%, 98.5%, 98.0%, 99.2%, and 90.7% for AP, 3D-Bands, MultiTarget, Prostate, Head/Neck, and C-Shape, respectively. Ion chamber measurements were within an average of 0.5% (±1.1%) from the TPS dose. Monte Carlo calculations demonstrated good agreement with the TPS, with a mean 3D-γ passing rate of 98.5% ± 1.9% using a stricter 2%/2-mm criterion. Conclusions: We have validated the dosimetric accuracy of a commercial MRgIMRT system using high-resolution 3D techniques. We have demonstrated for the first time that hybrid 3D remote dosimetry is a comprehensive and feasible approach to commissioning MRgIMRT. This may provide better sensitivity in error detection compared with standard 2-dimensional measurements and could be used when implementing complex new magnetic resonance guided radiation therapy technologies.

  15. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    NASA Astrophysics Data System (ADS)

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model's capability for simulating/predicting water resources.

  16. Fast orthogonal transforms and generation of Brownian paths

    PubMed Central

    Leobacher, Gunther

    2012-01-01

    We present a number of fast constructions of discrete Brownian paths that can be used as alternatives to principal component analysis and Brownian bridge for stratified Monte Carlo and quasi-Monte Carlo. By fast we mean that a path of length n can be generated in O(n log n) floating point operations. We highlight some of the connections between the different constructions and we provide some numerical examples. PMID:23471545
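
    For context, the classic Brownian bridge construction that these fast orthogonal-transform methods are alternatives to maps n i.i.d. standard normals to a discrete Brownian path by fixing the endpoint first and then filling in midpoints level by level. A minimal sketch (n a power of two):

        # Brownian bridge construction of a discrete Brownian path on [0, T]: maps n
        # i.i.d. standard normals to W(t_1), ..., W(t_n).
        import numpy as np

        def brownian_bridge_path(z, T=1.0):
            """Build a Brownian path at n = len(z) equally spaced times from normals z
            (n must be a power of two)."""
            n = len(z)
            w = np.zeros(n + 1)              # w[k] = W(k*T/n), with W(0) = 0
            w[n] = np.sqrt(T) * z[0]         # endpoint first
            h, used = n, 1
            while h > 1:                     # refine midpoints level by level
                for left in range(0, n, h):
                    mid, right = left + h // 2, left + h
                    mean = 0.5 * (w[left] + w[right])
                    sd = np.sqrt(h * T / n / 4.0)   # conditional std of the bridge midpoint
                    w[mid] = mean + sd * z[used]
                    used += 1
                h //= 2
            return w[1:]

        rng = np.random.default_rng(5)
        path = brownian_bridge_path(rng.standard_normal(8))
        print(np.round(path, 3))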

  17. SABRINA: an interactive three-dimensional geometry-modeling program for MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, J.T. III

    SABRINA is a fully interactive three-dimensional geometry-modeling program for MCNP, a Los Alamos Monte Carlo code for neutron and photon transport. In SABRINA, a user constructs either body geometry or surface geometry models and debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis. 2 refs., 33 figs.

  18. Detection of Buried Mines and Unexploded Ordnance (UXO)

    DTIC Science & Technology

    2007-04-20

  19. Monte Carlo simulation of a photodisintegration of ³H experiment in Geant4

    NASA Astrophysics Data System (ADS)

    Gray, Isaiah

    2013-10-01

    An upcoming experiment involving photodisintegration of ³H at the High Intensity Gamma-Ray Source facility at Duke University has been simulated in the software package Geant4. CAD models of silicon detectors and wire chambers were imported from Autodesk Inventor using the program FastRad and the Geant4 GDML importer. Sensitive detectors were associated with the appropriate logical volumes in the exported GDML file so that changes in detector geometry will be easily manifested in the simulation. Probability distribution functions for the energy and direction of outgoing protons were generated using numerical tables from previous theory, and energies and directions were sampled from these distributions using a rejection sampling algorithm. The simulation will be a useful tool to optimize detector geometry, estimate background rates, and test data analysis algorithms. This work was supported by the Triangle Universities Nuclear Laboratory REU program at Duke University.
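
    Rejection sampling from a tabulated distribution, as used here for the outgoing-proton energies and directions, can be sketched as follows; the tabulated angular density below is made up for illustration.

        # Rejection sampling from a tabulated 1-D density (made-up table).
        import numpy as np

        rng = np.random.default_rng(9)

        # Tabulated (angle in degrees, relative probability density) pairs, made up.
        theta_tab = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0])
        pdf_tab   = np.array([0.2, 0.6, 1.0, 0.8, 0.5, 0.3, 0.1])

        def sample_angle(n):
            pdf_max = pdf_tab.max()
            out = []
            while len(out) < n:
                theta = rng.uniform(theta_tab[0], theta_tab[-1])   # propose uniformly
                p = np.interp(theta, theta_tab, pdf_tab)           # tabulated density at proposal
                if rng.uniform(0.0, pdf_max) < p:                  # accept with probability p/pdf_max
                    out.append(theta)
            return np.array(out)

        angles = sample_angle(10_000)
        print("mean angle = %.1f deg" % angles.mean())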

  20. The method of pulsed x-ray detection with a diode laser.

    PubMed

    Liu, Jun; Ouyang, Xiaoping; Zhang, Zhongbing; Sheng, Liang; Chen, Liang; Tan, Xinjian; Weng, Xiufeng

    2016-12-01

    A new class of pulsed X-ray detection methods, based on sensing carrier changes in a diode laser cavity, has been presented and demonstrated. Proof-of-principle experiments on detecting the pulsed X-ray temporal profile have been done with a diode laser with a multiple-quantum-well active layer. The results show that our method can achieve the aim of detecting the temporal profile of a pulsed X-ray source. We predict, by analyzing the carrier rate equation, that there is a minimum value for the pre-bias current of the diode laser, which in experiments lies near the threshold current of the diode laser chip. This behaviour generally agrees with the theoretical analysis. The relative sensitivity is estimated at about 3.3 × 10⁻¹⁷ C·cm². We have analyzed the approximately 10 ps response time scale with both rate-equation and Monte Carlo methods.
