Xu, Ying; Cohen Hubal, Elaine A.; Little, John C.
2010-01-01
Background: Because of the ubiquitous nature of phthalates in the environment and the potential for adverse human health effects, an urgent need exists to identify the most important sources and pathways of exposure. Objectives: Using emissions of di(2-ethylhexyl) phthalate (DEHP) from vinyl flooring (VF) as an illustrative example, we describe a fundamental approach that can be used to identify the important sources and pathways of exposure associated with phthalates in indoor materials. Methods: We used a three-compartment model to estimate the emission rate of DEHP from VF and the evolving exposures via inhalation, dermal absorption, and oral ingestion of dust in a realistic indoor setting. Results: A sensitivity analysis indicates that the VF source characteristics (surface area and material-phase concentration of DEHP), as well as the external mass-transfer coefficient and ventilation rate, are important variables that influence the steady-state DEHP concentration and the resulting exposure. In addition, DEHP is sorbed by interior surfaces, and the associated surface area and surface/air partition coefficients strongly influence the time to steady state. The roughly 40-fold range in predicted exposure reveals the inherent difficulty in using biomonitoring to identify specific sources of exposure to phthalates in the general population. Conclusions: The relatively simple dependence on source and chemical-specific transport parameters suggests that the mechanistic modeling approach could be extended to predict exposures arising from other sources of phthalates as well as additional sources of other semivolatile organic compounds (SVOCs) such as biocides and flame retardants. This modeling approach could also provide a relatively inexpensive way to quantify exposure to many of the SVOCs used in indoor materials and consumer products. PMID:20123613
Davis, Jonathan H.
2015-03-09
Future multi-tonne Direct Detection experiments will be sensitive to solar neutrino induced nuclear recoils, which form an irreducible background to light Dark Matter searches. Indeed, for masses around 6 GeV the spectra of neutrinos and Dark Matter are so similar that experiments are said to run into a neutrino floor, for which sensitivity increases only marginally with exposure past a certain cross section. In this work we show that this floor can be overcome using the different annual modulation expected from solar neutrinos and Dark Matter. Specifically, for cross sections below the neutrino floor the DM signal is observable through a phase shift and a smaller amplitude for the time-dependent event rate. This allows the exclusion power to be improved by up to an order of magnitude for large exposures. In addition, we demonstrate that, using only spectral information, the neutrino floor exists over a wider mass range than has been previously shown, since the large uncertainties in the Dark Matter velocity distribution make the signal spectrum harder to distinguish from the neutrino background. However, for most velocity distributions it can still be surpassed using timing information, and so the neutrino floor is not an absolute limit on the sensitivity of Direct Detection experiments.
Uncertainty and Sensitivity Analyses Plan
Simpson, J.C.; Ramsdell, J.V. Jr.
1993-04-01
Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
Sensitivity and Uncertainty Analysis Shell
1999-04-20
SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: (1) a computer code is developed or acquired that models some process for which input is uncertain, and the user is interested in statistical analysis of the output of that code; (2) the statistical analysis of interest can be accomplished using Monte Carlo analysis. The implementation then requires that the user identify which inputs to the process model are to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample, and the user-supplied process model analyzes the sample. The SUNS post-processor displays statistical results from any existing file that contains sampled input and output values.
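The loose-coupling workflow described above can be illustrated with a minimal sketch (the function names, distributions, and model below are hypothetical stand-ins, not SUNS code): a sampler draws the uncertain inputs, a user-supplied process model evaluates each sample, and the collected outputs are summarized statistically.

```python
import random
import statistics

def process_model(x, y):
    """Hypothetical user-supplied process model (a stand-in for the
    real simulation code that SUNS would be loosely coupled with)."""
    return 3.0 * x + x * y

def monte_carlo(n_samples, seed=0):
    """Generate the statistical sample and run the process model on it."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        # Sample the uncertain inputs from assumed distributions.
        x = rng.gauss(1.0, 0.1)       # e.g. a normally distributed input
        y = rng.uniform(0.5, 1.5)     # e.g. a uniformly distributed input
        outputs.append(process_model(x, y))
    return outputs

out = monte_carlo(5000)
print("mean =", statistics.mean(out), "stdev =", statistics.stdev(out))
```

In SUNS the sampling and the process model run as separate programs that exchange files of sampled inputs and outputs; the loop above collapses that file exchange into a single script for clarity.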
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) play increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part, or has not differentiated verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps along with other physical parameters of interest, the simulation is allowed to run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2008-09-01
This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. Contrary to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivities are solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to that of the “black box” approach with only a few runs to cover a large uncertainty region. Because only a small number of runs is required, those runs can be done with high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty analysis. By knowing the relative sensitivity of time and space steps along with other physical parameters of interest, the simulation can be run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results.
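The forward sensitivity construction can be illustrated on a toy scalar ODE (a hedged sketch, not the authors' reactor-system code): differentiating dy/dt = -k*y with respect to the parameter k yields a same-size auxiliary system for the sensitivity s = dy/dk, which is integrated alongside the original model.

```python
def forward_sensitivity(k=0.5, y0=1.0, t_end=2.0, dt=1e-4):
    """Integrate dy/dt = -k*y together with its forward sensitivity
    s = dy/dk.  Differentiating the model equation with respect to k
    gives the same-size auxiliary system ds/dt = -y - k*s, with s(0)=0."""
    n = int(round(t_end / dt))
    y, s = y0, 0.0
    for _ in range(n):
        dy = -k * y          # original model equation
        ds = -y - k * s      # sensitivity equation (d/dk of the model)
        y += dt * dy         # explicit Euler step for the state
        s += dt * ds         # explicit Euler step for the sensitivity
    return y, s

y, s = forward_sensitivity()
# Analytic check: y = y0*exp(-k*t) and s = dy/dk = -t*y0*exp(-k*t)
```

The chain rule then turns s into the sensitivity of any derived output; for a real code the auxiliary system is assembled per parameter, one extra system of the original size each.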
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2013-01-01
This paper presents the extended forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps along with other physical parameters of interest, the simulation is allowed to run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time and space step forward sensitivity analysis method can also replace the traditional time step and grid convergence study with much less computational cost. Two well-defined benchmark problems with manufactured solutions are utilized to demonstrate the method.
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes and could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrating statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivity are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094
Uncertainty Quantification of Equilibrium Climate Sensitivity
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Brandon, S. T.; Covey, C. C.; Domyancic, D. M.; Johannesson, G.; Klein, R.; Tannahill, J.; Zhang, Y.
2011-12-01
Significant uncertainties exist in the temperature response of the climate system to changes in the levels of atmospheric carbon dioxide. We report progress to quantify the uncertainties of equilibrium climate sensitivity using perturbed parameter ensembles of the Community Earth System Model (CESM). Through a strategic initiative at the Lawrence Livermore National Laboratory, we have been developing uncertainty quantification (UQ) methods and incorporating them into a software framework called the UQ Pipeline. We have applied this framework to generate a large number of ensemble simulations using Latin Hypercube and other schemes to sample up to three dozen uncertain parameters in the atmospheric (CAM) and sea ice (CICE) model components of CESM. The parameters sampled are related to many highly uncertain processes, including deep and shallow convection, boundary layer turbulence, cloud optical and microphysical properties, and sea ice albedo. An extensive ensemble database comprising more than 46,000 simulated climate-model-years of recent climate conditions has been assembled. This database is being used to train surrogate models of CESM responses and to perform statistical calibrations of the CAM and CICE models given observational data constraints. The calibrated models serve as a basis for propagating uncertainties forward through climate change simulations using a slab ocean model configuration of CESM. This procedure is being used to quantify the probability density function of equilibrium climate sensitivity accounting for uncertainties in climate model processes. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013. (LLNL-ABS-491765)
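Latin Hypercube sampling, mentioned above, can be sketched in a few lines (an illustrative stand-alone implementation in the unit hypercube, not the UQ Pipeline itself): each parameter's range is divided into n equal-probability bins, one point is drawn per bin, and the bin order is shuffled independently for each parameter so every one-dimensional margin is evenly stratified.

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Return an n_samples x n_params Latin Hypercube design on [0, 1).

    For each parameter, stratify [0, 1) into n_samples bins, draw one
    uniform point inside each bin, and shuffle the bin order so the
    pairing of bins across parameters is random."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        bins = list(range(n_samples))
        rng.shuffle(bins)
        columns.append([(b + rng.random()) / n_samples for b in bins])
    # Assemble row-wise samples: one value per parameter per sample.
    return [[columns[p][i] for p in range(n_params)]
            for i in range(n_samples)]

design = latin_hypercube(10, 3)
```

Scaling each column through the inverse CDF of a parameter's assumed distribution then yields the physical parameter values fed to the ensemble runs.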
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Hansen, Clifford W.; Martin, Curtis E.
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct, and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current, and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found that uncertainty in the models for POA irradiance and effective irradiance are the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
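The residual-sampling propagation scheme can be sketched as follows (the stage models, coefficients, and residual values below are hypothetical placeholders, not the paper's calibrated models): each stage applies its deterministic model and adds a draw from that stage's empirical residual distribution, and repeating the chain many times yields an empirical distribution of system output.

```python
import random
import statistics

# Hypothetical three-stage model chain; real analyses use six stages.
def stage_poa(ghi):            # plane-of-array irradiance (placeholder)
    return 1.1 * ghi

def stage_dc(poa):             # DC power (placeholder)
    return 0.18 * poa

def stage_ac(dc):              # inverter DC-to-AC conversion (placeholder)
    return 0.96 * dc

# Assumed empirical residuals per stage, e.g. from validation data.
residuals = {
    "poa": [-20.0, -5.0, 0.0, 5.0, 20.0],
    "dc":  [-3.0, 0.0, 3.0],
    "ac":  [-1.0, 0.0, 1.0],
}

def sample_output(ghi, rng):
    """One Monte Carlo realization of the model chain: every stage
    adds a residual drawn from its empirical distribution."""
    poa = stage_poa(ghi) + rng.choice(residuals["poa"])
    dc = stage_dc(poa) + rng.choice(residuals["dc"])
    return stage_ac(dc) + rng.choice(residuals["ac"])

rng = random.Random(1)
outputs = [sample_output(800.0, rng) for _ in range(2000)]
```

The spread of `outputs` is the propagated system-level uncertainty; comparing the spread with individual stages' residuals switched off identifies the dominant contributors, as done for the POA and effective irradiance models above.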
Uncertainty and Sensitivity in Surface Dynamics Modeling
NASA Astrophysics Data System (ADS)
Kettner, Albert J.; Syvitski, James P. M.
2016-05-01
Papers in this special issue on 'Uncertainty and Sensitivity in Surface Dynamics Modeling' stem from papers submitted after the 2014 annual meeting of the Community Surface Dynamics Modeling System (CSDMS). CSDMS facilitates a diverse community of experts (now in 68 countries) that collectively investigates the Earth's surface, the dynamic interface between lithosphere, hydrosphere, cryosphere, and atmosphere, by promoting, developing, supporting, and disseminating integrated open-source software modules. By organizing more than 1500 researchers, CSDMS has the privilege of identifying community strengths and weaknesses in the practice of software development. We recognize, for example, that progress has been slow on identifying and quantifying uncertainty and sensitivity in numerical modeling of the Earth's surface dynamics. This special issue is meant to raise awareness of these important subjects and highlight state-of-the-art progress.
Temperature targets revisited under climate sensitivity uncertainty
NASA Astrophysics Data System (ADS)
Neubersch, Delf; Roth, Robert; Held, Hermann
2015-04-01
While the 2° target has become an official goal of the COP (Conference of the Parties) process, recent work has shown that it requires re-interpretation if climate sensitivity uncertainty is considered in combination with anticipated future learning (Schmidt et al., 2011). A strict probabilistic limit, as suggested by the Copenhagen diagnosis, may lead to conceptual flaws in view of future learning, such as a negative expected value of information or even ill-posed policy recommendations. Instead, Schmidt et al. suggest trading off the probabilistic transgression of a temperature target against mitigation-induced welfare losses and call this procedure cost-risk analysis (CRA). Here we spell out CRA for the integrated assessment model MIND and derive necessary conditions for the exact nature of that trade-off. With CRA at hand, the expected value of climate information for a given temperature target can, for the first time, be meaningfully assessed. When focusing on a linear risk function as the most conservative of all possible risk functions, we find that 2° target-induced mitigation costs could be reduced by up to 1/3 if the climate response to carbon dioxide emissions were known with certainty, amounting to hundreds of billions of Euros per year (Neubersch et al., 2014). Further benefits of CRA over strictly formulated temperature targets are discussed. References: D. Neubersch, H. Held, A. Otto, Operationalizing climate targets under learning: An application of cost-risk analysis, Climatic Change, 126 (3), 305-318, DOI 10.1007/s10584-014-1223-z (2014). M. G. W. Schmidt, A. Lorenz, H. Held, E. Kriegler, Climate Targets under Uncertainty: Challenges and Remedies, Climatic Change Letters, 104 (3-4), 783-791, DOI 10.1007/s10584-010-9985-4 (2011).
Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.
2012-07-01
Sampling-based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e., to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. Traditionally, this aleatoric part of the uncertainty is minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared with the impact from sampling the epistemic uncertainties. Obviously, this process may incur high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples, each with a much smaller number of Monte Carlo histories. The method is applied, along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va, to various critical assemblies and a full-scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a reduction of computing time by factors on the order of 100.
Techniques to quantify the sensitivity of deterministic model uncertainties
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar; Unwin, S.D.
1989-04-01
Several existing methods for the assessment of the sensitivity of output uncertainty distributions generated by deterministic computer models to the uncertainty distributions assigned to the input parameters are reviewed and new techniques are proposed. Merits and limitations of the various techniques are examined by detailed application to the suppression pool aerosol removal code (SPARC).
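One simple technique in this family can be shown as a hedged sketch (a generic importance measure, not necessarily among the specific methods reviewed in the paper): the Spearman rank correlation between sampled inputs and the resulting outputs indicates how strongly each input's uncertainty distribution drives the output distribution.

```python
import random

def rank(values):
    """Return 0-based ranks of values (no tie handling; inputs are
    continuous random draws, so ties are essentially impossible)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def rank_correlation(x, y):
    """Spearman rank correlation between an input sample x and the
    corresponding model outputs y (Pearson correlation of the ranks;
    without ties both rank vectors have identical variance)."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mean_rank = (n - 1) / 2
    cov = sum((a - mean_rank) * (b - mean_rank) for a, b in zip(rx, ry))
    var = sum((a - mean_rank) ** 2 for a in rx)
    return cov / var

# Toy deterministic model with two uncertain inputs: 'a' dominates.
rng = random.Random(0)
a = [rng.random() for _ in range(500)]
b = [rng.random() for _ in range(500)]
out = [2.0 * ai + 0.1 * bi for ai, bi in zip(a, b)]
```

Here `rank_correlation(a, out)` is close to 1 while `rank_correlation(b, out)` is near 0, correctly flagging `a` as the input whose uncertainty distribution matters most.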
Sensitivity analysis for handling uncertainty in an economic evaluation.
Limwattananon, Supon
2014-05-01
To meet updated international standards, this paper revises the previous Thai guidelines for conducting sensitivity analyses as part of the decision analysis model for health technology assessment. It recommends both deterministic and probabilistic sensitivity analyses to handle uncertainty of the model parameters, which are best represented graphically. Two new methodological issues are introduced: a threshold analysis of medicines' unit prices for fulfilling the National Lists of Essential Medicines' requirements, and the expected value of information for delaying decision-making in contexts where there are high levels of uncertainty. Further research is recommended where parameter uncertainty is significant and where the cost of conducting the research is not prohibitive. PMID:24964700
Uncertainty and sensitivity analysis and its applications in OCD measurements
NASA Astrophysics Data System (ADS)
Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio
2009-03-01
This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can be an effective time-shortcut for optimizing OCD models. By including real system noises in the model, an accurate method for predicting measurement uncertainties is shown. Assessing, at an early stage, the uncertainties, sensitivities, and correlations of the parameters to be measured guides the user in optimizing the OCD measurement strategy. Real examples are discussed, revealing common pitfalls such as hidden correlations, and simulation results are compared with real measurements. Special emphasis is given to two different cases: (1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD), and (2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis results, the right data set and measurement mode (NI-OCD, SE-OCD, or NI+SE-OCD) can be easily selected to achieve the best OCD model performance.
SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data
Williams, Mark L; Rearden, Bradley T
2008-01-01
Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
Peer review of HEDR uncertainty and sensitivity analyses plan
Hoffman, F.O.
1993-06-01
This report consists of a detailed documentation of the writings and deliberations of the peer review panel that met on May 24-25, 1993, in Richland, Washington, to evaluate your draft report "Uncertainty/Sensitivity Analysis Plan" (PNWD-2124 HEDR). The fact that uncertainties are being considered in temporally and spatially varying parameters through the use of alternative time histories and spatial patterns deserves special commendation. It is important to identify early those model components and parameters that will have the most influence on the magnitude and uncertainty of the dose estimates. These are the items that should be investigated most intensively prior to committing to a final set of results.
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
Sensitivity and uncertainty analysis for Abreu & Johnson numerical vapor intrusion model.
Ma, Jie; Yan, Guangxu; Li, Haiyan; Guo, Shaohui
2016-03-01
This study conducted one-at-a-time (OAT) sensitivity and uncertainty analysis of a numerical vapor intrusion model with respect to nine input parameters: soil porosity, soil moisture, soil air permeability, aerobic biodegradation rate, building depressurization, crack width, floor thickness, building volume, and indoor air exchange rate. Simulations were performed for three soil types (clay, silt, and sand), two source depths (3 and 8 m), and two source concentrations (1 and 400 g/m³). Model sensitivity and uncertainty for shallow and high-concentration vapor sources (3 m and 400 g/m³) are much smaller than for deep and low-concentration sources (8 m and 1 g/m³). For high-concentration sources, soil air permeability, indoor air exchange rate, and building depressurization (for highly permeable soil like sand) are key contributors to model output uncertainty. For low-concentration sources, soil porosity, soil moisture, aerobic biodegradation rate, and soil gas permeability are key contributors to model output uncertainty. Another important finding is that the impacts of aerobic biodegradation on the vapor intrusion potential of petroleum hydrocarbons are negligible when the vapor source concentration is high, because insufficient oxygen supply limits aerobic biodegradation activities. PMID:26619051
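The OAT procedure can be sketched generically (the toy model, parameter names, and values below are hypothetical, not the Abreu & Johnson model): each parameter is perturbed about its base value while the others are held fixed, and the normalized output change is recorded as that parameter's sensitivity.

```python
def vapor_model(params):
    """Hypothetical stand-in for a vapor intrusion model: a toy
    expression for indoor concentration versus a few inputs."""
    return (params["source_conc"] * params["permeability"]
            / (params["exchange_rate"] * params["depth"]))

base = {"source_conc": 400.0, "permeability": 1e-12,
        "exchange_rate": 0.5, "depth": 3.0}

def oat_sensitivity(model, base, rel_change=0.1):
    """One-at-a-time sensitivity: perturb each parameter by
    +/-rel_change while holding the rest at base values, and return
    the normalized (elasticity-like) output change per parameter."""
    y0 = model(base)
    result = {}
    for name in base:
        up = dict(base)
        up[name] = base[name] * (1 + rel_change)
        down = dict(base)
        down[name] = base[name] * (1 - rel_change)
        result[name] = (model(up) - model(down)) / (2 * rel_change * y0)
    return result

sens = oat_sensitivity(vapor_model, base)
```

For this toy model the elasticities of `source_conc` and `permeability` are +1 (output scales linearly with them) while `exchange_rate` and `depth` come out negative, mirroring how OAT rankings identify the dominant inputs.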
NASA Astrophysics Data System (ADS)
Carpenter, T. M.; Georgakakos, K. P.
2001-12-01
The current study focuses on the sensitivity of distributed model flow forecast uncertainty to the uncertainty in the radar rainfall input. Various studies estimate a 30 to 100% uncertainty in radar rainfall estimates from the operational NEXRAD radars. This study addresses the following questions: How does this uncertainty in rainfall input impact the flow simulations produced by a hydrologic model? How does this effect compare to the uncertainty in flow forecasts resulting from initial condition and model parametric uncertainty? The hydrologic model used, HRCDHM, is a catchment-based, distributed hydrologic model and accepts hourly precipitation input from the operational WSR-88D weather radar. A GIS is used to process digital terrain data, delineate sub-catchments of a given large watershed, and supply sub-catchment characteristics (subbasin area, stream length, stream slope, and channel-network topology) to the hydrologic model components. HRCDHM uses an adaptation of the U.S. NWS operational Sacramento soil moisture accounting model to produce runoff for each sub-catchment within the larger study watershed. Kinematic or Muskingum-Cunge channel routing is implemented to combine and route sub-catchment flows through the channel network. Available spatial soils information is used to vary hydrologic model parameters from sub-catchment to sub-catchment. HRCDHM was applied to the 2,500 km2 Illinois River watershed in Arkansas and Oklahoma with outlet at Tahlequah, Oklahoma. The watershed is under the coverage of the operational WSR-88D radar at Tulsa, Oklahoma. For distributed modeling, the watershed area has been subdivided into sub-catchments with an average area of 80 km2. Flow simulations are validated at various gauged locations within the watershed. A Monte Carlo framework was used to assess the sensitivity of the simulated flows to uncertainty in radar input for different radar error distributions (uniform or exponential), and to make comparisons to the flow
Sensitivity and uncertainty studies of the CRAC2 computer code.
Kocher, D C; Ward, R C; Killough, G G; Dunning, D E; Hicks, B B; Hosker, R P; Ku, J Y; Rao, K S
1987-12-01
We have studied the sensitivity of health impacts from nuclear reactor accidents, as predicted by the CRAC2 computer code, to the following sources of uncertainty: (1) the model for plume rise, (2) the model for wet deposition, (3) the meteorological bin-sampling procedure for selecting weather sequences with rain, (4) the dose conversion factors for inhalation as affected by uncertainties in the particle size of the carrier aerosol and the clearance rates of radionuclides from the respiratory tract, (5) the weathering half-time for external ground-surface exposure, and (6) the transfer coefficients for terrestrial foodchain pathways. Predicted health impacts usually showed little sensitivity to use of an alternative plume-rise model or a modified rain-bin structure in bin-sampling. Health impacts often were quite sensitive to use of an alternative wet-deposition model in single-trial runs with rain during plume passage, but were less sensitive to the model in bin-sampling runs. Uncertainties in the inhalation dose conversion factors had important effects on early injuries in single-trial runs. Latent cancer fatalities were moderately sensitive to uncertainties in the weathering half-time for ground-surface exposure, but showed little sensitivity to the transfer coefficients for terrestrial foodchain pathways. Sensitivities of CRAC2 predictions to uncertainties in the models and parameters also depended on the magnitude of the source term, and some of the effects on early health effects were comparable to those that were due only to selection of different sets of weather sequences in bin-sampling. PMID:3444936
Uncertainty and Sensitivity Analyses of Model Predictions of Solute Transport
NASA Astrophysics Data System (ADS)
Skaggs, T. H.; Suarez, D. L.; Goldberg, S. R.
2012-12-01
Soil salinity reduces crop production on about 50% of irrigated lands worldwide. One roadblock to increased use of advanced computer simulation tools for better managing irrigation water and soil salinity is that the models usually do not provide an estimate of the uncertainty in model predictions, which can be substantial. In this work, we investigate methods for putting confidence bounds on HYDRUS-1D simulations of solute leaching in soils. Uncertainties in model parameters estimated with pedotransfer functions are propagated through simulation model predictions using Monte Carlo simulation. Generalized sensitivity analyses indicate which parameters are most significant for quantifying uncertainty. The simulation results are compared with experimentally observed transport variability in a number of large, replicated lysimeters.
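The Monte Carlo propagation described above, sampling uncertain parameters and pushing each draw through the simulation to obtain confidence bounds, can be sketched as follows. The leaching function and the parameter distributions below are illustrative stand-ins, not the HYDRUS-1D model or actual pedotransfer-function outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def solute_leaching(Ks, theta_s):
    # Toy stand-in for a HYDRUS-1D run: leached fraction as a simple
    # function of saturated conductivity Ks and saturated water content.
    return 1.0 - np.exp(-Ks / 10.0) * theta_s

# Hypothetical pedotransfer-function estimates with assumed uncertainty
Ks = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=10_000)
theta_s = rng.normal(0.45, 0.03, size=10_000)

# Propagate: one model run per parameter draw, then take percentiles
out = solute_leaching(Ks, theta_s)
lo, hi = np.percentile(out, [2.5, 97.5])   # 95% confidence bounds
print(f"95% bounds on leached fraction: [{lo:.3f}, {hi:.3f}]")
```

The generalized sensitivity analysis step would then compare the parameter draws that produced high versus low outputs to rank which inputs dominate the spread.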
Uncertainty and Sensitivity Analyses of Duct Propagation Models
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.
2008-01-01
This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well and predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
Sensitivity and uncertainty analysis applied to the JHR reactivity prediction
Leray, O.; Vaglio-Gaudard, C.; Hudelot, J. P.; Santamarina, A.; Noguere, G.; Di-Salvo, J.
2012-07-01
The on-going AMMON program in the EOLE reactor at CEA Cadarache (France) provides experimental results to qualify the HORUS-3D/N neutronics calculation scheme used for the design and safety studies of the new Material Testing Jules Horowitz Reactor (JHR). This paper presents the determination of technological and nuclear data uncertainties on the core reactivity and the propagation of the latter from the AMMON experiment to the JHR. The technological uncertainty propagation was performed with a direct perturbation methodology using the 3D French stochastic code TRIPOLI4 and a statistical methodology using the 2D French deterministic code APOLLO2-MOC, which leads to a value of 289 pcm (1σ). The nuclear data uncertainty propagation relies on a sensitivity study of the main isotopes and the use of a retroactive marginalization method applied to the JEFF 3.1.1 ²⁷Al evaluation in order to obtain a realistic multi-group covariance matrix associated with the considered evaluation. This nuclear data uncertainty propagation leads to a K_eff uncertainty of 624 pcm for the JHR core and 684 pcm for the AMMON reference configuration core. Finally, transposition and reduction of the prior uncertainty were made using the representativity method, which demonstrates the similarity of the AMMON experiment with the JHR (the representativity factor is 0.95). The final impact of JEFF 3.1.1 nuclear data on the Beginning Of Life (BOL) JHR reactivity calculated by HORUS-3D/N V4.0 is a bias of +216 pcm with an associated posterior uncertainty of 304 pcm (1σ). (authors)
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
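The approximate first-order statistical moment method mentioned above estimates an output's mean and variance from sensitivity derivatives evaluated at the input means. The sketch below uses a hypothetical smooth scalar function in place of a CFD code output; the function and input statistics are invented for illustration.

```python
import numpy as np

def f(x):
    # Hypothetical CFD output (e.g., a drag-like quantity) as a smooth
    # function of two independent, normally distributed inputs.
    return x[0]**2 + 3.0 * x[0] * x[1]

mu = np.array([0.7, 2.0])        # input mean values
sigma = np.array([0.01, 0.1])    # input standard deviations

# First-order sensitivity derivatives at the mean (central differences)
h = 1e-6
grad = np.array([
    (f(mu + h * np.eye(2)[i]) - f(mu - h * np.eye(2)[i])) / (2 * h)
    for i in range(2)
])

mean_f = f(mu)                     # first-order estimate of the mean
var_f = np.sum((grad * sigma)**2)  # first-order estimate of the variance
print(f"E[f] ≈ {mean_f:.3f}, std[f] ≈ {np.sqrt(var_f):.3f}")
```

The variance formula is the standard independent-input first-order result, Var(f) ≈ Σᵢ (∂f/∂xᵢ)² σᵢ²; in a gradient-based optimizer the same derivatives also drive the search direction.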
Photogrammetry-Derived National Shoreline: Uncertainty and Sensitivity Analyses
NASA Astrophysics Data System (ADS)
Yao, F.; Parrish, C. E.; Calder, B. R.; Peeri, S.; Rzhanov, Y.
2013-12-01
Tidally-referenced shoreline data serve a multitude of purposes, ranging from nautical charting, to coastal change analysis, wetland migration studies, coastal planning, resource management and emergency management. To assess the suitability of the shoreline for a particular application, end users need not only the best available shoreline, but also reliable estimates of the uncertainty in the shoreline position. NOAA's National Geodetic Survey (NGS) is responsible for mapping the national shoreline depicted on NOAA nautical charts. Previous studies have focused on modeling the uncertainty in NGS shoreline derived from airborne lidar data, but, to date, these methods have not been extended to aerial imagery and photogrammetric shoreline extraction methods, which remain the primary shoreline mapping methods used by NGS. The aim of this study is to develop a rigorous total propagated uncertainty (TPU) model for shoreline compiled from both tide-coordinated and non-tide-coordinated aerial imagery using photogrammetric methods. The project site encompasses the strait linking Dennys Bay, Whiting Bay and Cobscook Bay in the 'Downeast' Maine coastal region. This area is of interest due to the ecosystem services it provides, as well as its complex geomorphology. The region is characterized by a large tide range, strong tidal currents, numerous embayments, and coarse-sediment pocket beaches. Statistical methods were used to assess the uncertainty of the shoreline mapped at this site using NGS's photogrammetric workflow, as well as to analyze the sensitivity of the mapped shoreline position to a variety of parameters, including the elevation gradient in the intertidal zone. The TPU model developed in this work can easily be extended to other areas and may facilitate estimation of uncertainty in inundation models and marsh migration models.
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...
Uncertainty in the analysis of the overall equipment effectiveness on the shop floor
NASA Astrophysics Data System (ADS)
Rößler, M. P.; Abele, E.
2013-06-01
In this article an approach is presented that supports transparency regarding the effectiveness of manufacturing equipment by combining fuzzy set theory with overall equipment effectiveness (OEE) analysis. One of the key principles of lean production, and a fundamental task in production optimization projects, is the prior analysis of the current state of a production system using key performance indicators in order to derive possible future states. The current state of the art in overall equipment effectiveness analysis is to cumulate different machine states by means of decentralized data collection, without consideration of uncertainty. With manual or semi-automated plant data collection systems, the quality of the derived data often diverges and leads optimization teams to distorted conclusions about the real optimization potential of manufacturing equipment. The method discussed in this paper helps practitioners obtain more reliable results in the analysis phase and thus better results from optimization projects. The results obtained are discussed in the context of a case study.
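One simple way to carry data-collection uncertainty through the OEE calculation, in the spirit of the fuzzy-set approach described, is to represent each factor as a triangular fuzzy number (low, mode, high) and multiply componentwise. The factor values below are invented for illustration, and the componentwise product is only the standard approximation of triangular fuzzy multiplication for positive numbers.

```python
# Triangular fuzzy numbers (low, mode, high) for the three OEE factors,
# reflecting uncertainty in manually collected machine-state data.
# All values are hypothetical.
availability = (0.80, 0.85, 0.90)
performance  = (0.70, 0.78, 0.85)
quality      = (0.95, 0.97, 0.99)

def fuzzy_mul(a, b):
    # For positive triangular fuzzy numbers, the componentwise product
    # approximates the product fuzzy number.
    return tuple(x * y for x, y in zip(a, b))

# OEE = availability x performance x quality, now as a fuzzy number
oee = fuzzy_mul(fuzzy_mul(availability, performance), quality)
print("OEE (low, mode, high): (%.3f, %.3f, %.3f)" % oee)
```

Instead of a single point value, the analyst sees a spread whose width reflects the quality of the underlying shop-floor data.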
Sensitivity to Uncertainty in Asteroid Impact Risk Assessment
NASA Astrophysics Data System (ADS)
Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.
2015-12-01
The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.
Uncertainty estimates in broadband seismometer sensitivities using microseisms
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Storm, T.; Gee, L. S.; Hutt, C. R.; Wilson, D.
2015-04-01
The midband sensitivity of a seismic instrument is one of the fundamental parameters used in published station metadata. Any errors in this value can compromise amplitude estimates in otherwise high-quality data. To estimate an upper bound in the uncertainty of the midband sensitivity for modern broadband instruments, we compare daily microseism (4- to 8-s period) amplitude ratios between the vertical components of colocated broadband sensors across the IRIS/USGS (network code IU) seismic network. We find that the mean of the 145,972 daily ratios used between 2002 and 2013 is 0.9895 with a standard deviation of 0.0231. This suggests that the ratio between instruments shows a small bias and considerable scatter. We also find that these ratios follow a standard normal distribution (R² = 0.95442), which suggests that the midband sensitivity of an instrument has an error of no greater than ±6% with a 99% confidence interval. This gives an upper bound on the precision to which we know the sensitivity of a fielded instrument.
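The ±6% bound follows directly from the reported statistics: for a normal distribution, a two-sided 99% interval spans roughly 2.576 standard deviations on each side of the mean. A quick check, using the values quoted in the abstract:

```python
# Reported statistics for the daily microseism amplitude ratios
mean_ratio = 0.9895
std_ratio = 0.0231

# Two-sided 99% interval for a normal distribution: z is the 99.5th
# percentile of the standard normal, approximately 2.576.
z99 = 2.576
half_width = z99 * std_ratio
print(f"99% interval half-width: ±{100 * half_width:.1f}%")
```

The half-width works out to just under 6%, consistent with the "no greater than ±6%" upper bound stated above.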
Uncertainty estimates in broadband seismometer sensitivities using microseisms
Ringler, Adam T.; Storm, Tyler L.; Gee, Lind S.; Hutt, Charles R.; Wilson, David C.
2015-01-01
The midband sensitivity of a seismic instrument is one of the fundamental parameters used in published station metadata. Any errors in this value can compromise amplitude estimates in otherwise high-quality data. To estimate an upper bound in the uncertainty of the midband sensitivity for modern broadband instruments, we compare daily microseism (4- to 8-s period) amplitude ratios between the vertical components of colocated broadband sensors across the IRIS/USGS (network code IU) seismic network. We find that the mean of the 145,972 daily ratios used between 2002 and 2013 is 0.9895 with a standard deviation of 0.0231. This suggests that the ratio between instruments shows a small bias and considerable scatter. We also find that these ratios follow a standard normal distribution (R² = 0.95442), which suggests that the midband sensitivity of an instrument has an error of no greater than ±6% with a 99% confidence interval. This gives an upper bound on the precision to which we know the sensitivity of a fielded instrument.
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of a model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
Sensitivity of global model prediction to initial state uncertainty
NASA Astrophysics Data System (ADS)
Miguez-Macho, Gonzalo
The sensitivity of global and North American forecasts to uncertainties in the initial conditions is studied. The Utah Global Model is initialized with reanalysis data sets obtained from the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium- Range Weather Forecasts (ECMWF). The differences between these analyses provide an estimate of initial uncertainty. The influence of certain scales of the initial uncertainty is tested in experiments with initial data change from NCEP to ECMWF reanalysis in a selected spectral band. Experiments are also done to determine the benefits of targeting local regions for forecast errors over North America. In these tests, NCEP initial data are replaced by ECMWF data in the considered region. The accuracy of predictions with initial data from either reanalysis only differs over the mid-latitudes of the Southern Hemisphere, where ECMWF initialized forecasts have somewhat greater skill. Results from the spectral experiments indicate that most of this benefit is explained by initial differences of the longwave components (wavenumbers 0-15). Approximately 67% of the 120-h global forecast difference produced by changing initial data from ECMWF to NCEP reanalyses is due to initial changes only in wavenumbers 0-15, and more than 85% of this difference is produced by initial changes in wavenumbers 0-20. The results suggest that large-scale errors of the initial state may play a more prominent role than suggested in some singular vector analyses, and favor global observational coverage to resolve the long waves. Results from the regional targeting experiments indicate that for forecast errors over North America, a systematic benefit comes only when the ``targeted'' region includes most of the north Pacific, pointing again at large scale errors as being prominent, even for midrange predictions over a local area.
Given the ubiquitous nature of phthalates in the environment and the potential for adverse human health impacts, there is a need to understand the potential human exposure. A three-compartment model is developed to estimate the emission rate of di-2-ethylhexyl phthalate (DEHP) f...
Neil, Louise; Olsson, Nora Choque; Pellicano, Elizabeth
2016-06-01
Guided by a recent theory that proposes fundamental differences in how autistic individuals deal with uncertainty, we investigated the extent to which the cognitive construct 'intolerance of uncertainty' and anxiety were related to parental reports of sensory sensitivities in 64 autistic and 85 typically developing children aged 6-14 years. Intolerance of uncertainty and anxiety explained approximately half the variance in autistic children's sensory sensitivities, but only around a fifth of the variance in typical children's sensory sensitivities. In children with autism only, intolerance of uncertainty remained a significant predictor of children's sensory sensitivities once the effects of anxiety were adjusted for. Our results suggest intolerance of uncertainty is a relevant construct to sensory sensitivities in children with and without autism. PMID:26864157
Sensitivity of collective action to uncertainty about climate tipping points
NASA Astrophysics Data System (ADS)
Barrett, Scott; Dannenberg, Astrid
2014-01-01
Despite more than two decades of diplomatic effort, concentrations of greenhouse gases continue to trend upwards, creating the risk that we may someday cross a threshold for `dangerous' climate change. Although climate thresholds are very uncertain, new research is trying to devise `early warning signals' of an approaching tipping point. This research offers a tantalizing promise: whereas collective action fails when threshold uncertainty is large, reductions in this uncertainty may bring about the behavioural change needed to avert a climate `catastrophe'. Here we present the results of an experiment, rooted in a game-theoretic model, showing that behaviour differs markedly either side of a dividing line for threshold uncertainty. On one side of the dividing line, where threshold uncertainty is relatively large, free riding proves irresistible and trust elusive, making it virtually inevitable that the tipping point will be crossed. On the other side, where threshold uncertainty is small, the incentive to coordinate is strong and trust more robust, often leading the players to avoid crossing the tipping point. Our results show that uncertainty must be reduced to this `good' side of the dividing line to stimulate the behavioural shift needed to avoid `dangerous' climate change.
NASA Astrophysics Data System (ADS)
Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.
2014-01-01
Water Footprint Assessment is a quickly growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of water footprint estimates to changes in important input variables and quantifies the size of uncertainty in water footprint estimates. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat in the Yellow River Basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River Basin in the period considered. The sensitivity and uncertainty analysis focused on the effects on water footprint estimates at basin level (in m3 t-1) of four key input variables: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), and crop calendar. The one-at-a-time method was carried out to analyse the sensitivity of the water footprint of crops to fractional changes of individual input variables. Uncertainties in crop water footprint estimates were quantified through Monte Carlo simulations. The results show that the water footprint of crops is most sensitive to ET0 and Kc, followed by crop calendar and PR. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of precipitation. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±26% (at 95% confidence level). The sensitivities and uncertainties differ across crop types, with highest sensitivities
Sensitivity and Uncertainty Analysis to Burn-up Estimates on ADS Using ACAB Code
Cabellos, O; Sanz, J; Rodriguez, A; Gonzalez, E; Embid, M; Alvarez, F; Reyes, S
2005-02-11
Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic reevaluation of some uncertainty XSs for ADS.
Sensitivity and Uncertainty Analysis to Burnup Estimates on ADS using the ACAB Code
Cabellos, O.; Sanz, J.; Rodriguez, A.; Gonzalez, E.; Embid, M.; Alvarez, F.; Reyes, S.
2005-05-24
Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic re-evaluation of some uncertainty XSs for ADS.
Uncertainty and sensitivity assessments of GPS and GIS integrated applications for transportation.
Hong, Sungchul; Vonderohe, Alan P
2014-01-01
Uncertainty and sensitivity analysis methods are introduced, concerning the quality of spatial data as well as that of output information from Global Positioning System (GPS) and Geographic Information System (GIS) integrated applications for transportation. In the methods, an error model and an error propagation method form a basis for formulating characterization and propagation of uncertainties. They are developed in two distinct approaches: analytical and simulation. Thus, an initial evaluation is performed to compare and examine uncertainty estimations from the analytical and simulation approaches. The evaluation results show that estimated ranges of output information from the analytical and simulation approaches are compatible, but the simulation approach rather than the analytical approach is preferred for uncertainty and sensitivity analyses, due to its flexibility and capability to realize positional errors in both input data. Therefore, in a case study, uncertainty and sensitivity analyses based upon the simulation approach are conducted on a winter maintenance application. The sensitivity analysis is used to determine optimum input data qualities, and the uncertainty analysis is then applied to estimate overall qualities of output information from the application. The analysis results show that output information from the non-distance-based computation model is not sensitive to positional uncertainties in input data. However, for the distance-based computational model, output information has a different magnitude of uncertainties, depending on position uncertainties in input data. PMID:24518894
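The contrast between the analytical and simulation approaches can be illustrated on a minimal distance-based computation with positional error. The geometry, error model, and numbers below are invented for illustration, not taken from the paper: two points with independent Gaussian coordinate errors, where first-order analytical propagation and Monte Carlo simulation should agree.

```python
import math
import random

random.seed(1)

# Distance between two points with independent positional errors
# (standard deviation sigma on each coordinate, in metres).
p1, p2 = (0.0, 0.0), (100.0, 0.0)
sigma = 2.0

# Analytical: first-order propagation. For points on the x-axis the
# distance depends, to first order, only on the x coordinates, so
# var(d) ≈ sigma^2 + sigma^2.
var_analytical = 2 * sigma**2

# Simulation: perturb both points and recompute the distance.
dists = []
for _ in range(50_000):
    x1, y1 = (c + random.gauss(0, sigma) for c in p1)
    x2, y2 = (c + random.gauss(0, sigma) for c in p2)
    dists.append(math.hypot(x2 - x1, y2 - y1))
mean_d = sum(dists) / len(dists)
var_sim = sum((d - mean_d)**2 for d in dists) / len(dists)
print(f"analytical var: {var_analytical:.2f}, simulated var: {var_sim:.2f}")
```

The two estimates agree here because the model is nearly linear; the simulation approach is preferred in practice because it remains valid when the computation is nonlinear or the error structure is more complex.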
UNCERTAINTY AND SENSITIVITY ANALYSES FOR VERY HIGH ORDER MODELS
While there may in many cases be high potential for exposure of humans and ecosystems to chemicals released from a source, the degree to which this potential is realized is often uncertain. Conceptually, uncertainties are divided among parameters, model, and modeler during simula...
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
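Variance-based first-order (Sobol-type) indices of the kind used above to rank epistemic variables can be estimated with a pick-freeze scheme: generate two independent sample matrices, swap one column at a time, and correlate the outputs. The three-variable linear model below is a hypothetical stand-in for the GTM system response, chosen so one variable clearly dominates.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    # Hypothetical system response with one dominant input variable.
    return 4.0 * x[:, 0] + 0.5 * x[:, 1] + 0.1 * x[:, 2]

n = 20_000
A = rng.uniform(0, 1, (n, 3))   # first independent sample matrix
B = rng.uniform(0, 1, (n, 3))   # second independent sample matrix
fA = model(A)
fB = model(B)
var_total = fA.var()

# First-order indices: replace column i of B with column i of A and
# estimate S_i = E[fA * (f(AB_i) - fB)] / Var(f).
S = []
for i in range(3):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S.append(float(np.mean(fA * (model(ABi) - fB)) / var_total))
print("first-order indices:", [round(s, 2) for s in S])
```

Ranking the indices and refining only the top variable, then re-running the analysis, is exactly the loop the sequential refinement methodology repeats until all important variables are handled.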
NASA Astrophysics Data System (ADS)
Ťupek, Boris; Launiainen, Samuli; Peltoniemi, Mikko; Heikkinen, Jukka; Lehtonen, Aleksi
2016-04-01
In most process-based soil carbon models, litter decomposition rates are affected by environmental conditions, are linked with soil heterotrophic CO2 emissions, and serve for estimating soil carbon sequestration. By the mass balance equation, the variation in measured litter inputs and measured heterotrophic soil CO2 effluxes should therefore indicate the soil carbon stock changes needed for soil carbon management to mitigate anthropogenic CO2 emissions, provided the sensitivity functions of the applied model suit the environmental conditions, e.g. soil temperature and moisture. We evaluated the response forms of autotrophic and heterotrophic forest floor respiration to soil temperature and moisture at four boreal forest sites of the International Cooperative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests) by a soil trenching experiment during 2015 in southern Finland. As expected, both autotrophic and heterotrophic forest floor respiration components were primarily controlled by soil temperature, and exponential regression models generally explained more than 90% of the variance. Soil moisture regression models on average explained less than 10% of the variance, and the response forms varied between Gaussian for the autotrophic forest floor respiration component and linear for the heterotrophic forest floor respiration component. Although the percentage of variance in soil heterotrophic respiration explained by soil moisture was small, the observed reduction of CO2 emissions at higher moisture levels suggests that the soil moisture response of soil carbon models that do not account for the reduction due to excessive moisture should be re-evaluated in order to estimate the right levels of soil carbon stock changes. Our further study will include evaluation of process-based soil carbon models against the annual heterotrophic respiration and soil carbon stocks.
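The exponential temperature response referred to above is commonly fitted as R = a·exp(b·T), which is linear after taking logarithms. A sketch with synthetic data (all coefficients and noise levels invented, not from the study):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic soil temperature (°C) and respiration data following the
# common exponential (Q10-type) response R = a * exp(b * T), with
# multiplicative noise. Parameters are hypothetical.
T = rng.uniform(2, 20, 200)
R = 0.8 * np.exp(0.09 * T) * rng.lognormal(0, 0.05, T.size)

# Fit by linear regression in log space: log R = log a + b * T
b, log_a = np.polyfit(T, np.log(R), 1)
pred = log_a + b * T
resid = np.log(R) - pred
r2 = 1 - np.sum(resid**2) / np.sum((np.log(R) - np.log(R).mean())**2)
print(f"a = {np.exp(log_a):.2f}, b = {b:.3f}, R2 (log scale) = {r2:.2f}")
```

A fit of this form explaining >90% of the variance corresponds to the temperature-dominated behaviour reported for both respiration components; the weak moisture responses would then be fitted to the residuals of this model.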
TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE
Rearden, Bradley T; Mueller, Don; Bowman, Stephen M; Busch, Robert D.; Emerson, Scott
2009-01-01
This primer presents examples in the application of the SCALE/TSUNAMI tools to generate k{sub eff} sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed k{sub eff} values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.
Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...
Users manual for the FORSS sensitivity and uncertainty analysis code system
Lucius, J.L.; Weisbin, C.R.; Marable, J.H.; Drischler, J.D.; Wright, R.Q.; White, J.E.
1981-01-01
FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions and associated uncertainties. This report describes the computing environment and the modules currently used to implement FORSS Sensitivity and Uncertainty Methodology.
Sensitivity and uncertainty analysis of reactivities for UO2 and MOX fueled PWR cells
Foad, Basma; Takeda, Toshikazu
2015-12-31
The purpose of this paper is to apply our improved method for calculating sensitivities and uncertainties of reactivity responses for UO{sub 2} and MOX fueled pressurized water reactor cells. The improved method has been used to calculate sensitivity coefficients relative to infinite dilution cross-sections, where the self-shielding effect is taken into account. Two types of reactivities are considered: Doppler reactivity and coolant void reactivity; for each type of reactivity, the sensitivities are calculated for small and large perturbations. The results demonstrate that the reactivity responses have larger relative uncertainty than eigenvalue responses. In addition, the uncertainty of coolant void reactivity is much greater than that of Doppler reactivity, especially for large perturbations. The sensitivity coefficients and uncertainties of both reactivities were verified by comparison with SCALE code results using the ENDF/B-VII library, and good agreement was found.
PROBABILISTIC SENSITIVITY AND UNCERTAINTY ANALYSIS WORKSHOP SUMMARY REPORT
Seitz, R
2008-06-25
Stochastic or probabilistic modeling approaches are being applied more frequently in the United States and globally to quantify uncertainty and enhance understanding of model response in performance assessments for disposal of radioactive waste. This increased use has resulted in global interest in sharing results of research and applied studies that have been completed to date. This technical report reflects the results of a workshop that was held to share results of research and applied work related to performance assessments conducted at United States Department of Energy sites. Key findings of this research and applied work are discussed and recommendations for future activities are provided.
Calculating Sensitivities, Response and Uncertainties Within LODI for Precipitation Scavenging
Loosmore, G; Hsieh, H; Grant, K
2004-01-21
This paper describes an investigation into the uses of first-order, local sensitivity analysis in a Lagrangian dispersion code. The goal of the project is to gain knowledge not only about the sensitivity of the dispersion code predictions to the specific input parameters of interest, but also to better understand the uses and limitations of sensitivity analysis within such a context. The dispersion code of interest here is LODI, which is used for modeling emergency release scenarios at the Department of Energy's National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory. The NARAC system provides both real-time operational predictions and detailed assessments for atmospheric releases of hazardous materials. LODI is driven by a meteorological data assimilation model and an in-house version of COAMPS, the Naval Research Laboratory's mesoscale weather forecast model.
Uncertainty and Sensitivity Analysis in Performance Assessment for the Waste Isolation Pilot Plant
Helton, J.C.
1998-12-17
The Waste Isolation Pilot Plant (WIPP) is under development by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. This development has been supported by a sequence of performance assessments (PAs) carried out by Sandia National Laboratories (SNL) to assess what is known about the WIPP and to provide guidance for future DOE research and development activities. Uncertainty and sensitivity analysis procedures based on Latin hypercube sampling and regression techniques play an important role in these PAs by providing an assessment of the uncertainty in important analysis outcomes and identifying the sources of this uncertainty. Performance assessments for the WIPP are conceptually and computationally interesting due to regulatory requirements to assess and display the effects of both stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, where stochastic uncertainty arises from the possible disruptions that could occur over the 10,000 yr regulatory period associated with the WIPP and subjective uncertainty arises from an inability to unambiguously characterize the many models and associated parameters required in a PA for the WIPP. The interplay between uncertainty analysis, sensitivity analysis, stochastic uncertainty and subjective uncertainty is discussed and illustrated in the context of a recent PA carried out by SNL to support an application by the DOE to the U.S. Environmental Protection Agency for the certification of the WIPP for the disposal of TRU waste.
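The Latin hypercube sampling and regression techniques mentioned here can be sketched generically. This is an illustrative toy with two hypothetical parameters and a hypothetical linear response, not the WIPP PA code:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """Stratified uniform [0,1) sample: exactly one point per stratum per variable,
    with strata independently permuted across variables."""
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_vars):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(1)
n = 200
u = latin_hypercube(n, 2, rng)

# Map to hypothetical parameter ranges (illustrative, not WIPP values)
log_perm = -14.0 + 2.0 * u[:, 0]      # log10 permeability, log-uniform
porosity = 0.05 + 0.25 * u[:, 1]      # uniform

# Toy linear response standing in for a PA outcome of interest
y = 1.0 * log_perm + 20.0 * porosity

# Standardized regression coefficients (SRCs) rank input importance
X = np.column_stack([log_perm, porosity])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src = np.linalg.lstsq(Xs, ys, rcond=None)[0]
```

For this toy response the porosity term carries the larger standardized coefficient, which is how the regression step flags the dominant uncertain input.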
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
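The first-order moment method summarized above approximates the output mean by the function at the input means and the output variance by the sum of squared sensitivity derivatives weighted by the input variances. A sketch with a hypothetical smooth function standing in for the CFD output (not the quasi 3-D Euler code), checked against Monte Carlo as in the paper:

```python
import numpy as np

def f(x1, x2):
    # Hypothetical smooth response standing in for the CFD output
    return x1 ** 2 + 3.0 * x2 + 0.5 * x1 * x2

mu = np.array([2.0, 1.0])      # input means
sigma = np.array([0.1, 0.2])   # input std devs (independent, normal)

# First-order sensitivity derivatives by central finite differences
h = 1e-5
d1 = (f(mu[0] + h, mu[1]) - f(mu[0] - h, mu[1])) / (2.0 * h)
d2 = (f(mu[0], mu[1] + h) - f(mu[0], mu[1] - h)) / (2.0 * h)

# First-order approximate moments of the output:
# mean ~ f(mu), var ~ sum_i (df/dx_i)^2 * sigma_i^2
mean_fo = f(mu[0], mu[1])
var_fo = (d1 * sigma[0]) ** 2 + (d2 * sigma[1]) ** 2

# Monte Carlo check of the approximation
rng = np.random.default_rng(2)
x1 = rng.normal(mu[0], sigma[0], 200_000)
x2 = rng.normal(mu[1], sigma[1], 200_000)
y = f(x1, x2)
```

For these small input standard deviations the first-order moments agree closely with Monte Carlo, mirroring the paper's finding that the approximation is valid near the input means.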
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2004-07-15
Part II of this review paper highlights the salient features of the most popular statistical methods currently used for local and global sensitivity and uncertainty analysis of both large-scale computational models and indirect experimental measurements. These statistical procedures represent sampling-based methods (random sampling, stratified importance sampling, and Latin Hypercube sampling), first- and second-order reliability algorithms (FORM and SORM, respectively), variance-based methods (correlation ratio-based methods, the Fourier Amplitude Sensitivity Test, and the Sobol Method), and screening design methods (classical one-at-a-time experiments, global one-at-a-time design methods, systematic fractional replicate designs, and sequential bifurcation designs). It is emphasized that all statistical uncertainty and sensitivity analysis procedures first commence with the 'uncertainty analysis' stage and only subsequently proceed to the 'sensitivity analysis' stage; this path is the exact reverse of the conceptual path underlying the methods of deterministic sensitivity and uncertainty analysis where the sensitivities are determined prior to using them for uncertainty analysis. By comparison to deterministic methods, statistical methods for uncertainty and sensitivity analysis are relatively easier to develop and use but cannot yield exact values of the local sensitivities. Furthermore, current statistical methods have two major inherent drawbacks: (1) since many thousands of simulations are needed to obtain reliable results, statistical methods are at best expensive (for small systems) or, at worst, impracticable (e.g., for large time-dependent systems); and (2) since the response sensitivities and parameter uncertainties are inherently and inseparably amalgamated in the results produced by these methods, improvements in parameter uncertainties cannot be directly propagated to improve response uncertainties; rather, the entire set of simulations and
Modelling survival: exposure pattern, species sensitivity and uncertainty
NASA Astrophysics Data System (ADS)
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I.; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B.; van den Brink, Paul J.; Veltman, Karin; Vogel, Sören; Zimmer, Elke I.; Preuss, Thomas G.
2016-07-01
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
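A reduced GUTS variant, GUTS-RED-SD (stochastic death), can be sketched as scaled one-compartment damage kinetics plus a linear-above-threshold hazard. The parameter values and the Euler time stepping below are illustrative assumptions, not calibrated Gammarus pulex values:

```python
import numpy as np

def survival(times, conc, kd, z, b, hb=0.0):
    """GUTS-RED-SD sketch: scaled damage follows dD/dt = kd*(C(t) - D);
    hazard is h = b*max(D - z, 0) + hb; survival is S = exp(-integral of h)."""
    D, cum_hazard = 0.0, 0.0
    S = np.empty(times.size)
    S[0] = 1.0
    for i in range(1, times.size):
        dt = times[i] - times[i - 1]
        D += dt * kd * (conc[i - 1] - D)              # Euler step for scaled damage
        cum_hazard += (b * max(D - z, 0.0) + hb) * dt  # accumulate hazard
        S[i] = np.exp(-cum_hazard)
    return S

# Time-variable exposure: a one-day pulse followed by clean water (days)
times = np.linspace(0.0, 10.0, 1001)
conc = np.where(times < 1.0, 5.0, 0.0)
S = survival(times, conc, kd=0.8, z=1.0, b=0.3)
```

Because damage lags the external concentration, mortality continues for a while after the pulse ends and then survival levels off, which is the behavior that lets GUTS extrapolate across pulsed exposure patterns.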
Modelling survival: exposure pattern, species sensitivity and uncertainty.
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G
2016-01-01
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans. PMID:27381500
1991-03-12
Version 00 SUSD calculates sensitivity coefficients for one- and two-dimensional transport problems. Variance and standard deviation of detector responses or design parameters can be obtained using cross-section covariance matrices. In neutron transport problems, this code can perform sensitivity-uncertainty analysis for secondary angular distribution (SAD) or secondary energy distribution (SED).
Robinson, Mike J F; Anselme, Patrick; Suchomel, Kristen; Berridge, Kent C
2015-08-01
Amphetamine and stress can sensitize mesolimbic dopamine-related systems. In Pavlovian autoshaping, repeated exposure to uncertainty of reward prediction can enhance motivated sign-tracking or attraction to a discrete reward-predicting cue (lever-conditioned stimulus; CS+), as well as produce cross-sensitization to amphetamine. However, it remains unknown how amphetamine sensitization or repeated restraint stress interact with uncertainty in controlling CS+ incentive salience attribution reflected in sign-tracking. Here, rats were tested in 3 successive phases. First, different groups underwent either induction of amphetamine sensitization or repeated restraint stress, or else were not sensitized or stressed as control groups (either saline injections only, or no stress or injection at all). All next received Pavlovian autoshaping training under either certainty conditions (100% CS-UCS association) or uncertainty conditions (50% CS-UCS association and uncertain reward magnitude). During training, rats were assessed for sign-tracking to the CS+ lever versus goal-tracking to the sucrose dish. Finally, all groups were tested for psychomotor sensitization of locomotion revealed by an amphetamine challenge. Our results confirm that reward uncertainty enhanced sign-tracking attraction toward the predictive CS+ lever, at the expense of goal-tracking. We also found that amphetamine sensitization promoted sign-tracking even in rats trained under CS-UCS certainty conditions, raising them to sign-tracking levels equivalent to the uncertainty group. Combining amphetamine sensitization and uncertainty conditions did not combine additively to elevate sign-tracking further above the relatively high levels induced by either manipulation alone. In contrast, repeated restraint stress enhanced subsequent amphetamine-elicited locomotion, but did not enhance CS+ attraction. PMID:26076340
NASA Astrophysics Data System (ADS)
Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.
2014-06-01
Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input
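The one-at-a-time method used above perturbs each input individually around a base case and records the relative change in the output. A sketch with a toy water-footprint-like function (the functional form, parameter names, and base values are hypothetical, not the study's grid-based daily water balance model):

```python
def water_footprint(pr, et0, kc, ymax):
    # Toy stand-in (m3 per tonne): crop water use over water-limited yield.
    # Not the actual water balance model from the study.
    et_crop = kc * et0                        # crop evapotranspiration (mm)
    yield_t = ymax * min(1.0, pr / et_crop)   # water-limited yield (t/ha)
    return 10.0 * et_crop / yield_t           # 1 mm over 1 ha = 10 m3

base = {"pr": 450.0, "et0": 600.0, "kc": 1.05, "ymax": 6.0}

def oat_sensitivity(model, base, delta=0.1):
    """Relative output change per relative input change (+delta), one at a time."""
    y0 = model(**base)
    sens = {}
    for name in base:
        perturbed = dict(base, **{name: base[name] * (1.0 + delta)})
        sens[name] = (model(**perturbed) - y0) / (abs(y0) * delta)
    return sens

sens = oat_sensitivity(water_footprint, base)
```

In this water-limited toy the footprint scales roughly with (Kc·ET0)^2/PR, so ET0 and Kc show elasticities near 2 while PR and Ym are near -1, loosely echoing the study's finding that ET0 and Kc dominate.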
Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment
Shott, Greg J.; Yucel, Vefa; Desotell, Lloyd; non-NSTec authors: Pyles, G.; Carilli, Jon
2007-06-01
Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
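First-order (variance-based) Sobol' indices like those used above can be estimated with the pick-freeze sampling scheme. A minimal sketch on the Ishigami benchmark function, whose analytic first-order indices are approximately 0.314, 0.442, and 0 (a standard test problem, not the radon flux models of this study):

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Standard sensitivity benchmark with known first-order indices
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(3)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([yA, yB]))

# Pick-freeze (Saltelli) estimator of the first-order index:
# S_i = mean(yB * (y(A with column i from B) - yA)) / V(Y)
S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S[i] = np.mean(yB * (ishigami(ABi) - yA)) / var_y
```

The third input's first-order index is near zero even though it matters through interactions, which is exactly the distinction between first-order and total-order indices that variance-based methods expose.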
Visualization tools for uncertainty and sensitivity analyses on thermal-hydraulic transients
NASA Astrophysics Data System (ADS)
Popelin, Anne-Laure; Iooss, Bertrand
2014-06-01
In nuclear engineering studies, uncertainty and sensitivity analyses of simulation computer codes can be complicated by the complexity of the input and/or output variables. If these variables represent a transient or a spatial phenomenon, the difficulty is to provide tools adapted to their functional nature. In this paper, we describe useful visualization tools in the context of uncertainty analysis of model transient outputs. Our application involves thermal-hydraulic computations for safety studies of nuclear pressurized water reactors.
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
NASA Astrophysics Data System (ADS)
Arbanas, G.; Williams, M. L.; Leal, L. C.; Dunn, M. E.; Khuwaileh, B. A.; Wang, C.; Abdel-Khalik, H.
2015-01-01
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, "AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications," Trans. Am. Nucl. Soc. 86, 118-119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements along with uncertainties needed to yield a given nuclear application(s) target response uncertainty, and doing this at a minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of the present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiments (IBEs) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way and how it could be used to optimize uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
Arbanas, Goran; Williams, Mark L; Leal, Luiz C; Dunn, Michael E; Khuwaileh, Bassam A.; Wang, C; Abdel-Khalik, Hany
2015-01-01
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX system [1]. The IS/UQ method aims to quantify and prioritize the cross section measurements along with uncertainties needed to yield a given nuclear application(s) target response uncertainty, and doing this at a minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of the present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore we have incorporated integral benchmark experiments (IBEs) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way, and how it could be used to optimize uncertainties of IBEs and differential cross section data simultaneously.
Sensitivity and uncertainty in the effective delayed neutron fraction ({beta}{sub eff})
Kodeli, I. I.
2012-07-01
Precise knowledge of effective delayed neutron fraction ({beta}{sub eff}) and of the corresponding uncertainty is important for reactor safety analysis. The interest in developing the methodology for estimating the uncertainty in {beta}{sub eff} was expressed in the scope of the UAM project of the OECD/NEA. A novel approach for the calculation of the nuclear data sensitivity and uncertainty of the effective delayed neutron fraction is proposed, based on the linear perturbation theory. The method allows the detailed analysis of components of {beta}{sub eff} uncertainty. The procedure was implemented in the SUSD3D sensitivity and uncertainty code applied to several fast neutron benchmark experiments from the ICSBEP and IRPhE databases. According to the JENDL-4 covariance matrices and taking into account the uncertainty in the cross sections and in the prompt and delayed fission spectra the total uncertainty in {beta}eff was found to be of the order of {approx}2 to {approx}3.5 % for the studied fast experiments. (authors)
NASA Astrophysics Data System (ADS)
van den Brink, Cors; Zaadnoordijk, Willem Jan; Burgers, Saskia; Griffioen, Jasper
2008-11-01
Groundwater quality management has come to rely increasingly on models in recent years. These models are used to predict the risk of groundwater contamination for various land uses. This paper presents an assessment of uncertainties and sensitivities to input parameters for a regional model. The model had been set up to improve and facilitate the decision-making process between stakeholders in a groundwater quality conflict. The stochastic uncertainty and sensitivity analysis comprised a Monte Carlo simulation technique in combination with a Latin hypercube sampling procedure. The uncertainty of the calculated concentrations of nitrate leached into groundwater was assessed for the various combinations of land use, soil type, and depth of the groundwater table in a vulnerable, sandy region in The Netherlands. The uncertainties in the shallow groundwater were used to assess the uncertainty of the nitrate concentration in the abstracted groundwater. The confidence intervals of the calculated nitrate concentrations in shallow groundwater for agricultural land use functions did not overlap with those of non-agricultural land use such as nature, indicating significantly different nitrate leaching in these areas. The model results were sensitive to almost all input parameters analyzed. However, the NSS is considered fairly robust because no shifts in uncertainty between factors occurred under the systematic changes in fertilizer and manure inputs of the scenarios. In view of these results, there is no need to collect more data to allow science-based decision-making in this planning process.
Simpson, J.C.; Ramsdell, J.V. Jr.
1993-04-01
Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
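Two of the listed procedures, rank transformation and partial correlation analysis, combine into the partial rank correlation coefficient (PRCC), which measures the monotonic association of one input with the output while controlling for the others. A minimal sketch on synthetic data (the toy model and its coefficients are hypothetical):

```python
import numpy as np

def rank(a):
    # Rank transform (ties are unlikely for continuous samples)
    r = np.empty(a.size)
    r[np.argsort(a)] = np.arange(1, a.size + 1)
    return r

def prcc(X, y):
    """Partial rank correlation of each column of X with y, controlling for
    the remaining columns by linear regression on the ranks."""
    n, k = X.shape
    R = np.column_stack([rank(X[:, j]) for j in range(k)])
    ry = rank(y)
    out = np.empty(k)
    for j in range(k):
        A = np.column_stack([np.delete(R, j, axis=1), np.ones(n)])
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

rng = np.random.default_rng(4)
n = 500
X = rng.uniform(0.0, 1.0, (n, 3))
# Monotonic toy model: strong nonlinear effect of x0, weak x1, none of x2
y = np.exp(2.0 * X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0.0, 0.05, n)
coeffs = prcc(X, y)
```

Because the rank transform linearizes any monotonic relation, the PRCC for the exponential input stays near 1 where an untransformed partial correlation would understate it.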
A Methodology For Performing Global Uncertainty And Sensitivity Analysis In Systems Biology
Marino, Simeone; Hogue, Ian B.; Ray, Christian J.; Kirschner, Denise E.
2008-01-01
Accuracy of results from mathematical and computer models of biological systems is often complicated by the presence of uncertainties in the experimental data used to estimate parameter values. Current mathematical modeling approaches typically use either single-parameter or local sensitivity analyses. However, these methods do not accurately assess uncertainty and sensitivity in the system because, by default, they hold all other parameters fixed at baseline values. Using the techniques described here, we demonstrate how a multi-dimensional parameter space can be studied globally so that all uncertainties can be identified. Further, uncertainty and sensitivity analysis techniques can help to identify and ultimately control uncertainties. In this work we develop methods for applying existing analytical tools to perform analyses on a variety of mathematical and computer models. We compare two specific types of global sensitivity analysis indexes that have proven to be among the most robust and efficient. Through familiar and new examples of mathematical and computer models, we provide a complete methodology for performing these analyses, in both deterministic and stochastic settings, and propose novel techniques to handle problems encountered during this type of analysis. PMID:18572196
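The global workflow summarized above combines stratified (Latin hypercube) sampling of the full parameter space with rank-based sensitivity measures. A minimal sketch under those assumptions follows; plain Spearman rank correlation stands in for the partial rank correlation coefficients such studies typically use, and all function names and the toy model are illustrative, not the paper's code.

```python
import math
import random

def lhs_sample(n, bounds, seed=0):
    """Latin hypercube sample: one draw per equal-probability stratum,
    with the strata shuffled independently for each parameter."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        cols.append([lo + u * (hi - lo) for u in strata])
    return list(zip(*cols))  # n points, each a tuple of parameter values

def _ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def _pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def rank_correlation(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return _pearson(_ranks(xs), _ranks(ys))

# Toy model in which the first parameter dominates the output.
points = lhs_sample(200, [(0.0, 1.0), (0.0, 1.0)])
y = [3.0 * a + 0.1 * b for a, b in points]
s_a = rank_correlation([a for a, _ in points], y)
s_b = rank_correlation([b for _, b in points], y)
```

The rank transform makes the measure robust to monotone nonlinearity, which is why it is favored for global analyses of biological models.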
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, the computational cost, and the large number of uncertain variables. In this study, a sparse collocation non-intrusive polynomial chaos approach, along with global non-linear sensitivity analysis, was first used to identify the most significant uncertain variables and reduce the dimension of the stochastic problem. Then, a total-order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, coming from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing the N, N(+), O, and O(+) number densities in the flow field.
Sensitivity and uncertainty in the effective delayed neutron fraction (βeff)
NASA Astrophysics Data System (ADS)
Kodeli, Ivan-Alexander
2013-07-01
Precise knowledge of the effective delayed neutron fraction (βeff) and the corresponding uncertainty is important for nuclear reactor safety analysis. The interest in developing the methodology for estimating the uncertainty in βeff was expressed in the scope of the UAM project of the OECD/NEA. The sensitivity and uncertainty analysis of βeff performed using the standard first-order perturbation code SUSD3D is presented. The sensitivity coefficients of βeff with respect to the basic nuclear data were calculated by deriving Bretscher's k-ratio formula. The procedure was applied to several fast neutron benchmark experiments selected from the ICSBEP and IRPhE databases. According to the JENDL-4.0m covariance matrices, and taking into account the uncertainties in the cross-sections and in the prompt and delayed fission spectra, the total uncertainty in βeff was found to be in general around 3%, and up to ~7% for the 233U benchmarks. An approximation was applied to investigate the uncertainty due to the delayed fission neutron spectra. The βeff sensitivity and uncertainty analyses are furthermore demonstrated to be useful for better understanding and interpretation of the physical phenomena involved. Due to their specific sensitivity profiles, the βeff measurements are shown to provide valuable complementary information which could be used in combination with criticality (keff) measurements for the evaluation and validation of certain nuclear reaction data, such as the delayed (and prompt) fission neutron yields and, interestingly, also the 238U inelastic and elastic scattering cross-sections.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
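The variance-based indices computed in the study above can be estimated with a Saltelli-style pick-and-freeze scheme. The sketch below is a simplified stand-in: it uses plain pseudo-random sampling rather than the quasi-random Sobol' sequences and emulator of the paper, and the toy model and function names are illustrative.

```python
import random
import statistics

def first_order_indices(f, k, n=20000, seed=1):
    """Estimate first-order Sobol' indices S_i = V_i / V(Y) with the
    pick-and-freeze estimator: S_i ~ mean(f(B) * (f(A_B^i) - f(A))) / V(Y),
    where A_B^i is sample matrix A with column i taken from matrix B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    var = statistics.pvariance(fA + fB)
    S = []
    for i in range(k):
        # Re-evaluate the model with parameter i "frozen" to its B value.
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(fb * (fab - fa)
                     for fb, fab, fa in zip(fB, fABi, fA)) / n / var)
    return S

# Additive toy model Y = X1 + 0.5*X2 with X_i ~ U(0,1):
# the exact indices are S1 = 0.8 and S2 = 0.2.
S = first_order_indices(lambda x: x[0] + 0.5 * x[1], k=2)
```

For the 39-parameter case in the paper, this direct scheme would be far too expensive on the full model, which is exactly why a fast emulator is trained first.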
Sensitivity and Uncertainty Analysis in Chemical Mechanisms for Air Quality Modeling
NASA Astrophysics Data System (ADS)
Gao, Dongfen
1995-01-01
Ambient ozone in urban and regional air pollution is a serious environmental problem. Air quality models can be used to predict ozone concentrations and explore control strategies. One important component of such air quality models is a chemical mechanism. Sensitivity and uncertainty analysis play an important role in the evaluation of the performance of air quality models. The uncertainties associated with the RADM2 chemical mechanism in predicted concentrations of O3, HCHO, H2O2, PAN, and HNO3 were estimated. Monte Carlo simulations with Latin Hypercube Sampling were used to estimate the overall uncertainties in concentrations of species of interest, due to uncertainties in chemical parameters. The parameters that were treated as random variables were identified through first-order sensitivity and uncertainty analyses. Recent estimates of uncertainties in rate parameters and product yields were used. The results showed the relative uncertainties in ozone predictions are ±23-50% (1σ relative to the mean) in urban cases, and less than ±20% in rural cases. Uncertainties in HNO3 concentrations are the smallest, followed by HCHO, O3 and PAN. Predicted H2O2 concentrations have the highest uncertainties. Uncertainties in the differences of peak ozone concentrations between base and control cases were also studied. The results show that the uncertainties in the fractional reductions in ozone concentrations were 9-12% with NOx control at an ROG/NOx ratio of 24:1 and 11-33% with ROG control at an ROG/NOx ratio of 6:1. Linear regression analysis of the Monte Carlo results showed that uncertainties in rate parameters for the formation of HNO3, for the reaction of HCHO + hν → 2HO2 + CO, for PAN chemistry and for the photolysis of NO2 are most influential to ozone concentrations and differences of ozone. The parameters that are important to ozone concentrations also tend to be relatively influential to other key species
Flood damage maps: ranking sources of uncertainty with variance-based sensitivity analysis
NASA Astrophysics Data System (ADS)
Saint-Geours, N.; Grelot, F.; Bailly, J.-S.; Lavergne, C.
2012-04-01
In order to increase the reliability of flood damage assessment, we need to question the uncertainty associated with the whole flood risk modeling chain. Using a case study on the basin of the Orb River, France, we demonstrate how variance-based sensitivity analysis can be used to quantify uncertainty in flood damage maps at different spatial scales and to identify the sources of uncertainty which should be reduced first. Flood risk mapping is recognized as an effective tool in flood risk management and the elaboration of flood risk maps is now required for all major river basins in the European Union (European directive 2007/60/EC). Flood risk maps can be based on the computation of the Mean Annual Damages indicator (MAD). In this approach, potential damages due to different flood events are estimated for each individual stake over the study area, then averaged over time - using the return period of each flood event - and finally mapped. The issue of uncertainty associated with these flood damage maps should be carefully scrutinized, as they are used to inform the relevant stakeholders or to design flood mitigation measures. Maps of the MAD indicator are based on the combination of hydrological, hydraulic, geographic and economic modeling efforts: as a result, numerous sources of uncertainty arise in their elaboration. Many recent studies describe these various sources of uncertainty (Koivumäki 2010, Bales 2009). Some authors propagate these uncertainties through the flood risk modeling chain and estimate confidence bounds around the resulting flood damage estimates (de Moel 2010). It would now be of great interest to go a step further and to identify which sources of uncertainty account for most of the variability in Mean Annual Damages estimates. We demonstrate the use of variance-based sensitivity analysis to rank sources of uncertainty in flood damage mapping and to quantify their influence on the accuracy of flood damage estimates. We use a quasi
NASA Astrophysics Data System (ADS)
Lee, L. A.; Carslaw, K. S.; Pringle, K. J.
2012-04-01
Global aerosol contributions to radiative forcing (and hence climate change) are persistently subject to large uncertainty in successive Intergovernmental Panel on Climate Change (IPCC) reports (Schimel et al., 1996; Penner et al., 2001; Forster et al., 2007). As such, more complex global aerosol models are being developed to simulate aerosol microphysics in the atmosphere. The uncertainty in global aerosol model estimates is currently estimated by measuring the diversity amongst different models (Textor et al., 2006, 2007; Meehl et al., 2007). The uncertainty at the process level due to the need to parameterise in such models is not yet understood, and it is difficult to know whether the added model complexity comes at a cost of high model uncertainty. In this work the model uncertainty and its sources due to the uncertain parameters are quantified using variance-based sensitivity analysis. Due to the complexity of a global aerosol model, we use Gaussian process emulation with a sufficient experimental design to make such a sensitivity analysis possible. The global aerosol model used here is GLOMAP (Mann et al., 2010) and we quantify the sensitivity of numerous model outputs to 27 expertly elicited uncertain model parameters describing emissions and processes such as growth and removal of aerosol. Using the R package DiceKriging (Roustant et al., 2010) along with the package sensitivity (Pujol, 2008) it has been possible to produce monthly global maps of model sensitivity to the uncertain parameters over the year 2008. Global model outputs estimated by the emulator are shown to be consistent with previously published estimates (Spracklen et al. 2010, Mann et al. 2010) but now we have an associated measure of parameter uncertainty and its sources. It can be seen that globally some parameters have no effect on the model predictions and any further effort in their development may be unnecessary, although a structural error in the model might also be identified. The
1981-02-02
Version: 00 SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections (of standard multigroup cross-section sets) and for secondary energy distributions (SED's) of multigroup scattering matrices.
Sensitivity Analysis and Uncertainty Propagation in a General-Purpose Thermal Analysis Code
Blackwell, Bennie F.; Dowding, Kevin J.
1999-08-04
Methods are discussed for computing the sensitivity of field variables to changes in material properties and initial/boundary condition parameters for heat transfer problems. The method we focus on is termed the ''Sensitivity Equation Method'' (SEM). It involves deriving field equations for the sensitivity coefficients by differentiating the original field equations with respect to the parameters of interest and numerically solving the resulting sensitivity field equations. Uncertainties in the model parameters are then propagated through the computational model using results derived from first-order perturbation theory; this technique is identical to the methodology typically used to propagate experimental uncertainty. Numerical results are presented for the design of an experiment to estimate the thermal conductivity of stainless steel using transient temperature measurements made on prototypical hardware of a companion contact conductance experiment. Comments are made relative to extending the SEM to conjugate heat transfer problems.
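The propagation step described above, pushing parameter uncertainty through sensitivity coefficients via first-order perturbation theory, can be sketched as follows. Central finite differences stand in for the SEM's analytically derived sensitivity fields, and the conduction example and its numerical values are illustrative assumptions, not data from the paper.

```python
import math

def propagate_first_order(f, params, sigmas, h=1e-6):
    """First-order propagation of independent parameter uncertainties:
    sigma_y^2 = sum_i (df/dp_i)^2 * sigma_i^2, with the sensitivity
    coefficients df/dp_i approximated by central finite differences."""
    var_y = 0.0
    for i, (p, s) in enumerate(zip(params, sigmas)):
        up, dn = list(params), list(params)
        up[i], dn[i] = p + h, p - h
        dfdp = (f(up) - f(dn)) / (2.0 * h)
        var_y += (dfdp * s) ** 2
    return math.sqrt(var_y)

# Illustrative steady conduction temperature drop dT = q*L/k across a slab:
# q = 1000 W/m^2, L = 0.01 m, k = 16 W/m/K (stainless-steel-like),
# with a 10% standard deviation on the conductivity k.
sigma_dT = propagate_first_order(lambda p: 1000.0 * 0.01 / p[0], [16.0], [1.6])
```

Because the propagation is linear in the sensitivities, the same machinery applies whether the coefficients come from the SEM, adjoints, or experimental calibration.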
Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh
2010-10-01
The Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and for power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized: (1) subjective judgment in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence and the use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, these issues remain. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the differential of the physical solution with respect to any constant parameter). When parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) quantifying numerical errors: new codes that are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA; (2) quantitative PIRT (Q-PIRT) to reduce subjective judgment and improve efficiency: treat numerical errors as special sensitivities alongside other physical uncertainties, and consider only parameters whose uncertainties have large effects on design criteria; (3) greatly reducing the computational cost of uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
Sensitivity and uncertainty analyses for thermo-hydraulic calculation of research reactor
Hartini, Entin; Andiwijayakusuma, Dinan; Isnaeni, Muh Darwis
2013-09-09
The sensitivity and uncertainty analysis of input parameters for thermohydraulic calculations of a research reactor was successfully carried out in this work. The uncertainty analysis was performed on the input parameters of a sub-channel thermohydraulic calculation using the COOLOD-N code. The input parameters include the radial peaking factor, the increase in bulk coolant temperature, the heat flux factor, and the increase in cladding and fuel-meat temperature for a research reactor using plate fuel elements. Input uncertainties of 1%-4% were used in the nominal power calculation. The bubble detachment parameters were computed for the S ratio (the safety margin against the onset of flow instability), which was used to determine the safety level in line with the design of the 'Reactor Serba Guna-G. A. Siwabessy' (RSG-GA Siwabessy). From the calculation results it was concluded that input uncertainties of more than 3% put the reactor beyond the safety margin of operation.
Sin, Gürkan; Gernaey, Krist V; Neumann, Marc B; van Loosdrecht, Mark C M; Gujer, Willi
2011-01-01
This study demonstrates the usefulness of global sensitivity analysis in wastewater treatment plant (WWTP) design to prioritize sources of uncertainty and quantify their impact on performance criteria. The study, which is performed with the Benchmark Simulation Model no. 1 plant design, complements a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to decompose satisfactorily the variance of plant performance criteria (with R(2) > 0.9) for effluent concentrations, sludge production and energy demand. This high extent of linearity means that the plant performance criteria can be described as linear functions of the model inputs under the defined plant conditions. In effect, the system of coupled ordinary differential equations can be replaced by multivariate linear models, which can be used as surrogate models. The importance ranking based on the sensitivity measures demonstrates that the most influential factors involve ash content and influent inert particulate COD, among others, largely responsible for the uncertainty in predicting sludge production and effluent ammonium concentration. While these results were in agreement with process knowledge, the added value is that the global sensitivity methods can quantify the contribution of the variance of significant parameters; e.g., ash content explains 70% of the variance in sludge production. Further, the importance of formulating appropriate sensitivity analysis scenarios that match the purpose of the model application needs to be highlighted. Overall, global sensitivity analysis proved to be a powerful tool for explaining and quantifying uncertainties as well as providing insight into devising useful ways to reduce uncertainties in plant performance. This information can help engineers design robust WWTPs. PMID:20828785
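The standardized regression coefficients used above can be computed from sampled inputs and outputs. A minimal sketch under simplifying assumptions: for independent sampled inputs, the SRC of each input reduces to its Pearson correlation with the output, which is the shortcut taken here; the toy "plant" model and names are illustrative, not the benchmark model.

```python
import math
import random

def src_independent_inputs(X, y):
    """Standardized regression coefficients beta_i = b_i * sd(x_i) / sd(y).
    For independent inputs this equals the Pearson correlation of x_i
    with y; the sum of squared SRCs then approximates the R^2 of the
    underlying linear surrogate model."""
    def pearson(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        syy = sum((b - my) ** 2 for b in ys)
        return sxy / math.sqrt(sxx * syy)
    return [pearson(col, y) for col in zip(*X)]

# Linear toy model y = 2*x1 + x2 with equal-variance inputs:
# the exact SRCs are 2/sqrt(5) ~ 0.894 and 1/sqrt(5) ~ 0.447.
rng = random.Random(3)
X = [[rng.random(), rng.random()] for _ in range(5000)]
srcs = src_independent_inputs(X, [2.0 * a + b for a, b in X])
```

When the squared SRCs sum to nearly 1 (high R^2), as in the study, the linear surrogate explains essentially all of the output variance.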
How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Razavi, Saman
2016-04-01
Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques are highly computationally demanding if they are to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
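The variogram concept underlying VARS can be illustrated with a one-dimensional directional variogram of the model response. This is only the basic building block, not the full multi-scale VARS framework, and the function names and linear toy response are illustrative assumptions.

```python
def directional_variogram(f, x0, dim, hs, n=200):
    """Empirical variogram of a model response along one parameter:
    gamma(h) = 0.5 * E[(f(x + h*e_dim) - f(x))^2], averaged over n base
    points spread evenly across [0, 1) in that parameter."""
    gammas = []
    for h in hs:
        acc = 0.0
        for j in range(n):
            base = list(x0)
            base[dim] = j / n
            y0 = f(base)
            base[dim] += h
            acc += (f(base) - y0) ** 2
        gammas.append(0.5 * acc / n)
    return gammas

# For a linear response f = 3*x the variogram is exactly 4.5*h^2:
# gamma grows with the scale h, revealing the parameter's influence
# at that scale of the response surface.
g = directional_variogram(lambda x: 3.0 * x[0], [0.0, 0.5], 0, [0.05, 0.1, 0.2])
```

Reading sensitivity off variograms at several scales is what lets VARS bridge derivative-like (small-h) and variance-like (large-h) measures.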
NASA Astrophysics Data System (ADS)
Schunker, H.; Schou, J.; Ball, W. H.
2016-02-01
Aims: We quantify the effect of observational spectroscopic and asteroseismic uncertainties on regularised least squares (RLS) inversions for the radial differential rotation of Sun-like and subgiant stars. Methods: We first solved the forward problem to model rotational splittings plus the observed uncertainties for models of a Sun-like star, HD 52265, and a subgiant star, KIC 7341231. We randomly perturbed the parameters of the stellar models within the uncertainties of the spectroscopic and asteroseismic constraints and used these perturbed stellar models to compute rotational splittings. We experimented with three rotation profiles: solid body rotation, a step function, and a smooth rotation profile decreasing with radius. We then solved the inverse problem to infer the radial differential rotation profile using a RLS inversion and kernels from the best-fit stellar model. We also compared RLS, optimally localised average (OLA) and direct functional fitting inversion techniques. Results: We found that the inversions for Sun-like stars with solar-like radial differential rotation profiles are insensitive to the uncertainties in the stellar models. The uncertainties in the splittings dominate the uncertainties in the inversions and solid body rotation is not excluded. We found that when the rotation rate below the convection zone is increased to six times that of the surface rotation rate the inferred rotation profile excluded solid body rotation. We showed that when we reduced the uncertainties in the splittings by a factor of about 100, the inversion is sensitive to the uncertainties in the stellar model. With the current observational uncertainties, we found that inversions of subgiant stars are sensitive to the uncertainties in the stellar model. Conclusions: Our findings suggest that inversions for the radial differential rotation of subgiant stars would benefit from more tightly constrained stellar models. We conclude that current observational uncertainties
Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes
Gerhard Strydom
2010-06-01
The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.
NASA Astrophysics Data System (ADS)
Djepa, Vera; Badii, Atta
2016-04-01
The sensitivity of the weather and climate system to sea ice thickness (SIT), sea ice draft (SID) and snow depth (SD) in the Arctic is recognized from various studies. A decrease in SIT will affect atmospheric circulation, temperature, precipitation and wind speed in the Arctic and beyond. Ice thermodynamic and dynamic properties depend strongly on sea ice density (ID) and SD. SIT, SID, ID and SD are sensitive to environmental changes in the polar region and impact the climate system. Accurate forecasts of climate change, sea ice mass balance, ocean circulation and sea-atmosphere interactions require long-term records of SIT, SID, SD and ID with error and uncertainty analyses. The SID, SIT, ID and freeboard (F) have been retrieved from the Radar Altimeter (RA) on board ENVISAT and the IceBridge Laser Altimeter (LA) and validated using over 10 years of collocated observations of SID and SD in the Arctic, provided by the European Space Agency (ESA CCI sea ice ECV project). Improved algorithms to retrieve SIT from LA and RA have been derived by applying statistical analysis. The snow depth is obtained from AMSR-E/Aqua and the NASA IceBridge snow depth radar. The sea ice properties of pancake ice have been retrieved from the ENVISAT Synthetic Aperture Radar (ASAR). The uncertainties of the retrieved climate variables have been analysed and the impact of snow depth and sea ice density on retrieved SIT has been estimated. The sensitivity analysis illustrates the impact of uncertainties in the input climate variables (ID and SD) on the accuracy of the retrieved output variables (SIT and SID). The developed methodology of uncertainty and sensitivity analysis is essential for assessing the impact of environmental variables on climate change and for better understanding the relationship between input and output variables. The uncertainty analysis quantifies the uncertainties of the model results and the sensitivity analysis evaluates the contribution of each input variable to
Uncertainty and Sensitivity Analyses of a Two-Parameter Impedance Prediction Model
NASA Technical Reports Server (NTRS)
Jones, M. G.; Parrott, T. L.; Watson, W. R.
2008-01-01
This paper presents comparisons of predicted impedance uncertainty limits derived from Monte-Carlo-type simulations with a Two-Parameter (TP) impedance prediction model and measured impedance uncertainty limits based on multiple tests acquired in NASA Langley test rigs. These predicted and measured impedance uncertainty limits are used to evaluate the effects of simultaneous randomization of each input parameter for the impedance prediction and measurement processes. A sensitivity analysis is then used to further evaluate the TP prediction model by varying its input parameters on an individual basis. The variation imposed on the input parameters is based on measurements conducted with multiple tests in the NASA Langley normal incidence and grazing incidence impedance tubes; thus, the input parameters are assigned uncertainties commensurate with those of the measured data. These same measured data are used with the NASA Langley impedance measurement (eduction) processes to determine the corresponding measured impedance uncertainty limits, such that the predicted and measured impedance uncertainty limits (95% confidence intervals) can be compared. The measured reactance 95% confidence intervals encompass the corresponding predicted reactance confidence intervals over the frequency range of interest. The same is true for the confidence intervals of the measured and predicted resistance at near-resonance frequencies, but the predicted resistance confidence intervals are lower than the measured resistance confidence intervals (no overlap) at frequencies away from resonance. A sensitivity analysis indicates the discharge coefficient uncertainty is the major contributor to uncertainty in the predicted impedances for the perforate-over-honeycomb liner used in this study. This insight regarding the relative importance of each input parameter will be used to guide the design of experiments with test rigs currently being brought on-line at NASA Langley.
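The Monte-Carlo-type randomization described above, perturbing all input parameters simultaneously and reading off 95% confidence limits on the output, can be sketched generically. The normal-perturbation assumption, parameter values, and names below are illustrative; this is not the Two-Parameter impedance model itself.

```python
import random

def monte_carlo_95ci(model, nominal, rel_sd, n=10000, seed=2):
    """Randomize all inputs at once (normal perturbations with the given
    relative standard deviations) and return the empirical 2.5th and
    97.5th percentiles of the model output as a 95% confidence interval."""
    rng = random.Random(seed)
    out = sorted(
        model([v * (1.0 + rng.gauss(0.0, s)) for v, s in zip(nominal, rel_sd)])
        for _ in range(n)
    )
    return out[int(0.025 * n)], out[int(0.975 * n)]

# Toy output y = x1 + x2 with 10% uncertainty on each unit input:
# y ~ N(2, 0.02), so the 95% interval is roughly 2 -/+ 0.28.
lo, hi = monte_carlo_95ci(sum, [1.0, 1.0], [0.1, 0.1])
```

Repeating the run while perturbing one input at a time, as in the paper's sensitivity analysis, isolates which parameter (e.g., a discharge coefficient) dominates the interval width.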
Technology Transfer Automated Retrieval System (TEKTRAN)
For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...
SCIENTIFIC UNCERTAINTIES IN ATMOSPHERIC MERCURY MODELS II: SENSITIVITY ANALYSIS IN THE CONUS DOMAIN
In this study, we present the response of model results to different scientific treatments in an effort to quantify the uncertainties caused by the incomplete understanding of mercury science and by model assumptions in atmospheric mercury models. Two sets of sensitivity simulati...
Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites...
Sufficiently elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The ensuing challenge of examining ever more complex, integrated, higher-ord...
PC-BASED SUPERCOMPUTING FOR UNCERTAINTY AND SENSITIVITY ANALYSIS OF MODELS
Evaluating uncertainty and sensitivity of multimedia environmental models that integrate assessments of air, soil, sediments, groundwater, and surface water is a difficult task. It can be an enormous undertaking even for simple, single-medium models (i.e. groundwater only) descr...
PRACTICAL SENSITIVITY AND UNCERTAINTY ANALYSIS TECHNIQUES APPLIED TO AGRICULTURAL SYSTEMS MODELS
Technology Transfer Automated Retrieval System (TEKTRAN)
We present a practical evaluation framework for analysis of two complex, process-based agricultural system models, WEPP and RZWQM. The evaluation framework combines sensitivity analysis and the uncertainty analysis techniques of first order error analysis (FOA) and Monte Carlo simulation with Latin ...
NASA Astrophysics Data System (ADS)
Dai, H.; Ye, M.
2013-12-01
Groundwater contamination has been a serious health and environmental problem in many areas of the world. Groundwater reactive transport modeling is vital for predicting future contaminant transport. However, these predictions are inherently uncertain, and uncertainty is one of the greatest obstacles in groundwater reactive transport modeling. We propose a Bayesian network approach for quantifying the uncertainty and implement the network for a groundwater reactive transport model for illustration. In the Bayesian network, different uncertainty sources are described as uncertain nodes. All the nodes are characterized by multiple states, representing their uncertainty, in the form of continuous or discrete probability distributions that are propagated to the model endpoint, which is the spatial distribution of contaminant concentrations. After building the Bayesian network, uncertainty quantification is conducted through Monte Carlo simulations to obtain probability distributions of the variables of interest. In this study, uncertainty sources include scenario uncertainty, model uncertainty, parameter uncertainty, and data uncertainty. Variance decomposition is used to quantify the relative contribution of the various sources to predictive uncertainty. Based on the variance decomposition, the Sobol' global sensitivity index is extended from parametric uncertainty to consider model and scenario uncertainty, and individual parameter sensitivity indices are estimated with consideration of multiple models and scenarios. While these new developments are illustrated using a relatively simple groundwater reactive transport model, our methods are applicable to a wide range of models. The results of uncertainty quantification and sensitivity analysis are useful for environmental managers and decision-makers in formulating policies and strategies.
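The Sobol' first-order index underlying the variance decomposition above can be estimated with the standard Monte Carlo "pick-freeze" (Saltelli-type) estimator. This is an illustrative sketch for independent uniform inputs, not the authors' implementation; the function name and test model are hypothetical.

```python
import numpy as np

def sobol_first_order(model, n_vars, n_samples, rng):
    """Saltelli pick-freeze Monte Carlo estimator for first-order
    Sobol' indices of a model with independent U(0,1) inputs."""
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    f_A, f_B = model(A), model(B)
    total_var = np.var(np.concatenate([f_A, f_B]), ddof=1)
    indices = np.empty(n_vars)
    for i in range(n_vars):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]  # freeze all inputs except column i
        indices[i] = np.mean(f_B * (model(AB_i) - f_A)) / total_var
    return indices
```

For a linear model Y = 4*X1 + 2*X2 with independent U(0,1) inputs, the analytic indices are 16/20 = 0.8 and 4/20 = 0.2, which the estimator recovers to within sampling error.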
NASA Astrophysics Data System (ADS)
Wolfsberg, A.; Kang, Q.; Li, C.; Ruskauff, G.; Bhark, E.; Freeman, E.; Prothro, L.; Drellack, S.
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The
pH-Sensitive Polymers for Improving Reservoir Sweep and Conformance Control in Chemical Flooding
Mukul Sharma; Steven Bryant; Chun Huh
2008-03-31
viscoelastic behavior as functions of pH; shear rate; polymer concentration; salinity, including divalent ion effects; polymer molecular weight; and degree of hydrolysis. A comprehensive rheological model was developed for HPAM solution rheology in terms of: shear rate; pH; polymer concentration; and salinity, so that the spatial and temporal changes in viscosity during the polymer flow in the reservoir can be accurately modeled. A series of acid coreflood experiments were conducted to understand the geochemical reactions relevant for both the near-wellbore injection profile control and for conformance control applications. These experiments showed that the use of hydrochloric acid as a pre-flush is not viable because of its high reaction rate with the rock. The use of citric acid as a pre-flush was found to be quite effective. This weak acid has a slow rate of reaction with the rock and can buffer the pH to below 3.5 for extended periods of time. With the citric acid pre-flush the polymer could be efficiently propagated through the core in a low-pH environment, i.e., at a low viscosity. The transport of various HPAM solutions was studied in sandstones, in terms of permeability reduction, mobility reduction, adsorption and inaccessible pore volume with different process variables: injection pH, polymer concentration, polymer molecular weight, salinity, degree of hydrolysis, and flow rate. Measurements of polymer effluent profiles and tracer tests show that the polymer retention increases at the lower pH. A new simulation capability to model the deep-penetrating mobility control or conformance control using pH-sensitive polymer was developed. The core flood acid injection experiments were history matched to estimate geochemical reaction rates. Preliminary scale-up simulations employing linear and radial geometry floods in 2-layer reservoir models were conducted.
It is clearly shown that the injection rate of pH-sensitive polymer solutions can be significantly increased by injecting
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
NASA Astrophysics Data System (ADS)
Pastore, Giovanni; Swiler, L. P.; Hales, J. D.; Novascone, S. R.; Perez, D. M.; Spencer, B. W.; Luzzi, L.; Van Uffelen, P.; Williamson, R. L.
2015-01-01
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code with a recently implemented physics-based model for fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information in the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior predictions with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, significantly higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
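The sensitivity study above varies model parameters within ranges representative of their relative uncertainties and collects the spread of code outputs. A minimal sketch of that sampling scheme, with a hypothetical stand-in for the fuel performance model (the real BISON/DAKOTA workflow is far more involved):

```python
import numpy as np

def vary_within_ranges(model, nominal, rel_unc, n_runs, rng):
    """Sample each parameter uniformly within nominal*(1 +/- rel_unc)
    and collect the model output for every sampled parameter set."""
    nominal = np.asarray(nominal, float)
    rel_unc = np.asarray(rel_unc, float)
    lo, hi = nominal * (1 - rel_unc), nominal * (1 + rel_unc)
    params = rng.uniform(lo, hi, size=(n_runs, nominal.size))
    return np.array([model(p) for p in params])
```

The min/max (or percentiles) of the returned outputs then quantify how the stated parameter uncertainties propagate to quantities such as fission gas release.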
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; Novascone, Stephen R.; Perez, Danielle M.; Spencer, Benjamin W.; Luzzi, Lelio; Uffelen, Paul Van; Williamson, Richard L.
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
James, Scott Carlton
2004-08-01
Given pre-existing Groundwater Modeling System (GMS) models of the Horonobe Underground Research Laboratory (URL) at both the regional and site scales, this work performs an example uncertainty analysis for performance assessment (PA) applications. After a general overview of uncertainty and sensitivity analysis techniques, the existing GMS site-scale model is converted to a PA model of the steady-state conditions expected after URL closure. This is done to examine the impact of uncertainty in site-specific data in conjunction with conceptual model uncertainty regarding the location of the Oomagari Fault. In addition, a quantitative analysis of the ratio of dispersive to advective forces, the F-ratio, is performed for stochastic realizations of each conceptual model. All analyses indicate that accurate characterization of the Oomagari Fault with respect to both location and hydraulic conductivity is critical to PA calculations. This work defines and outlines typical uncertainty and sensitivity analysis procedures and demonstrates them with example PA calculations relevant to the Horonobe URL.
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
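The first- and second-order moment matching described above can be sketched with finite-difference sensitivity derivatives: for independent normal inputs, the output variance is approximated from the gradient, and the output mean picks up a second-order correction from the diagonal Hessian. A minimal sketch under those assumptions (the paper uses analytic sensitivity derivatives from the CFD code, not finite differences; names here are illustrative):

```python
import numpy as np

def moment_match(f, mu, sigma, h=1e-4):
    """Approximate the mean and variance of f(X) for independent
    normal inputs X_i ~ N(mu_i, sigma_i^2) using first-order
    variance propagation and a second-order mean correction."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    n = mu.size
    f0 = f(mu)
    grad = np.empty(n)
    hess_diag = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        fp, fm = f(mu + e), f(mu - e)
        grad[i] = (fp - fm) / (2 * h)            # central difference
        hess_diag[i] = (fp - 2 * f0 + fm) / h**2  # pure second derivative
    mean = f0 + 0.5 * np.sum(hess_diag * sigma**2)   # 2nd-order mean
    var = np.sum((grad * sigma)**2)                  # 1st-order variance
    return mean, var
```

For f(x) = x^2 with x ~ N(2, 0.1^2), the exact mean is mu^2 + sigma^2 = 4.01, which the second-order correction reproduces; the first-order variance is (2*mu*sigma)^2 = 0.16.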
James, Scott Carlton; Zimmerman, Dean Anthony
2003-10-01
Incorporating results from a previously developed finite element model, an uncertainty and parameter sensitivity analysis was conducted using preliminary site-specific data from Horonobe, Japan (data available from five boreholes as of 2003). Latin Hypercube Sampling was used to draw random parameter values from the site-specific measured, or approximated, physicochemical uncertainty distributions. Using pathlengths and groundwater velocities extracted from the three-dimensional, finite element flow and particle tracking model, breakthrough curves for multiple realizations were calculated with the semi-analytical, one-dimensional, multirate transport code, STAMMT-L. A stepwise linear regression analysis using the 5, 50, and 95% breakthrough times as the dependent variables and LHS sampled site physicochemical parameters as the independent variables was used to perform a sensitivity analysis. Results indicate that the distribution coefficients and hydraulic conductivities are the parameters responsible for most of the variation among simulated breakthrough times. This suggests that researchers and data collectors at the Horonobe site should focus on accurately assessing these parameters and quantifying their uncertainty. Because the Horonobe Underground Research Laboratory is in an early phase of its development, this work should be considered as a first step toward an integration of uncertainty and sensitivity analyses with decision analysis.
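Latin Hypercube Sampling, used above to draw parameter values from the site-specific uncertainty distributions, stratifies each dimension into equal-probability bins and places exactly one sample per bin. A minimal sketch producing LHS points on the unit hypercube (transforming to the measured distributions via inverse CDFs would be a further step; the function name is illustrative):

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """One stratified sample per equal-probability bin in each
    dimension; bin order is permuted independently per variable."""
    jitter = rng.random((n_samples, n_vars))            # position within bin
    strata = np.array([rng.permutation(n_samples)
                       for _ in range(n_vars)]).T       # bin index per sample
    return (strata + jitter) / n_samples
```

Each column of the result covers all n_samples strata exactly once, which is what gives LHS its better space-filling than simple random sampling at the same cost.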
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case.
Eslick, John C.; Ng, Brenda; Gao, Qianwen; Tong, Charles H.; Sahinidis, Nikolaos V.; Miller, David C.
2014-12-31
Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
NASA Astrophysics Data System (ADS)
Sokolov, A. P.; Monier, E.; Forest, C. E.
2013-12-01
Climate sensitivity and the rate of heat uptake by the deep ocean are the two main characteristics of the climate system that define its response to a prescribed external forcing. We study the relative contributions of the uncertainty in these two characteristics by means of numerical simulations with the MIT Earth System Model (MESM) of intermediate complexity. The MESM consists of a 2D (zonally averaged) atmospheric model coupled to an anomaly-diffusing ocean model. Probability distributions for climate sensitivity and the rate of oceanic heat uptake are obtained using available data on radiative forcing and temperature changes over the 20th century. The results from three 400-member ensembles of long-term (years 1860 to 3000) climate simulations for the IPCC RCP6.0 forcing scenario will be presented. The values of climate sensitivity and rate of oceanic heat uptake used in the first ensemble were chosen by sampling their joint probability distribution. In the other two ensembles, uncertainty in only one characteristic was taken into account, while the median value was used for the other. Results show that the contribution of the uncertainty in climate sensitivity and the rate of deep-ocean heat uptake to the overall uncertainty in projected surface warming and sea level rise is time dependent. The contribution of the uncertainty in the rate of heat uptake to uncertainty in the projected surface air temperature increase is similar to that of the uncertainty in climate sensitivity while forcing is increasing, but becomes significantly smaller after forcing is stabilized. The magnitude of surface warming at the end of the 30th century is defined almost exclusively by the climate sensitivity distribution. In contrast, uncertainty in the heat uptake has a noticeable effect on projected sea level rise for the whole period of simulations.
Sensitivities and Uncertainties Related to Numerics and Building Features in Urban Modeling
Joseph III, Robert Anthony; Slater, Charles O; Evans, Thomas M; Mosher, Scott W; Johnson, Jeffrey O
2011-01-01
Oak Ridge National Laboratory (ORNL) has been engaged in the development and testing of a computational system that would use a grid of activation foil detectors to provide postdetonation forensic information from a nuclear device detonation. ORNL has developed a high-performance, three-dimensional (3-D) deterministic radiation transport code called Denovo. Denovo solves the multigroup discrete ordinates (SN) equations and can output 3-D data in a platform-independent format that can be efficiently analyzed using parallel, high-performance visualization tools. To evaluate the sensitivities and uncertainties associated with the deterministic computational method numerics, a numerical study on the New York City Times Square model was conducted using Denovo. In particular, the sensitivities and uncertainties associated with various components of the calculational method were systematically investigated, including (a) the Legendre polynomial expansion order of the scattering cross sections, (b) the angular quadrature, (c) multigroup energy binning, (d) spatial mesh sizes, (e) the material compositions of the building models, (f) the composition of the foundations upon which the buildings rest (e.g., ground, concrete, or asphalt), and (g) the amount of detail included in the building models. Although Denovo may calculate the idealized model well, there may be uncertainty in the results because of slight departures of the above-named parameters from those used in the idealized calculations. Fluxes and activities at selected locations from perturbed calculations are compared with corresponding values from the idealized or base case to determine the sensitivities associated with specified parameter changes. Results indicate that uncertainties related to numerics can be controlled by using higher fidelity models, but more work is needed to control the uncertainties related to the model.
Uncertainty and Sensitivity analysis of a physically-based landslide model
NASA Astrophysics Data System (ADS)
Yatheendradas, Soni; Kirschbaum, Dalia
2015-04-01
Rainfall-induced landslides are hazardous to life and property. Rainfall data sources such as satellite remote sensors, combined with physically-based models of landslide initiation, are a potentially economical solution for anticipating and providing early warning of possible landslide activity. In this work, we explore the output uncertainty of the physically-based USGS model TRIGRS (Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability) under both an a priori model parameter specification scenario and a model calibration scenario using a powerful stochastic optimization algorithm. We study a set of 50+ historic landslides in Macon County, North Carolina, as an example of robust regional analysis. We then conduct a robust multivariate sensitivity analysis of the modeled output to various factors including rainfall forcing, initial and boundary conditions, and model parameters including topographic slope. Satellite rainfall uncertainty distributions are prescribed based on stochastic regressions to benchmark rain values at each location. Information about the most influential factors from sensitivity analysis will help to preferentially direct field work efforts towards associated observations. This will contribute to reducing output uncertainty in future modeling efforts. We also show how we can conveniently reduce model complexity by omitting negligibly influential factors while maintaining required levels of predictive accuracy and uncertainty.
Sensitivity and first-step uncertainty analyses for the preferential flow model MACRO.
Dubus, Igor G; Brown, Colin D
2002-01-01
Sensitivity analyses for the preferential flow model MACRO were carried out using one-at-a-time and Monte Carlo sampling approaches. Four different scenarios were generated by simulating leaching to depth of two hypothetical pesticides in a sandy loam and a more structured clay loam soil. Sensitivity of the model was assessed using the predictions for accumulated water percolated at a 1-m depth and accumulated pesticide losses in percolation. Results for simulated percolation were similar for the two soils. Predictions of water volumes percolated were found to be only marginally affected by changes in input parameters and the most influential parameter was the water content defining the boundary between micropores and macropores in this dual-porosity model. In contrast, predictions of pesticide losses were found to be dependent on the scenarios considered and to be significantly affected by variations in input parameters. In most scenarios, predictions for pesticide losses by MACRO were most influenced by parameters related to sorption and degradation. Under specific circumstances, pesticide losses can be largely affected by changes in hydrological properties of the soil. Since parameters were varied within ranges that approximated their uncertainty, a first-step assessment of uncertainty for the predictions of pesticide losses was possible. Large uncertainties in the predictions were reported, although these are likely to have been overestimated by considering a large number of input parameters in the exercise. It appears desirable that a probabilistic framework accounting for uncertainty is integrated into the estimation of pesticide exposure for regulatory purposes. PMID:11837426
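The one-at-a-time (OAT) approach used above varies each input across its range while holding all others at base values, ranking parameters by the resulting output swing. A minimal sketch with a hypothetical toy model in place of MACRO:

```python
import numpy as np

def one_at_a_time(model, base, spans):
    """Vary each input across its (lo, hi) span while all others stay
    at base values; return the output swing per parameter index."""
    base = np.asarray(base, float)
    swings = {}
    for i, (lo, hi) in enumerate(spans):
        outs = []
        for v in (lo, hi):
            x = base.copy()
            x[i] = v
            outs.append(model(x))
        swings[i] = max(outs) - min(outs)
    return swings
```

OAT is cheap but explores no parameter interactions, which is why the abstract complements it with Monte Carlo sampling over the joint uncertainty ranges.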
Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event
Strydom, Gerhard
2013-01-01
The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameters variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
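Tolerance intervals with stated coverage and confidence, as computed above, drive the required number of code runs via Wilks-type formulas. A sketch of the first-order, one-sided case (the smallest n such that the sample maximum bounds the coverage quantile with the given confidence); the benchmark's 800 runs span several data sets and two-sided intervals, so this is illustrative only:

```python
import math

def wilks_one_sided(coverage, confidence):
    """Smallest n of random runs such that the sample maximum bounds
    the `coverage` quantile with probability >= `confidence`:
    solve 1 - coverage**n >= confidence for integer n."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))
```

The classic 95%/95% one-sided requirement yields 59 runs, and 99%/95% yields 299, which is why sampling-based reactor safety analyses can get away with far fewer runs than a full Monte Carlo quantile estimate would need.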
NASA Astrophysics Data System (ADS)
Singh, R.; Achutarao, K. M.
2014-12-01
Reliable future climate information is a necessary requirement for the scientific and policy-making community. Uncertainty due to various sources affects the accuracy of climate change projections at different scales, and becomes even more complex at the regional scale. This study is an attempt to unfold the levels of uncertainty in future climate projections over the Indian region to add value to the information on mean changes reported in Chaturvedi et al. (Curr. Sci., 2012). We examine model projections of temperature and precipitation using output from the CMIP5 database. Using the 'Reliability Ensemble Averaging' (REA; Giorgi and Mearns, J. Climate, 2002) and 'Upgraded REA' (Xu et al., Clim. Res., 2010) methods with some modifications, we examine the uncertainty in projections for the annual, Indian summer monsoon (JJA), and winter (DJF) seasons under the RCP4.5 and RCP8.5 scenarios. Both methods apply the principle of weighting model-based projections by objective model performance criteria, such as biases (both univariate and multivariate) in simulating past climate and measures of simulated variability. The sensitivity to these criteria is tested by varying the metrics and the weights assigned to them. Sensitivity of the metrics to observational uncertainty is also examined at regional, sub-regional, and grid-point levels.
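The core of the REA-style weighting described above is an ensemble mean in which each model's projection is weighted by a reliability criterion such as the inverse of its historical bias. A heavily simplified sketch (the full REA method of Giorgi and Mearns also includes a convergence criterion and iterative weighting, omitted here; names are illustrative):

```python
import numpy as np

def rea_weighted_mean(projections, biases, eps=1e-12):
    """Weight each model's projection by the inverse of its absolute
    historical bias, then return the normalized weighted average."""
    w = 1.0 / (np.abs(np.asarray(biases, float)) + eps)
    w /= w.sum()
    return float(np.sum(w * np.asarray(projections, float)))
```

Models with smaller past-climate biases thus pull the ensemble estimate toward their projections, which is the sensitivity being probed when the metrics and weights are varied.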
NASA Astrophysics Data System (ADS)
Silva, J. M. N.; Carreiras, J. M. B.; Rosa, I.; Pereira, J. M. C.
2011-10-01
Annual emissions of CO2, CH4, CO, N2O, and NOx from biomass burning in shifting cultivation systems in tropical Asia, Africa, and America were estimated at national and continental levels as the product of area burned, aboveground biomass, combustion completeness, and emission factor. The total area of shifting cultivation in each country was derived from the Global Land Cover 2000 map, while the area cleared and burned annually was obtained by multiplying the total area by the rotation cycle of shifting cultivation, calculated using cropping and fallow lengths reported in the literature. Aboveground biomass accumulation was estimated as a function of the duration and mean temperature of the growing season, soil texture type, and length of the fallow period. The uncertainty associated with each model variable was estimated, and an uncertainty and sensitivity analysis of greenhouse gas estimates was performed with Monte Carlo and variance decomposition techniques. Our results reveal large uncertainty in emission estimates for all five gases. In the case of CO2, mean (standard deviation) emissions from shifting cultivation in Asia, Africa, and America were estimated at 241 (132), 205 (139), and 295 (197) Tg yr-1, respectively. Combustion completeness and emission factors were the model inputs that contributed the most to the uncertainty of estimates. Our mean estimates are lower than the literature values for atmospheric emission from biomass burning in shifting cultivation systems. Only mean values could be compared since other studies do not provide any measure of uncertainty.
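Because the emission estimate is a product of area burned, aboveground biomass, combustion completeness, and emission factor, Monte Carlo propagation is straightforward. A numpy sketch with entirely hypothetical distributions and units (not the paper's values), plus a crude rank-correlation variance decomposition:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Entirely hypothetical inputs for one region (units illustrative only):
area    = rng.normal(12.0, 2.0, N).clip(min=0)     # area burned annually
biomass = rng.normal(45.0, 10.0, N).clip(min=0)    # aboveground biomass
cc      = rng.uniform(0.2, 0.5, N)                 # combustion completeness
ef      = rng.normal(1580.0, 90.0, N).clip(min=0)  # emission factor

emission = area * biomass * cc * ef * 1e-3         # product model
print(f"mean = {emission.mean():.0f}, sd = {emission.std():.0f}")

# Crude decomposition: squared rank correlation of each input with the
# output indicates its share of the variance for this monotone model.
rank = lambda x: x.argsort().argsort()
for name, x in [("area", area), ("biomass", biomass), ("cc", cc), ("ef", ef)]:
    r = np.corrcoef(rank(x), rank(emission))[0, 1]
    print(f"{name:8s} rank-r^2 = {r ** 2:.2f}")
```

With these hypothetical inputs the widest relative distribution (combustion completeness) dominates the output variance, the same qualitative conclusion the study reaches with formal variance decomposition.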
Quantitative uncertainty and sensitivity analysis of a PWR control rod ejection accident
Pasichnyk, I.; Perin, Y.; Velkov, K.
2013-07-01
The paper describes the results of the quantitative Uncertainty and Sensitivity (U/S) Analysis of a Rod Ejection Accident (REA) which is simulated by the coupled system code ATHLET-QUABOX/CUBBOX applying the GRS tool for U/S analysis SUSA/XSUSA. For the present study, a UOX/MOX mixed core loading based on a generic PWR is modeled. A control rod ejection is calculated for two reactor states: Hot Zero Power (HZP) and 30% of nominal power. The worst cases for the rod ejection are determined by steady-state neutronic simulations taking into account the maximum reactivity insertion in the system and the power peaking factor. For the U/S analysis 378 uncertain parameters are identified and quantified (thermal-hydraulic initial and boundary conditions, input parameters and variations of the two-group cross sections). Results for uncertainty and sensitivity analysis are presented for safety important global and local parameters. (authors)
Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.
2015-01-01
The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow the sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. The building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g., sandy soil as compared to clayey soil, and “shallow” sources as compared to “deep” sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051
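First-order Sobol indices of the kind used in this study can be estimated with the Saltelli pick-and-freeze scheme. A generic sketch on a toy function (not the J&E model) in which one input dominates, mimicking the air-exchange-rate result:

```python
import numpy as np

def sobol_first_order(model, bounds, n=2 ** 12, seed=0):
    """First-order Sobol indices via the Saltelli A/B/AB_i estimator.
    A generic sketch, not the implementation used in the study."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    A = rng.uniform(lo, hi, (n, d))
    B = rng.uniform(lo, hi, (n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all inputs except the i-th
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy stand-in for an attenuation model: output dominated by x0 (think
# air exchange rate), weakly dependent on x1 and x2.
toy = lambda X: 1.0 / X[:, 0] + 0.1 * X[:, 1] + 0.01 * X[:, 2]
S = sobol_first_order(toy, [(0.1, 1.0), (0.0, 1.0), (0.0, 1.0)])
print(S.round(2))
```

The dominant input receives a first-order index near one, while the weak inputs receive indices near zero.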
Nasif, Hesham; Neyama, Atsushi
2003-02-26
This paper presents the results of an uncertainty and sensitivity analysis for the performance of the different barriers of high-level radioactive waste repositories. SUA is a tool for performing uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System (WIRS) model, which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through the repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the output values of the maximum release rate in the form of time series, together with the values of the input variables for a set of different simulations (runs), which are realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influences on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), the diffusion depth and the water flow rate in the excavation-disturbed zone (EDZ).
Li, W B; Hoeschen, C
2010-01-01
Mathematical models for the kinetics of radiopharmaceuticals in humans were developed and are used to estimate the radiation absorbed dose for patients in nuclear medicine by the International Commission on Radiological Protection and the Medical Internal Radiation Dose (MIRD) Committee. However, because the residence times used were derived from different subjects, some even with different ethnic backgrounds, a large variation in the model parameters propagates into a high uncertainty in the dose estimates. In this work, a method was developed for analysing the uncertainty and sensitivity of biokinetic models that are used to calculate the residence times. The biokinetic model of (18)F-FDG (FDG) developed by the MIRD Committee was analysed with this method. The sources of uncertainty of all model parameters were evaluated based on the experiments. The Latin hypercube sampling technique was used to sample the parameters for model input. Kinetic modelling of FDG in humans was performed. Sensitivity of the model parameters was indicated by combining the model input and output, using regression and partial correlation analysis. The transfer rate parameter from plasma to the fast component of other tissue is the parameter with the greatest influence on the residence time of plasma. Optimisation of biokinetic data acquisition in clinical practice by exploiting the sensitivity of model parameters obtained in this study is discussed. PMID:20185457
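Latin hypercube sampling followed by rank-correlation screening, as used here, can be sketched generically. The two-parameter "residence time" model below is purely hypothetical, chosen only so that one rate constant dominates:

```python
import numpy as np

def lhs(n, d, rng):
    """Latin hypercube sample on [0, 1]^d: exactly one point per stratum
    in each dimension."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.uniform(size=(n, d))) / n

rng = np.random.default_rng(1)
n = 2000
U = lhs(n, 2, rng)
# Hypothetical two-compartment washout: residence time depends strongly
# on the plasma-to-tissue rate k1, weakly on the return rate k2.
k1 = 0.5 + 1.5 * U[:, 0]
k2 = 0.01 + 0.04 * U[:, 1]
residence = 1.0 / k1 + 0.1 * k2

def spearman(x, y):
    rx, ry = x.argsort().argsort(), y.argsort().argsort()
    return np.corrcoef(rx, ry)[0, 1]

print(f"rho(k1) = {spearman(k1, residence):+.2f}, "
      f"rho(k2) = {spearman(k2, residence):+.2f}")
```

The rank correlations recover the dominance of k1; a full analysis like the study's would use partial correlation to control for the other sampled inputs.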
NASA Astrophysics Data System (ADS)
Bonadonna, Costanza; Biass, Sébastien; Costa, Antonio
2015-04-01
Despite the recent advances in geophysical monitoring and real-time quantitative observations of explosive volcanic eruptions, the characterization of tephra deposits remains one of the largest sources of information on Eruption Source Parameters (ESPs) (i.e. plume height, erupted volume/mass, Mass Eruption Rate - MER, eruption duration, Total Grain-Size Distribution - TGSD). ESPs are crucial for the characterization of volcanic systems and for the compilation of comprehensive hazard scenarios, but are naturally associated with various degrees of uncertainty that are traditionally not well quantified. Recent studies have highlighted the uncertainties associated with the estimation of ESPs, mostly related to: i) the intrinsic variability of the natural system, ii) observational error and iii) the strategies used to determine physical parameters. Here we review recent studies focused on the characterization of these uncertainties, and we present a sensitivity analysis for the determination of ESPs and a systematic investigation to quantify the propagation of uncertainty applied to two case studies. In particular, we highlight the dependence of ESPs on specific observations used as input parameters (i.e. diameter of the largest clasts, thickness measurements, area of isopach contours, deposit density, downwind and crosswind range of isopleth maps, and empirical constants and wind speed for the determination of MER). The highest uncertainty is associated with the estimation of MER and eruption duration, and is related to the determination of the crosswind range of isopleth maps and the empirical constants used in the empirical parameterization relating MER and plume height. Given the exponential nature of the relation between MER and plume height, the propagation of uncertainty is not symmetrical, and both an underestimation of the empirical constant and an overestimation of plume height have the highest impact on the final outcome. A ± 20% uncertainty on thickness
A guide to uncertainty quantification and sensitivity analysis for cardiovascular applications.
Eck, Vinzenz Gregor; Donders, Wouter Paulus; Sturdy, Jacob; Feinberg, Jonathan; Delhaas, Tammo; Hellevik, Leif Rune; Huberts, Wouter
2016-08-01
As we shift from population-based medicine towards a more precise patient-specific regime guided by predictions of verified and well-established cardiovascular models, an urgent question arises: how sensitive are the model predictions to errors and uncertainties in the model inputs? To make our models suitable for clinical decision-making, precise knowledge of prediction reliability is of paramount importance. Efficient and practical methods for uncertainty quantification (UQ) and sensitivity analysis (SA) are therefore essential. In this work, we explain the concepts of global UQ and global, variance-based SA along with two often-used methods that are applicable to any model without requiring model implementation changes: Monte Carlo (MC) and polynomial chaos (PC). Furthermore, we propose a guide for UQ and SA according to a six-step procedure and demonstrate it for two clinically relevant cardiovascular models: model-based estimation of the fractional flow reserve (FFR) and model-based estimation of the total arterial compliance (CT). Both MC and PC produce identical results and may be used interchangeably to identify the most significant model inputs with respect to uncertainty in model predictions of FFR and CT. However, PC is more cost-efficient as it requires an order of magnitude fewer model evaluations than MC. Additionally, we demonstrate that targeted reduction of uncertainty in the most significant model inputs reduces the uncertainty in the model predictions efficiently. In conclusion, this article offers a practical guide to UQ and SA to help move the clinical application of mathematical models forward. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26475178
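The MC-versus-PC comparison can be reproduced on a one-input toy problem: non-intrusive polynomial chaos fits orthogonal-polynomial coefficients to a handful of model runs, then reads the mean and variance directly off the coefficients. A sketch in which the exponential "model" is a stand-in, not a cardiovascular model:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
f = lambda z: np.exp(0.3 * z)  # hypothetical model with one N(0,1) input

# Monte Carlo reference: many model evaluations.
mc_mean = f(rng.standard_normal(200_000)).mean()

# Non-intrusive polynomial chaos: fit probabilists'-Hermite coefficients
# from only 64 evaluations, then read statistics off the coefficients.
deg = 6
z_tr = rng.standard_normal(64)
H = np.polynomial.hermite_e.hermevander(z_tr, deg)
coef, *_ = np.linalg.lstsq(H, f(z_tr), rcond=None)
norms = np.array([math.factorial(k) for k in range(deg + 1)])  # E[He_k^2] = k!
pc_mean = coef[0]                            # E[He_k] = 0 for k >= 1
pc_var = np.sum(coef[1:] ** 2 * norms[1:])   # orthogonality of He_k

exact_mean = np.exp(0.3 ** 2 / 2)            # lognormal moment, for checking
print(f"MC: {mc_mean:.4f}  PC: {pc_mean:.4f}  exact: {exact_mean:.4f}")
```

The PC estimate matches the Monte Carlo estimate while using three orders of magnitude fewer model evaluations, which is the cost argument the article makes.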
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies, where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with output from 400 model runs. The emulator is used to predict sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and to the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards determining the values of these most influential parameters more accurately, through observational studies or by improving existing parameterizations in the sea ice model.
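The surrogate strategy described above (train on a few hundred runs, validate on held-out runs, then interrogate the cheap emulator) can be sketched with a quadratic response surface. The three-input "expensive model" is hypothetical, standing in for a simulator that is too costly to sample densely:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in for an expensive simulator: 3 inputs, smooth response.
def expensive_model(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

X = rng.uniform(-1, 1, (400, 3))  # 400 training runs, as in the study
y = expensive_model(X)

def features(X):
    """Quadratic response-surface basis: 1, linear and second-order terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]
    return np.stack(cols, axis=1)

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
emulate = lambda X: features(X) @ beta

# Hold-out check: the emulator should reproduce the model closely.
Xt = rng.uniform(-1, 1, (1000, 3))
r2 = 1 - np.var(expensive_model(Xt) - emulate(Xt)) / np.var(expensive_model(Xt))
print(f"hold-out R^2 = {r2:.3f}")
```

Once validated, the emulator can be evaluated millions of times for Sobol index estimation at negligible cost; the study uses a more flexible emulator than this polynomial sketch.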
Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.
Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A
2013-02-01
The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
Is the Smagorinsky coefficient sensitive to uncertainty in the form of the energy spectrum?
NASA Astrophysics Data System (ADS)
Meldi, M.; Lucor, D.; Sagaut, P.
2011-12-01
We investigate the influence of uncertainties in the shape of the energy spectrum on the Smagorinsky ["General circulation experiments with the primitive equations. I: The basic experiment," Mon. Weather Rev. 91(3), 99 (1963)] subgrid scale model constant CS: the analysis is carried out by a stochastic approach based on generalized polynomial chaos. The free parameters in the considered energy spectrum functional forms are modeled as random variables over bounded supports: two models of the energy spectrum are investigated, namely, the functional form proposed by Pope [Turbulent Flows (Cambridge University Press, Cambridge, 2000)] and by Meyers and Meneveau ["A functional form for the energy spectrum parametrizing bottleneck and intermittency effects," Phys. Fluids 20(6), 065109 (2008)]. The Smagorinsky model coefficient, computed from the algebraic relation presented in a recent work by Meyers and Sagaut ["On the model coefficients for the standard and the variational multi-scale Smagorinsky model," J. Fluid Mech. 569, 287 (2006)], is considered as a stochastic process and is described by numerical tools stemming from probability theory. The uncertainties are introduced in the free parameters shaping the energy spectrum at the large and the small scales, respectively. The predicted model constant is weakly sensitive to the shape of the energy spectrum when large-scale uncertainty is considered: if the large-eddy simulation (LES) filter cut is performed in the inertial range, a significant probability to recover values lower in magnitude than the asymptotic Lilly-Smagorinsky model constant is recovered. Furthermore, the predicted model constant occurrences cluster in a compact range of values: the corresponding probability density function rapidly drops to zero approaching the extreme values of the range, which show a significant sensitivity to the LES filter width. The sensitivity of the model constant to uncertainties propagated in the
NASA Astrophysics Data System (ADS)
Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin
2016-03-01
Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models because of the many uncertainties in inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of the following four model output responses: the total amount of water content (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in the wheat-maize crop rotation system. The uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics in the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were relatively lower and more uniform compared with likelihood functions composed of an individual calibration criterion. This
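The GLUE procedure itself is compact: sample parameter sets, score each with an informal likelihood, retain the "behavioral" sets above a cutoff, and weight predictions by likelihood. A self-contained sketch on a synthetic storage-depletion model (not RZWQM2; the likelihood form and cutoff are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def model(theta, t):
    """Hypothetical stand-in for the simulator: exponential storage depletion."""
    s0, k = theta
    return s0 * np.exp(-k * t)

t = np.linspace(0, 10, 20)
obs = model((100.0, 0.3), t) + rng.normal(0, 2.0, t.size)  # synthetic data

# GLUE: sample parameter sets, score with an informal Gaussian-type
# likelihood, keep behavioral sets, weight predictions by likelihood.
theta = np.column_stack([rng.uniform(50, 150, 5000),
                         rng.uniform(0.05, 1.0, 5000)])
sims = np.array([model(th, t) for th in theta])
like = np.exp(-0.5 * ((sims - obs) ** 2).sum(axis=1) / (2.0 ** 2 * t.size))
behavioral = like > 0.01 * like.max()        # informal cutoff
w = like[behavioral] / like[behavioral].sum()
pred = (w[:, None] * sims[behavioral]).sum(axis=0)

print(behavioral.sum(), "behavioral sets;",
      f"RMSE of weighted prediction = {np.sqrt(((pred - obs) ** 2).mean()):.2f}")
```

The spread of the behavioral sets, rather than a single best fit, is what GLUE reports as predictive uncertainty.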
Risk-sensitive optimal feedback control accounts for sensorimotor behavior under uncertainty.
Nagengast, Arne J; Braun, Daniel A; Wolpert, Daniel M
2010-01-01
Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models. PMID:20657657
1996-11-01
The objectives of the research are to evaluate and calculate the sensitivities and uncertainties that exist in model calculations of atmospheric ozone levels as the result of uncertainties associated with the chemical kinetics and photolysis parameterizations used in the mechanisms and codes. Photochemistry and heterogeneous kinetics are to be included. SRI's approach uses the Chemkin/Senkin codes from Sandia National Laboratories, which are public software incorporating the latest algorithms for the direct, efficient calculation of the sensitivity coefficients. These codes provide full sets of concentration derivatives with respect to individual rate constants, temperature, and other species concentrations. Full zero-dimensional, time-resolved calculations may thus be performed over a matrix of initial conditions (temperature, pressure, concentration and radiation) representative of the range of stratospheric and tropospheric environments. Conditions, parameters, and concentrations are initially obtained from two-dimensional model outputs obtained from colleagues at Lawrence Livermore National Laboratory (LLNL). These results are used to mathematically propagate our expert evaluation of the errors associated with individual rate constants to derive uncertainty estimates for the model calculations.
NASA Astrophysics Data System (ADS)
Kavetski, D.; Clark, M. P.; Fenicia, F.
2011-12-01
Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large-scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc., are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
NASA Astrophysics Data System (ADS)
Scott, M. J.; Daly, D.; McJeon, H.; Zhou, Y.; Clarke, L.; Rice, J.; Whitney, P.; Kim, S.
2012-12-01
Residential and commercial buildings are a major source of energy consumption and carbon dioxide emissions in the United States, accounting for 41% of energy consumption and 40% of carbon emissions in 2011. Integrated assessment models (IAMs) historically have been used to estimate the impact of energy consumption on greenhouse gas emissions at the national and international level. Increasingly they are being asked to evaluate mitigation and adaptation policies that have a subnational dimension. In the United States, for example, building energy codes are adopted and enforced at the state and local level. Adoption of more efficient appliances and building equipment is sometimes directed or actively promoted by subnational governmental entities for mitigation or adaptation to climate change. The presentation reports on new example results from the Global Change Assessment Model (GCAM) IAM, one of a flexibly-coupled suite of models of human and earth system interactions known as the integrated Regional Earth System Model (iRESM) system. iRESM can evaluate subnational climate policy in the context of the important uncertainties represented by national policy and the earth system. We have added a 50-state detailed U.S. building energy demand capability to GCAM that is sensitive to national climate policy, technology, regional population and economic growth, and climate. We are currently using GCAM in a prototype stakeholder-driven uncertainty characterization process to evaluate regional climate mitigation and adaptation options in a 14-state pilot region in the U.S. upper Midwest. The stakeholder-driven decision process involves several steps, beginning with identifying policy alternatives and decision criteria based on stakeholder outreach, identifying relevant potential uncertainties, then performing sensitivity analysis, characterizing the key uncertainties from the sensitivity analysis, and propagating and quantifying their impact on the relevant decisions. In the
NASA Astrophysics Data System (ADS)
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable, and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
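The Monte Carlo treatment of criteria weights can be sketched directly: perturb the AHP-derived weights, renormalise, and recompute the weighted linear combination for each map cell. All criteria values, weights, and perturbation sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical susceptibility criteria for 5 map cells (rows), rescaled
# to 0-1: slope, lithology, land cover, rainfall.
C = np.array([[0.9, 0.2, 0.6, 0.8],
              [0.1, 0.7, 0.3, 0.2],
              [0.5, 0.5, 0.5, 0.5],
              [0.8, 0.9, 0.7, 0.6],
              [0.2, 0.1, 0.9, 0.4]])
w0 = np.array([0.4, 0.3, 0.2, 0.1])  # nominal AHP weights (hypothetical)

# Monte Carlo on the weights: perturb, renormalise, recompute the map,
# and report the per-cell spread as an uncertainty measure.
N = 20_000
W = np.abs(rng.normal(w0, 0.05 * w0, (N, 4)))
W /= W.sum(axis=1, keepdims=True)
scores = W @ C.T                      # (N, cells) weighted linear combination
print("nominal:", (C @ w0).round(3))
print("MC sd  :", scores.std(axis=0).round(4))
```

Note that a cell whose criteria values are all equal (row 3) is exactly insensitive to the weights, because the weights always sum to one; the spread concentrates on cells with strongly contrasting criteria.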
Parameter uncertainty, sensitivity, and sediment coupling in bioenergetics-based food web models
Barron, M.G.; Cacela, D.; Beltman, D.
1995-12-31
A bioenergetics-based food web model was developed and calibrated using measured PCB water and sediment concentrations in two Great Lakes food webs: Green Bay, Michigan and Lake Ontario. The model incorporated functionally based trophic levels and sediment, water, and food chain exposures of PCBs to aquatic biota. Sensitivity analysis indicated the parameters with the greatest influence on PCBs in top predators were lipid content of plankton and benthos, planktivore assimilation efficiency, Kow, prey selection, and ambient temperature. Sediment-associated PCBs were estimated to contribute over 90% of PCBs in benthivores and less than 50% in piscivores. Ranges of PCB concentrations in top predators estimated by Monte Carlo simulation incorporating parameter uncertainty were within one order of magnitude of modal values. Model applications include estimation of exceedances of human and ecological thresholds. The results indicate that point estimates from bioenergetics-based food web models have substantial uncertainty that should be considered in regulatory and scientific applications.
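The kind of Monte Carlo parameter-uncertainty analysis the abstract describes can be sketched as follows. This is a hedged illustration, not the calibrated Green Bay or Lake Ontario model: the two-step equilibrium-partitioning/dietary-uptake relation, the parameter ranges, and the fixed water concentration are all invented for the example.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical parameter distributions (illustrative ranges only)
log_kow = rng.normal(6.5, 0.3, n)            # octanol-water partition coefficient (log10)
lipid_plankton = rng.uniform(0.01, 0.05, n)  # lipid fraction of plankton
assim_eff = rng.uniform(0.3, 0.9, n)         # planktivore assimilation efficiency
c_water = 1e-6                               # PCB water concentration, mg/L (fixed)

# Equilibrium-partitioning plankton concentration, then crude dietary uptake
c_plankton = lipid_plankton * 10**log_kow * c_water
c_fish = assim_eff * c_plankton * 2.0        # stand-in biomagnification factor

# Rank-correlation sensitivity: which parameter drives predator PCB levels?
for name, x in [("log Kow", log_kow), ("plankton lipid", lipid_plankton),
                ("assimilation eff.", assim_eff)]:
    rho = spearmanr(x, c_fish)[0]
    print(f"{name:>18}: rho = {rho:+.2f}")
```

The spread of `c_fish` across realizations is the "range within one order of magnitude of modal values" kind of result the abstract reports, and the rank correlations identify the dominant parameters.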
Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations
NASA Astrophysics Data System (ADS)
Stripling, Hayes Franklin
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable but are also the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
NASA Astrophysics Data System (ADS)
Chiao, T.; Nijssen, B.; Stickel, L.; Lettenmaier, D. P.
2013-12-01
Hydrologic modeling is often used to assess the potential impacts of climate change on water availability and quality. A common approach in these studies is to calibrate the selected model(s) to reproduce historic stream flows prior to the application of future climate projections. This approach relies on the implicit assumptions that the sensitivities of these models to meteorological fluctuations will remain relatively constant under climate change and that these sensitivities are similar among models if all models are calibrated to the same historic record. However, even if the models are able to capture the historic variability in hydrological variables, differences in model structure and parameter estimation contribute to the uncertainties in projected runoff, which confounds the incorporation of these results into water resource management decision-making. A better understanding of the variability in hydrologic sensitivities between different models can aid in bounding this uncertainty. In this research, we characterized the hydrologic sensitivities of three watershed-scale land surface models through a case study of the Bull Run watershed in Northern Oregon. The Distributed Hydrology Soil Vegetation Model (DHSVM), Precipitation-Runoff Modeling System (PRMS), and Variable Infiltration Capacity model (VIC) were implemented and calibrated individually to historic streamflow using a common set of long-term, gridded forcings. In addition to analyzing model performances for a historic period, we quantified the temperature sensitivity (defined as change in runoff in response to change in temperature) and precipitation elasticity (defined as change in runoff in response to change in precipitation) of these three models via perturbation of the historic climate record using synthetic experiments. By comparing how these three models respond to changes in climate forcings, this research aims to test the assumption of constant and similar hydrologic sensitivities. Our
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more
Methods in Use for Sensitivity Analysis, Uncertainty Evaluation, and Target Accuracy Assessment
G. Palmiotti; M. Salvatores; G. Aliberti
2007-10-01
Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluations of the representativity of an experiment with respect to a reference design configuration. In this paper the theory, based on the adjoint approach, that is implemented in the ERANOS fast reactor code system is presented, along with some unique tools and features related to specific types of problems, as is the case for nuclide transmutation, reactivity loss during the cycle, decay heat, neutron source associated with fuel fabrication, and experiment representativity.
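Once adjoint-based sensitivity coefficients are available, the uncertainty estimate the abstract mentions follows from the standard "sandwich rule": the relative variance of a response R is S^T C S, where S holds the relative sensitivities and C is the relative covariance matrix of the input data. The numbers below are made up purely to illustrate the arithmetic.

```python
import numpy as np

# Relative sensitivities dR/dp * p/R for three hypothetical parameters
S = np.array([0.8, -0.3, 0.1])
# Hypothetical relative covariance matrix of those parameters
C = np.array([[0.0004, 0.0001, 0.0],
              [0.0001, 0.0009, 0.0],
              [0.0,    0.0,    0.0025]])

rel_var = S @ C @ S                  # sandwich rule: var(R)/R^2 = S^T C S
rel_sd = np.sqrt(rel_var)            # relative 1-sigma uncertainty on R
print(f"relative uncertainty: {rel_sd:.4%}")
```

The off-diagonal covariance terms matter: with anti-correlated sensitivities (note the sign of S[1]), positive covariances can partially cancel, which is why codes like ERANOS propagate full covariance matrices rather than variances alone.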
NASA Astrophysics Data System (ADS)
McKinney, S. W.
2015-12-01
Effectiveness of uncertainty quantification (UQ) and sensitivity analysis (SA) has been improved in ASCEM by choosing from a variety of methods to best suit each model. Previously, ASCEM had a small toolset for UQ and SA, leaving out benefits of the many unincluded methods. Many UQ and SA methods are useful for analyzing models with specific characteristics; therefore, programming these methods into ASCEM would have been inefficient. Embedding the R programming language into ASCEM grants access to a plethora of UQ and SA methods. As a result, programming required is drastically decreased, and runtime efficiency and analysis effectiveness are increased relative to each unique model.
An approach for conducting PM source apportionment will be developed, tested, and applied that directly addresses limitations in current SA methods, in particular variability, biases, and intensive resource requirements. Uncertainties in SA results and sensitivities to SA inpu...
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
NASA Astrophysics Data System (ADS)
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty due to the limited experimental data available and to a poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and
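The Principal Component Analysis of local sensitivity values that drives the reduction can be sketched as follows. This is an assumed, generic form of the technique, not the paper's algorithm: the sensitivity matrix is random stand-in data, and the 95% variance cutoff and median-score retention rule are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in local sensitivity matrix: 12 targets/conditions x 30 reactions
S = rng.normal(size=(12, 30))

# PCA via the singular value decomposition of the sensitivity matrix
U, sing, Vt = np.linalg.svd(S, full_matrices=False)
explained = sing**2 / np.sum(sing**2)          # variance explained per component

# Keep enough components to explain 95% of the variance, then score each
# reaction by its total loading on those leading components
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
scores = np.sqrt(np.sum((Vt[:k].T * sing[:k])**2, axis=1))

# Retain the reactions with above-median scores (illustrative threshold)
retained = np.flatnonzero(scores >= np.median(scores))
print(f"{k} components, {retained.size} of 30 reactions retained")
```

In the actual reduction workflow the rows of S would come from local sensitivity analyses over many compositions, temperatures, and pressures, so that a reaction is kept only if it matters in at least one relevant combustion configuration.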
Parameter sensitivity and uncertainty analysis for a storm surge and wave model
NASA Astrophysics Data System (ADS)
Bastidas, L. A.; Knighton, J.; Kline, S. W.
2015-10-01
Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with the selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of eleven total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large degree of interaction among parameters and a non-linear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 34 imprecisely known input variables on the following reactor accident consequences are studied: number of early fatalities, number of cases of prodromal vomiting, population dose within 10 mi of the reactor, population dose within 1000 mi of the reactor, individual early fatality probability within 1 mi of the reactor, and maximum early fatality distance. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: scaling factor for horizontal dispersion, dry deposition velocity, inhalation protection factor for nonevacuees, groundshine shielding factor for nonevacuees, early fatality hazard function alpha value for bone marrow exposure, and scaling factor for vertical dispersion.
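The Latin hypercube sampling plus regression-based sensitivity workflow the abstract describes can be sketched on a toy function. This is a minimal illustration under stated assumptions: the three-input "consequence" function and its parameter bounds are invented stand-ins, nothing like the real MACCS model with its 34 imprecisely known inputs.

```python
import numpy as np
from scipy.stats import qmc

# Stratified Latin hypercube sample over three hypothetical inputs
sampler = qmc.LatinHypercube(d=3, seed=1)
u = sampler.random(n=200)                                # samples in [0,1)^3
x = qmc.scale(u, l_bounds=[0.5, 0.1, 1.0], u_bounds=[2.0, 0.9, 5.0])

def consequence(p):
    """Toy stand-in for a reactor-consequence output (not MACCS)."""
    disp, dep, shield = p.T                              # dispersion scaling,
    return disp**1.5 / (dep * shield)                    # deposition, shielding

y = consequence(x)

# Standardized regression coefficients as a simple importance measure,
# in the spirit of the stepwise regression analysis in the study
X = (x - x.mean(0)) / x.std(0)
Y = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(dict(zip(["dispersion", "deposition", "shielding"], src.round(2))))
```

The signs and magnitudes of the standardized coefficients rank the inputs by their contribution to output uncertainty, which is how the dominant variables listed in the abstract would be identified.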
2008-05-22
Version 01 SUSD3D 2008 calculates sensitivity coefficients and standard deviation in the calculated detector responses or design parameters of interest due to input cross sections and their uncertainties. One-, two- and three-dimensional transport problems can be studied. Several types of uncertainties can be considered, i.e. those due to (1) neutron/gamma multi-group cross sections, (2) energy-dependent response functions, (3) secondary angular distribution (SAD) or secondary energy distribution (SED) uncertainties. SUSD3D, initially released in 2000, is loosely based on the SUSD code by K. Furuta, Y. Oka and S. Kondo from the University of Tokyo in Japan. The SUSD3D 2008 modifications are primarily relevant for the sensitivity calculations of critical systems and include: o Correction of the sensitivity calculation for prompt fission and the number of delayed neutrons per fission (MT=18 and MT=455). o An option that allows the re-normalization of the prompt fission spectra covariance matrices to be applied via the "normalization" of the sensitivity profiles. This option is useful in case the fission spectra covariances (MF=35) used do not comply with the ENDF-6 Format Manual rules. o For criticality calculations the normalization can be calculated by the code SUSD3D internally. Parameter NORM should be set to 0 in this case. Total number of neutrons per fission (MT=452) sensitivities for all the fissile materials must be requested in the SUSD3D OVERLAY-2 input deck in order to allow the correct normalization. o The cross section data format reading was updated, mostly for critical systems (e.g. the MT=18 reaction). o Fission spectra uncertainties can be calculated using the file MF35 data processed by the ERROR-J code. o Cross sections can be input directly using the input card "xs" (vector data only). o A k-eff card was added for subcritical systems. o This version of the SUSD3D code is compatible with the single precision DANTSYS code package (CCC-0547/07 and /08, which
Sensitivity of CO2 migration estimation on reservoir temperature and pressure uncertainty
Jordan, Preston; Doughty, Christine
2008-11-01
The density and viscosity of supercritical CO2 are sensitive to pressure and temperature (PT), while the viscosity of brine is sensitive primarily to temperature. Oil field PT data in the vicinity of WESTCARB's Phase III injection pilot test site in the southern San Joaquin Valley, California, show a range of PT values, indicating either PT uncertainty or variability. Numerical simulation results across the range of likely PT indicate brine viscosity variation causes virtually no difference in plume evolution and final size, but CO2 density variation causes a large difference. Relative ultimate plume size is almost directly proportional to the relative difference in brine and CO2 density (buoyancy flow). The majority of the difference in plume size occurs during and shortly after the cessation of injection.
Third Floor Plan, Second Floor Plan, First Floor Plan, Ground Floor Plan, West Bunkhouse - Kennecott Copper Corporation, On Copper River & Northwestern Railroad, Kennicott, Valdez-Cordova Census Area, AK
NASA Technical Reports Server (NTRS)
Ruane, Alex C.; Cecil, L. Dewayne; Horton, Radley M.; Gordon, Roman; McCollum, Raymond; Brown, Douglas; Killough, Brian; Goldberg, Richard; Greeley, Adam P.; Rosenzweig, Cynthia
2011-01-01
We present results from a pilot project to characterize and bound multi-disciplinary uncertainties around the assessment of maize (Zea mays) production impacts using the CERES-Maize crop model in a climate-sensitive region with a variety of farming systems (Panama). Segunda coa (autumn) maize yield in Panama currently suffers occasionally from high water stress at the end of the growing season; however, under future climate conditions warmer temperatures accelerate crop maturation and elevated CO2 concentrations improve water retention. This combination reduces end-of-season water stresses and eventually leads to small mean yield gains according to median projections, although accelerated maturation reduces yields in seasons with low water stresses. Calibrations of cultivar traits, soil profile, and fertilizer amounts are most important for representing baseline yields; however, sensitivity to all management factors is reduced in an assessment of future yield changes (most dramatically for fertilizers), suggesting that yield changes may be more generalizable than absolute yields. Uncertainty around General Circulation Models' (GCMs) projected changes in rainfall gains in importance throughout the century, with yield changes strongly correlated with growing season rainfall totals. Climate changes are expected to be obscured by the large inter-annual variations in Panamanian climate that will continue to be the dominant influence on seasonal maize yield into the coming decades. The relatively high (A2) and low (B1) emissions scenarios show little difference in their impact on future maize yields until the end of the century. Uncertainties related to the sensitivity of CERES-Maize to carbon dioxide concentrations have a substantial influence on projected changes, and remain a significant obstacle to climate change impacts assessment. Finally, an investigation into the potential of simple statistical yield emulators based upon key climate variables characterizes the
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
Perkó, Zoltán Gilli, Luca Lathouwers, Danny Kloosterman, Jan Leen
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time retaining an accuracy similar to the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both
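The Non-Intrusive Spectral Projection idea at the heart of the abstract can be shown in one dimension. This is a minimal sketch, not the FANISP algorithm: for a single standard-normal input xi, the PCE coefficients are c_k = E[R(xi) He_k(xi)] / k!, evaluated here with a probabilists' Gauss-Hermite rule; the response function R is an arbitrary stand-in chosen because its exact mean and variance are known.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def response(xi):
    """Toy model response to a standard-normal input."""
    return np.exp(0.3 * xi)

order = 6
nodes, weights = hermegauss(20)          # rule for weight function exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)   # normalize to the N(0,1) density

# Non-intrusive projection: each coefficient is a quadrature sum over
# model evaluations at the nodes -- no access to model internals needed
coeffs = np.empty(order + 1)
for k in range(order + 1):
    e_k = np.zeros(k + 1); e_k[k] = 1.0
    he_k = hermeval(nodes, e_k)          # He_k evaluated at the nodes
    coeffs[k] = np.sum(weights * response(nodes) * he_k) / math.factorial(k)

# Mean and variance follow directly from the expansion coefficients,
# since E[He_k^2] = k! and the basis is orthogonal under N(0,1)
mean = coeffs[0]
var = sum(coeffs[k]**2 * math.factorial(k) for k in range(1, order + 1))
print(mean, var)
```

For this response the exact mean is exp(0.045) and the exact variance exp(0.09)(exp(0.09) - 1), and the order-6 expansion recovers both to high accuracy; the adaptive methods in the paper address how to do this economically in many dimensions.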
Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2015-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn can provide an informative starting point for data assimilation.
NASA Astrophysics Data System (ADS)
Zio, Enrico; Apostolakis, George E.
1999-03-01
This paper illustrates an application of sensitivity and uncertainty analysis techniques within a methodology for evaluating environmental restoration technologies. The methodology consists of two main parts: the first part ("analysis") integrates a wide range of decision criteria and impact evaluation techniques in a framework that emphasizes and incorporates input from stakeholders in all aspects of the process. Its products are the rankings of the alternative options for each stakeholder using, essentially, expected utility theory. The second part ("deliberation") utilizes the analytical results of the "analysis" and attempts to develop consensus among the stakeholders in a session in which the stakeholders discuss and evaluate the analytical results. This paper deals with the analytical part of the approach and the uncertainty and sensitivity analyses that were carried out in preparation for the deliberative process. The objective of these investigations was that of testing the robustness of the assessments and of pointing out possible existing sources of disagreements among the participating stakeholders, thus providing insights for the successive deliberative process. Standard techniques, such as differential analysis, Monte Carlo sampling and a two-dimensional policy region analysis proved sufficient for the task.
Sensitivity, Prediction Uncertainty, and Detection Limit for Artificial Neural Network Calibrations.
Allegrini, Franco; Olivieri, Alejandro C
2016-08-01
With the proliferation of multivariate calibration methods based on artificial neural networks, expressions for the estimation of figures of merit such as sensitivity, prediction uncertainty, and detection limit are urgently needed. This would bring nonlinear multivariate calibration methodologies to the same status as their linear counterparts in terms of comparability. Currently only the average prediction error or the ratio of performance to deviation for a test sample set is employed to characterize and promote neural network calibrations. It is clear that additional information is required. We report for the first time expressions that easily allow one to compute three relevant figures of merit: (1) the sensitivity, which turns out to be sample-dependent, as expected, (2) the prediction uncertainty, and (3) the detection limit. The approach resembles that employed for linear multivariate calibration, i.e., partial least-squares regression, specifically adapted to neural network calibration scenarios. As usual, both simulated and real (near-infrared) spectral data sets serve to illustrate the proposal. PMID:27363813
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Simmons, Craig T.
2015-01-01
Real-world models of seawater intrusion (SWI) require high computational efforts. This creates computational difficulties for the uncertainty propagation (UP) analysis of these models due to the need for repeated numerical simulations in order to adequately capture the underlying statistics that describe the uncertainty in model outputs. Moreover, despite the obvious advantages of moment-independent global sensitivity analysis (SA) methods, these methods have rarely been employed for SWI and other complex groundwater models. The reason is that moment-independent global SA methods involve repeated UP analysis, which further becomes computationally demanding. This study proposes the use of non-intrusive polynomial chaos expansions (PCEs) as a means to significantly accelerate UP analysis in SWI numerical modeling studies and shows that despite the highly non-linear and non-smooth input/output relationship that exists in SWI models, non-intrusive PCEs provide a reliable and yet computationally efficient surrogate of the original numerical model. The study illustrates that for the two- and six-dimensional UP problems considered, PCEs offer a more accurate estimation of the statistics describing the uncertainty in model outputs compared to Monte Carlo simulations based on the original numerical model. This study also shows that the use of non-intrusive PCEs in the estimation of the moment-independent sensitivity indices (i.e. delta indices) decreases the computational time by several orders of magnitude without causing significant loss of accuracy. The use of non-intrusive PCEs for the generation of SWI hazard maps is proposed to extend the practical applications of UP analysis in coastal aquifer management studies.
de Moel, Hans; Bouwer, Laurens M; Aerts, Jeroen C J H
2014-03-01
A central tool in risk management is the exceedance-probability loss (EPL) curve, which denotes the probabilities of damages being exceeded or equalled. These curves are used for a number of purposes, including the calculation of the expected annual damage (EAD), a common indicator for risk. The model calculations that are used to create such a curve contain uncertainties that accumulate in the end result. As a result, EPL curves and EAD calculations are also surrounded by uncertainties. Knowledge of the magnitude and source of these uncertainties helps to improve assessments and leads to better informed decisions. This study, therefore, performs uncertainty and sensitivity analyses for a dike-ring area in the Netherlands, on the south bank of the river Meuse. In this study, a Monte Carlo framework is used that combines hydraulic boundary conditions, a breach growth model, an inundation model, and a damage model. It encompasses the modelling of thirteen potential breach locations and uncertainties related to probability, duration of the flood wave, height of the flood wave, erodibility of the embankment, damage curves, and the value of assets at risk. The assessment includes uncertainty and sensitivity of risk estimates for each individual location, as well as the dike-ring area as a whole. The results show that for the dike ring in question, EAD estimates span a 90% uncertainty range from about 8 times below the median to 4.5 times above it. This level of uncertainty can mainly be attributed to uncertainty in depth-damage curves, uncertainty in the probability of a flood event and the duration of the flood wave. There are considerable differences between breach locations, both in the magnitude of the uncertainty, and in its source. This indicates that local characteristics have a considerable impact on uncertainty and sensitivity of flood damage and risk calculations. PMID:24370697
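The EAD computation that underlies the abstract's risk indicator is simply the area under the EPL curve, i.e. the integral of damage over annual exceedance probability. A minimal sketch, assuming invented probability/damage pairs rather than the study's Meuse dike-ring results:

```python
import numpy as np

# Hypothetical EPL curve: annual exceedance probabilities and the flood
# damages (in million EUR) exceeded with those probabilities
p = np.array([1e-4, 1e-3, 1 / 100, 1 / 50, 1 / 10])
damage = np.array([1500.0, 600.0, 120.0, 50.0, 0.0])

def expected_annual_damage(prob, dmg):
    """Trapezoidal integration of the EPL curve over probability."""
    idx = np.argsort(prob)
    x, y = prob[idx], dmg[idx]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

ead = expected_annual_damage(p, damage)
print(f"EAD = {ead:.3f} M EUR per year")
```

In the study's Monte Carlo framework this integration would be repeated for every realization of the uncertain inputs (damage curves, flood probability, wave duration), yielding the distribution of EAD from which the 90% uncertainty range is read off.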
ERIC Educational Resources Information Center
Uljarevic, Mirko; Carrington, Sarah; Leekam, Susan
2016-01-01
This study examined the relations between anxiety and individual characteristics of sensory sensitivity (SS) and intolerance of uncertainty (IU) in mothers of children with ASD. The mothers of 50 children completed the Hospital Anxiety and Depression Scale, the Highly Sensitive Person Scale and the IU Scale. Anxiety was associated with both SS and…
Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the food pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 87 imprecisely-known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, milk growing season dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, area dependent cost, crop disposal cost, milk disposal cost, condemnation area, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: fraction of cesium deposition on grain fields that is retained on plant surfaces and transferred directly to grain, maximum allowable ground concentrations of Cs-137 and Sr-90 for production of crops, ground concentrations of Cs-134, Cs-137 and I-131 at which the disposal of milk will be initiated due to accidents that occur during the growing season, ground concentrations of Cs-134, I-131 and Sr-90 at which the disposal of crops will be initiated due to accidents that occur during the growing season, rate of depletion of Cs-137 and Sr-90 from the root zone, transfer of Sr-90 from soil to legumes, transfer of Cs-137 from soil to pasture, transfer of cesium from animal feed to meat, and the transfer of cesium, iodine and strontium from animal feed to milk.
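The sampling-plus-rank-regression machinery used here can be sketched generically: draw a Latin hypercube sample, run the model, rank-transform, and regress. The toy model and input ranges below are invented stand-ins, not actual MACCS variables:

```python
import numpy as np
from scipy.stats import qmc, rankdata

# Latin hypercube sample of 3 imprecisely known inputs (illustrative ranges)
sampler = qmc.LatinHypercube(d=3, seed=0)
X = qmc.scale(sampler.random(n=200), [0.1, 0.0, 1.0], [1.0, 5.0, 2.0])

# Toy consequence model standing in for the MACCS food-pathway calculation
y = 4.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

# Rank-transform inputs and output, then compute standardized regression
# coefficients on the ranks (the quantity ranked in stepwise rank regression)
R = np.column_stack([rankdata(X[:, j]) for j in range(3)])
ry = rankdata(y)
Z = (R - R.mean(axis=0)) / R.std(axis=0)
zy = (ry - ry.mean()) / ry.std()
srrc, *_ = np.linalg.lstsq(Z, zy, rcond=None)
```

The input with the largest absolute standardized rank regression coefficient is flagged as the dominant contributor to output uncertainty, which is how lists of dominant variables like the one above are produced.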
Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, water ingestion dose, milk growing season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of 1-131 at which disposal of crops will be initiated due to accidents that occur during the growing season.
NASA Astrophysics Data System (ADS)
Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matthew; Thurber, Clifford H.; Tung, Sui
2016-04-01
The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.
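The study inverts for source parameters with finite element models; the simplest analytic analogue is the Mogi point source in a homogeneous elastic half-space, which shows how the assumed depth controls the width and amplitude of the surface uplift pattern. A sketch (the two depths are taken from the abstract; the volume change and Poisson's ratio are assumed for illustration):

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point pressure source.

    r     : horizontal distance from the source axis (m)
    depth : source depth below the surface (m)
    dV    : source volume change (m^3), assumed here
    nu    : Poisson's ratio of the elastic half-space
    """
    R = np.hypot(r, depth)
    return (1.0 - nu) * dV / np.pi * depth / R**3

# The same volume change seen through the two estimated depths: the deeper
# source yields a broader, lower-amplitude uplift pattern
r = np.linspace(0.0, 10e3, 101)
uz_shallow = mogi_uz(r, 2666.0, 1e6)
uz_deep = mogi_uz(r, 3527.0, 1e6)
```

This trade-off between depth and the shape of the deformation field is what makes the depth estimate sensitive to the assumed distribution of material properties.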
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-01-01
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster–Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster–Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC
A comparison of five forest interception models using global sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Linhoss, Anna C.; Siegert, Courtney M.
2016-07-01
Interception by the forest canopy plays a critical role in the hydrologic cycle by removing a significant portion of incoming precipitation from the terrestrial component. While there are a number of existing physical models of forest interception, few studies have summarized or compared these models. The objective of this work is to use global sensitivity and uncertainty analysis to compare five mechanistic interception models including the Rutter, Rutter Sparse, Gash, Sparse Gash, and Liu models. Using parameter probability distribution functions of values from the literature, our results show that on average storm duration [Dur], gross precipitation [PG], canopy storage [S] and solar radiation [Rn] are the most important model parameters. On the other hand, empirical parameters used in calculating evaporation and drip (i.e. trunk evaporation as a proportion of evaporation from the saturated canopy [ɛ], the empirical drainage parameter [b], the drainage partitioning coefficient [pd], and the rate of water dripping from the canopy when canopy storage has been reached [Ds]) have relatively low levels of importance in interception modeling. As such, future modeling efforts should aim to decompose parameters that are the most influential in determining model outputs into easily measurable physical components. Because this study compares models, the choices regarding the parameter probability distribution functions are applied across models, which enables a more definitive ranking of model uncertainty.
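A minimal sketch of the Rutter-style running canopy water balance underlying several of the compared models: rain fills a canopy store of capacity S, which loses water to exponential drainage (drip) and to evaporation scaled by canopy wetness. All parameter values below are illustrative, not taken from the compared models:

```python
import numpy as np

def rutter_like_interception(rain, S=1.2, p=0.25, Ep=0.2, Ds=0.01, b=3.7):
    """Minimal Rutter-style canopy water balance (one step = 1 h).

    rain : gross precipitation per step (mm)
    S    : canopy storage capacity (mm); p: free-throughfall coefficient
    Ep   : evaporation per step from a saturated canopy (mm)
    Ds, b: empirical drainage (drip) parameters
    """
    C, intercepted = 0.0, 0.0
    for R in rain:
        C += (1.0 - p) * R                 # rain caught by the canopy
        D = Ds * np.exp(b * (C - S))       # exponential drainage term
        E = Ep * min(C / S, 1.0)           # evaporation scales with wetness
        D = min(D, C)                      # cannot drain more than is stored
        E = min(E, C - D)
        C -= D + E
        intercepted += E
    return intercepted

storm = np.array([2.0, 3.0, 1.0, 0.0, 0.0])   # mm per hour (illustrative)
loss = rutter_like_interception(storm)
```

A global sensitivity analysis, as in the study, would repeatedly evaluate such a model with S, p, Ep, Ds, and b drawn from their literature-based probability distributions and attribute the variance of `loss` to each parameter.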
NASA Astrophysics Data System (ADS)
Brandon, S. T.; Domyancic, D. M.; Johnson, B. J.; Nimmakayala, R.; Lucas, D. D.; Tannahill, J.; Christianson, G.; McEnerney, J.; Klein, R.
2011-12-01
A Lawrence Livermore National Laboratory (LLNL) multi-directorate strategic initiative is developing uncertainty quantification (UQ) tools and techniques that are being applied to climate research. The LLNL UQ Pipeline and corresponding computational tools support the ensemble-of-models approach to UQ, and these tools have enabled the production of a comprehensive set of present-day climate calculations using the Community Atmospheric Model (CAM) and, more recently, the Community Earth System Model (CESM) codes. Statistical analysis of the ensemble is made possible by fitting a response surface, or surrogate model, to the ensemble-of-models data. We describe the LLNL UQ Pipeline and techniques that enable the execution and analysis of climate UQ and sensitivities studies on LLNL's high performance computing (HPC) resources. The analysis techniques are applied to an ensemble consisting of 1,000 CAM4 simulations. We also present two methods, direct sampling and bootstrapping, that quantify the errors in the ability of the response function to model the CAM4 ensemble. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013.
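The bootstrap error estimate for a fitted response surface can be sketched on a toy ensemble: refit the surrogate on resampled ensemble members and examine the spread of its error. The quadratic surrogate and data below are invented, not the CAM4 ensemble:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "ensemble": 1000 runs of a cheap stand-in model with observation noise
X = rng.uniform(-1, 1, (1000, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] ** 2 + rng.normal(0, 0.05, 1000)

def fit_quadratic(X, y):
    """Least-squares fit of a diagonal quadratic response surface."""
    A = np.column_stack([np.ones(len(X)), X, X ** 2])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c

def rmse(c, X, y):
    A = np.column_stack([np.ones(len(X)), X, X ** 2])
    return np.sqrt(np.mean((A @ c - y) ** 2))

# Bootstrap the ensemble to put an uncertainty on the surrogate's error
errs = []
for _ in range(200):
    idx = rng.integers(0, len(X), len(X))
    errs.append(rmse(fit_quadratic(X[idx], y[idx]), X, y))
err_mean, err_sd = np.mean(errs), np.std(errs)
```

Here the bootstrap spread `err_sd` quantifies how much the surrogate's fit quality depends on which ensemble members happened to be sampled, which is the kind of question the direct-sampling and bootstrapping methods address.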
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies have focused on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit substantial variation when these uncertain parameters are taken into account. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of each individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when they plan plants. PMID:25459861
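A common GSA quantity is the first-order Sobol index, the fraction of output variance explained by one input alone; it can be estimated by binning the input and taking the variance of conditional means. The toy unit-cost model and input ranges below are assumptions for illustration only, not the paper's TEA:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical uncertain TEA inputs (not the paper's actual distributions)
feedstock = rng.uniform(400, 800, n)     # $/t
interest = rng.uniform(0.04, 0.12, n)    # annual rate
opex = rng.uniform(50, 70, n)            # $/t

# Toy unit-cost model: feedstock and financing dominate by construction
uc = feedstock * 1.1 + 5000 * interest + opex

def first_order_sobol(x, y, bins=50):
    """Estimate S_i = Var(E[y|x_i]) / Var(y) by binning x_i."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    idx = np.digitize(x, edges)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var()

S = [first_order_sobol(x, uc) for x in (feedstock, interest, opex)]
```

With the assumed ranges, feedstock price and interest rate each explain a large share of the unit-cost variance while operating cost is negligible, mirroring the qualitative finding reported above.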
Perspectives Gained in an Evaluation of Uncertainty, Sensitivity, and Decision Analysis Software
Davis, F.J.; Helton, J.C.
1999-02-24
The following software packages for uncertainty, sensitivity, and decision analysis were reviewed and also tested with several simple analysis problems: Crystal Ball, RiskQ, SUSA-PC, Analytica, PRISM, Ithink, Stella, LHS, STEPWISE, and JMP. Results from the review and test problems are presented. The study resulted in the recognition of the importance of four considerations in the selection of a software package: (1) the availability of an appropriate selection of distributions, (2) the ease with which data flows through the input sampling, model evaluation, and output analysis process, (3) the type of models that can be incorporated into the analysis process, and (4) the level of confidence in the software modeling and results.
Babendreier, Justin E.; Castleton, Karl J.
2005-08-01
Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a comparative approach using several techniques, coupled with sufficient computational power. The Framework for Risk Analysis in Multimedia Environmental Systems - Multimedia, Multipathway, and Multireceptor Risk Assessment (FRAMES-3MRA) is an important software model being developed by the United States Environmental Protection Agency for use in risk assessment of hazardous waste management facilities. The 3MRA modeling system includes a set of 17 science modules that collectively simulate release, fate and transport, exposure, and risk associated with hazardous contaminants disposed of in land-based waste management units (WMUs).
Sensitivity and uncertainty analysis of a physically-based landslide model
NASA Astrophysics Data System (ADS)
Yatheendradas, S.; Bach Kirschbaum, D.; Baum, R. L.; Godt, J.
2013-12-01
Worldwide, rainfall-induced landslides pose a major threat to life and property. Remotely sensed data combined with physically-based models of landslide initiation are a potentially economical solution for anticipating landslide activity over large, national or multinational areas as a basis for landslide early warning. Detailed high-resolution landslide modeling is challenging due to difficulties in quantifying the complex interaction between rainfall infiltration, surface materials and the typically coarse resolution of available remotely sensed data. These slope-stability models calculate coincident changes in driving and resisting forces at the hillslope level for anticipating landslides. This research seeks to better quantify the uncertainty of these models as well as evaluate their potential for application over large areas through detailed sensitivity analyses. Sensitivity to various factors including model input parameters, boundary and initial conditions, rainfall inputs, and spatial resolution of model inputs is assessed using a probabilistic ensemble setup. We use the physically-based USGS model, TRIGRS (Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability), that has been ported to NASA's high performance Land Information System (LIS) to take advantage of its multiple remote sensing data streams and tools. We apply the TRIGRS model over an example region with available in-situ gage and remotely sensed rainfall (e.g., TRMM: http://pmm.nasa.gov). To make this model applicable even in regions without relevant fine-resolution data, soil depth is estimated using topographic information, and initial water table depth using spatially disaggregated coarse-resolution modeled soil moisture data. The analyses are done across a range of fine spatial resolutions to determine the corresponding trend in the contribution of different factors to the model output uncertainty. This research acts as a guide towards application of such a detailed slope
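TRIGRS couples a transient infiltration model to the infinite-slope factor-of-safety equation, in which rising pressure head during a storm reduces effective stress on the potential failure plane. A sketch of the stability part (all parameter values are illustrative, not calibrated):

```python
import numpy as np

def factor_of_safety(slope_deg, depth, psi, c=4e3, phi_deg=32.0,
                     gamma_s=2.0e4, gamma_w=9.81e3):
    """Infinite-slope factor of safety, as used in TRIGRS-type models.

    slope_deg : slope angle (degrees)
    depth     : depth of the potential failure plane (m)
    psi       : pressure head at that depth (m), from the infiltration model
    c         : soil cohesion (Pa); phi_deg: friction angle (degrees)
    gamma_s, gamma_w : unit weights of soil and water (N/m^3)
    """
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    frictional = np.tan(phi) / np.tan(beta)
    cohesive = (c - psi * gamma_w * np.tan(phi)) / (
        gamma_s * depth * np.sin(beta) * np.cos(beta))
    return frictional + cohesive

# Rising pressure head during a storm drives FS below the failure threshold 1
fs_dry = factor_of_safety(35.0, 2.0, psi=0.0)
fs_wet = factor_of_safety(35.0, 2.0, psi=1.5)
```

An ensemble sensitivity analysis like the one described would perturb the soil parameters, soil depth, initial water table, and rainfall forcing and track how often and where FS drops below 1.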
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Wagener, Thorsten
2016-04-01
Simulations from environmental models are affected by potentially large uncertainties stemming from various sources, including model parameters and observational uncertainty in the input/output data. Understanding the relative importance of such sources of uncertainty is essential to support model calibration, validation and diagnostic evaluation, and to prioritize efforts for uncertainty reduction. Global Sensitivity Analysis (GSA) provides the theoretical framework and the numerical tools to gain this understanding. However, in traditional applications of GSA, model outputs are an aggregation of the full set of simulated variables. This aggregation of propagated uncertainties prior to GSA may lead to a significant loss of information and may cover up local behaviour that could be of great interest. In this work, we propose a time-varying version of a recently developed density-based GSA method, called PAWN, as a viable option to reduce this loss of information. We apply our approach to a medium-complexity hydrological model in order to address two questions: [1] Can we distinguish between the relative importance of parameter uncertainty versus data uncertainty in time? [2] Do these influences change in catchments with different characteristics? The results present the first quantitative investigation on the relative importance of parameter and data uncertainty across time. They also provide a demonstration of the value of time-varying GSA to investigate the propagation of uncertainty through numerical models and therefore guide additional data collection needs and model calibration/assessment.
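The PAWN index compares the unconditional output distribution with distributions obtained while fixing one input, using a Kolmogorov-Smirnov statistic. A sketch on a toy two-input model (an invented stand-in, not the hydrological model used in the study):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n, n_cond = 5000, 15        # sample size and number of conditioning values

# Toy model: x0 matters much more than x1 by construction
def model(x):
    return np.sin(2 * np.pi * x[:, 0]) + 0.1 * x[:, 1]

X = rng.random((n, 2))
y_unc = model(X)            # unconditional output sample

def pawn_index(i):
    """PAWN sensitivity of input i: max KS distance between the
    unconditional output CDF and CDFs with input i held fixed."""
    ks = []
    for xi in np.linspace(0.05, 0.95, n_cond):
        Xc = rng.random((n, 2))
        Xc[:, i] = xi                        # condition on input i
        ks.append(ks_2samp(y_unc, model(Xc)).statistic)
    return max(ks)

T = [pawn_index(0), pawn_index(1)]
```

Because the index is built from full output distributions rather than variance alone, it retains exactly the distributional information that aggregation before GSA would discard; the time-varying variant in the paper repeats this comparison at each time step.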
Chapman, H D; Jeffers, T K
2015-05-01
Five successive flocks of broilers were reared in floor-pens and given different drug programs or were vaccinated against coccidiosis. Oocysts of Eimeria were isolated from the litter of pens during the fifth flock and their sensitivity to salinomycin (Sal) investigated by measuring new oocyst production following infection of medicated and unmedicated birds. Parasites obtained following 5 flocks given Sal were not well-controlled and it was concluded that they were partially resistant to the drug. Parasites obtained following 4 unmedicated flocks and one medicated flock were better controlled by Sal and it was concluded that in the absence of continuous medication there had been an improvement in drug efficacy. Sal almost completely suppressed oocyst production of isolates from treatments in which medication was followed by vaccination, indicating that when a drug program is followed by vaccination, restoration of sensitivity to Sal had occurred. PMID:25796273
Landry, Guillaume; Reniers, Brigitte; Murrer, Lars; Lutgens, Ludy; Bloemen-Van Gurp, Esther; Pignol, Jean-Philippe; Keller, Brian; Beaulieu, Luc; Verhaegen, Frank
2010-10-15
Purpose: The objective of this work is to assess the sensitivity of Monte Carlo (MC) dose calculations to uncertainties in human tissue composition for a range of low photon energy brachytherapy sources: ¹²⁵I, ¹⁰³Pd, ¹³¹Cs, and an electronic brachytherapy source (EBS). The low energy photons emitted by these sources make the dosimetry sensitive to variations in tissue atomic number due to the dominance of the photoelectric effect. This work reports dose to a small mass of water in medium D_w,m as opposed to dose to a small mass of medium in medium D_m,m. Methods: Mean adipose, mammary gland, and breast tissues (as a uniform mixture of the aforementioned tissues) are investigated, as well as compositions corresponding to one standard deviation from the mean. Prostate mean compositions from three different literature sources are also investigated. Three sets of MC simulations are performed with the GEANT4 code: (1) Dose calculations for idealized TG-43-like spherical geometries using point sources. Radial dose profiles obtained in different media are compared to assess the influence of compositional uncertainties. (2) Dose calculations for four clinical prostate LDR brachytherapy permanent seed implants using ¹²⁵I seeds (Model 2301, Best Medical, Springfield, VA). The effect of varying the prostate composition in the planning target volume (PTV) is investigated by comparing PTV D_90 values. (3) Dose calculations for four clinical breast LDR brachytherapy permanent seed implants using ¹⁰³Pd seeds (Model 2335, Best Medical). The effects of varying the adipose/gland ratio in the PTV and of varying the elemental composition of adipose and gland within one standard deviation of the assumed mean composition are investigated by comparing PTV D_90 values. For (2) and (3), the influence of using the mass density from CT scans instead of unit mass density is also assessed. Results: Results from simulation (1) show that variations
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
NASA Astrophysics Data System (ADS)
Shinohara, M.; Yamada, T.; Kanazawa, T.
2005-12-01
To understand the characteristics of large earthquakes occurring in a subduction zone, it is necessary to study asperities where large earthquakes occur repeatedly. Because observation near an asperity is needed for such studies, ocean bottom seismometers (OBSs) are essential for observing seismic waves from earthquakes in subduction areas. Since a conventional OBS is designed for high-sensitivity observation, OBS records of large earthquakes occurring near the instrument are often saturated. To record large-amplitude seismic waves, a servo-type accelerometer is suitable. However, it has been difficult for OBSs to use accelerometers because of their large electric power consumption. Recently, a servo-type accelerometer with a large dynamic range and low power consumption has been developed. In addition, the pressure vessel of an OBS can contain many more batteries when a large titanium sphere is used. For long-term seafloor observation of aftershocks of the 2004 Sumatra-Andaman earthquake, we installed a small three-component accelerometer in a conventional long-term OBS and obtained both high-sensitivity seismograms and low-sensitivity (strong-motion) accelerograms on the sea floor. We used a compact three-component servo-type accelerometer weighing 85 grams as the seismic sensor. The measurement range and resolution of the sensor are 3 G and 10⁻⁵ G, respectively. The sensor was attached directly to the inside of the pressure vessel. Signals from the accelerometer were digitally recorded to Compact Flash memory with 16-bit resolution and a sampling frequency of 100 Hz. The OBS with the accelerometer was deployed on February 24, 2005 in the southern part of the source region of the 2004 Sumatra-Andaman earthquake by R/V Natsushima, belonging to JAMSTEC, and recovered on August 3 by R/V Baruna Jaya I, belonging to BPPT, Indonesia. Accelerograms were obtained from deployment until April 13, when the CF memory became full. Although there were some minor problems with the recording, we could obtain low-sensitivity
NASA Technical Reports Server (NTRS)
Stolarski, R. S.; Douglass, A. R.
1986-01-01
Models of stratospheric photochemistry are generally tested by comparing their predictions for the composition of the present atmosphere with measurements of species concentrations. These models are then used to make predictions of the atmospheric sensitivity to perturbations. Here the problem of the sensitivity of such a model to chlorine perturbations ranging from the present influx of chlorine-containing compounds to several times that influx is addressed. The effects of uncertainties in input parameters, including reaction rate coefficients, cross sections, solar fluxes, and boundary conditions, are evaluated using a Monte Carlo method in which the values of the input parameters are randomly selected. The results are probability distributions for present atmospheric concentrations and for calculated perturbations due to chlorine from fluorocarbons. For more than 300 Monte Carlo runs the calculated ozone perturbation for continued emission of fluorocarbons at today's rates had a mean value of -6.2 percent, with a 1-sigma width of 5.5 percent. Using the same runs but only allowing the cases in which the calculated present-atmosphere values of NO, NO2, and ClO at 25 km altitude fell within the range of measurements yielded a mean ozone depletion of -3 percent, with a 1-sigma deviation of 2.2 percent. The model showed a nonlinear behavior as a function of added fluorocarbons. The mean of the Monte Carlo runs was less nonlinear than the model run using the mean values of the input parameters.
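The constrained-ensemble idea, keeping only Monte Carlo runs whose simulated present-day composition falls within the measured range, can be sketched with a toy two-parameter model (all distributions, values, and acceptance bounds below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Two uncertain rate coefficients sampled log-normally about nominal values
k1 = rng.lognormal(mean=0.0, sigma=0.3, size=n)
k2 = rng.lognormal(mean=0.0, sigma=0.3, size=n)

# Toy stand-ins: a measurable present-day species and a perturbation response
present_no2 = 10.0 * k1 / k2
ozone_change = -6.0 * k1 / (k1 + k2)            # percent, illustrative

# Unconstrained ensemble statistics
mean_all = ozone_change.mean()

# Constrain the ensemble: keep only runs whose present-day "NO2" lies
# within an assumed measurement range, as in the paper's approach
ok = (present_no2 > 8.0) & (present_no2 < 12.5)
mean_ok = ozone_change[ok].mean()
```

Conditioning on agreement with present-day observations narrows the spread of the predicted perturbation, which is the mechanism behind the reduction from a 5.5 percent to a 2.2 percent 1-sigma width reported above.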
NASA Astrophysics Data System (ADS)
Kasprzyk, J. R.; Reed, P. M.; Kirsch, B. R.; Characklis, G. W.
2009-12-01
Risk-based water supply management presents severe cognitive, computational, and social challenges to planning in a changing world. Decision aiding frameworks must confront the cognitive biases implicit to risk, the severe uncertainties associated with long term planning horizons, and the consequent ambiguities that shape how we define and solve water resources planning and management problems. This paper proposes and demonstrates a new interactive framework for sensitivity informed de novo programming. The theoretical focus of our many-objective de novo programming is to promote learning and evolving problem formulations to enhance risk-based decision making. We have demonstrated our proposed de novo programming framework using a case study for a single city’s water supply in the Lower Rio Grande Valley (LRGV) in Texas. Key decisions in this case study include the purchase of permanent rights to reservoir inflows and anticipatory thresholds for acquiring transfers of water through optioning and spot leases. A 10-year Monte Carlo simulation driven by historical data is used to provide performance metrics for the supply portfolios. The three major components of our methodology are Sobol global sensitivity analysis, many-objective evolutionary optimization, and interactive tradeoff visualization. The interplay between these components allows us to evaluate alternative design metrics, their decision variable controls, and the consequent system vulnerabilities. Our LRGV case study measures water supply portfolios’ efficiency, reliability, and utilization of transfers in the water supply market. The sensitivity analysis is used interactively over interannual, annual, and monthly time scales to indicate how the problem controls change as a function of the timescale of interest. These results have then been used to improve our exploration and understanding of LRGV costs, vulnerabilities, and the water portfolios’ critical reliability constraints. These results
This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
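The polynomial chaos expansion underlying the SRSM can be sketched for a single standard-normal input: project the response onto probabilists' Hermite polynomials, then read the mean and variance off the coefficients. The exponential response function is purely illustrative:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def g(xi):
    # hypothetical model response to one standard-normal input
    return np.exp(0.3 * xi)

order = 4
nodes, weights = He.hermegauss(20)       # Gauss quadrature for weight e^{-x^2/2}
norm = np.sqrt(2 * np.pi)                # total mass of the Gaussian weight

# projection: c_n = E[g(xi) * He_n(xi)] / n!
coeffs = []
for k in range(order + 1):
    He_k = He.hermeval(nodes, [0] * k + [1])
    coeffs.append(np.sum(weights * g(nodes) * He_k) / norm / factorial(k))

# mean and variance follow directly from the expansion coefficients
mean = coeffs[0]
var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print(round(mean, 4), round(var, 4))
```

For this lognormal response the exact mean is exp(0.045) ≈ 1.0460 and the exact variance ≈ 0.1030, which the order-4 expansion reproduces to four decimals.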
A sensitivity study of s-process: the impact of uncertainties from nuclear reaction rates
NASA Astrophysics Data System (ADS)
Vinyoles, N.; Serenelli, A.
2016-01-01
The slow neutron capture process (s-process) is responsible for the production of about half the elements beyond the Fe-peak. The production sites and the conditions under which the different components of the s-process occur are relatively well established. A detailed quantitative understanding of s-process nucleosynthesis may shed light on physical processes, e.g. convection and mixing, taking place in the production sites. For this, it is important that the impact of uncertainties in the nuclear physics is well understood. In this work we perform a study of the sensitivity of s-process nucleosynthesis, with particular emphasis on the main component, to the nuclear reaction rates. Our aims are: to quantify the current uncertainties in the production factors of s-process elements originating from nuclear physics, and to identify key nuclear reactions that require more precise experimental determinations. We studied two different production sites in which the s-process occurs with very different neutron exposures: 1) a low-mass extremely metal-poor star during the He-core flash (n_n reaching values of ∼10^14 cm^-3); 2) the TP-AGB phase of a M⊙, Z=0.01 model, the typical site of the main s-process component (n_n up to 10^8-10^9 cm^-3). In the first case, the main variation in the production of s-process elements comes from the neutron poisons, with relative variations around 30%-50%. In the second, the neutron poisons are not as important because of the higher metallicity of the star, which actually acts as a seed; therefore, the final errors of the abundances are much lower, around 10%-25%.
Martelli, Saulo; Valente, Giordano; Viceconti, Marco; Taddei, Fulvia
2015-01-01
Subject-specific musculoskeletal models have become key tools in the clinical decision-making process. However, the sensitivity of the calculated solution to the unavoidable errors committed while deriving the model parameters from the available information is not fully understood. The aim of this study was to calculate the sensitivity of all the kinematics and kinetics variables to the inter-examiner uncertainty in the identification of the lower limb joint models. The study was based on the computed tomography of the entire lower limb from a single donor and the motion capture from a body-matched volunteer. The hip, the knee and the ankle joint models were defined following the International Society of Biomechanics recommendations. Using a software interface, five expert anatomists identified on the donor's images the necessary bony locations five times with a three-day time interval. A detailed subject-specific musculoskeletal model was taken from an earlier study, and re-formulated to define the joint axes by inputting the necessary bony locations. Gait simulations were run using OpenSim within a Monte Carlo stochastic scheme, where the locations of the bony landmarks were varied randomly according to the estimated distributions. Trends for the joint angles, moments, and the muscle and joint forces did not substantially change after parameter perturbations. The highest variations were as follows: (a) 11° calculated for the hip rotation angle, (b) 1% BW × H calculated for the knee moment and (c) 0.33 BW calculated for the ankle plantarflexor muscles and the ankle joint forces. In conclusion, the identification of the joint axes from clinical images is a robust procedure for human movement modelling and simulation. PMID:24963785
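The Monte Carlo scheme described in this abstract (randomly perturbing landmark locations and re-running the simulation) can be sketched for a single derived quantity. The landmark coordinates and the 2.5 mm per-coordinate precision below are hypothetical stand-ins, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical bony landmarks (mm): medial and lateral femoral epicondyles
medial = np.array([430.0, 110.0, 55.0])
lateral = np.array([430.0, 110.0, -35.0])
sigma = 2.5   # assumed inter-examiner identification precision (mm)

def flexion_axis_angle(p_med, p_lat):
    """Angle (deg) between the epicondylar axis and the mediolateral z axis."""
    axis = p_med - p_lat
    axis = axis / np.linalg.norm(axis)
    return np.degrees(np.arccos(abs(axis[2])))

angles = []
for _ in range(10_000):                 # Monte Carlo over identification error
    a = flexion_axis_angle(medial + rng.normal(0, sigma, 3),
                           lateral + rng.normal(0, sigma, 3))
    angles.append(a)

print(f"axis deviation: mean {np.mean(angles):.1f} deg, "
      f"95th percentile {np.percentile(angles, 95):.1f} deg")
```

The spread of the resulting angle distribution plays the role of the joint-angle variations reported in the abstract.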
Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology
NASA Astrophysics Data System (ADS)
Ratto, M.; Young, P. C.; Romanowicz, R.; Pappenberger, F.; Saltelli, A.; Pagano, A.
2007-05-01
In this paper, we discuss a joint approach to calibration and uncertainty estimation for hydrologic systems that combines a top-down, data-based mechanistic (DBM) modelling methodology; and a bottom-up, reductionist modelling methodology. The combined approach is applied to the modelling of the River Hodder catchment in North-West England. The top-down DBM model provides a well identified, statistically sound yet physically meaningful description of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. These characteristics are defined inductively from the data without prior assumptions about the model structure, other than it is within the generic class of nonlinear differential-delay equations. The bottom-up modelling is developed using the TOPMODEL, whose structure is assumed a priori and is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters. The subsequent exercises in calibration and validation, performed with Generalized Likelihood Uncertainty Estimation (GLUE), are carried out in the light of the GSA and DBM analyses. This allows for the pre-calibration of the priors used for GLUE, in order to eliminate dynamical features of the TOPMODEL that have little effect on the model output and would be rejected at the structure identification phase of the DBM modelling analysis. In this way, the elements of meaningful subjectivity in the GLUE approach, which allow the modeler to interact in the modelling process by constraining the model to have a specific form prior to calibration, are combined with other more objective, data-based benchmarks for the final uncertainty estimation. GSA plays a major role in building a bridge between the hypothetico-deductive (bottom-up) and inductive (top-down) approaches and helps to improve the
Uncertainty and sensitivity in optode-based shelf-sea net community production estimates
NASA Astrophysics Data System (ADS)
Hull, Tom; Greenwood, Naomi; Kaiser, Jan; Johnson, Martin
2016-02-01
Coastal seas represent one of the most valuable and vulnerable habitats on Earth. Understanding biological productivity in these dynamic regions is vital to understanding how they may influence and be affected by climate change. A key metric to this end is net community production (NCP), the net effect of autotrophy and heterotrophy; however accurate estimation of NCP has proved to be a difficult task. Presented here is a thorough exploration and sensitivity analysis of an oxygen mass-balance-based NCP estimation technique applied to the Warp Anchorage monitoring station, which is a permanently well-mixed shallow area within the River Thames plume. We have developed an open-source software package for calculating NCP estimates and air-sea gas flux. Our study site is identified as a region of net heterotrophy with strong seasonal variability. The annual cumulative net community oxygen production is calculated as (-5 ± 2.5) mol m^-2 a^-1. Short-term daily variability in oxygen is demonstrated to make accurate individual daily estimates challenging. The effects of bubble-induced supersaturation are shown to have a large influence on cumulative annual estimates and are the source of much uncertainty.
Uncertainty and sensitivity in optode-based shelf-sea net community production estimates
NASA Astrophysics Data System (ADS)
Hull, T.; Greenwood, N.; Kaiser, J.; Johnson, M.
2015-09-01
Coastal seas represent one of the most valuable and vulnerable habitats on Earth. Understanding biological productivity in these dynamic regions is vital to understanding how they may influence and be affected by climate change. A key metric to this end is net community production (NCP), the net effect of autotrophy and heterotrophy; however, accurate estimation of NCP has proved to be a difficult task. Presented here is a thorough exploration and sensitivity analysis of an oxygen mass-balance-based NCP estimation technique applied to the Warp Anchorage monitoring station, which is a permanently well-mixed shallow area within the Thames river plume. We have developed an open-source software package for calculating NCP estimates and air-sea gas flux. Our study site is identified as a region of net heterotrophy with strong seasonal variability. The annual cumulative net community oxygen production is calculated as (-5 ± 2.5) mol m^-2 a^-1. Short-term daily variability in oxygen is demonstrated to make accurate individual daily estimates challenging. The effects of bubble-induced supersaturation are shown to have a large influence on cumulative annual estimates, and are the source of much uncertainty.
Regional sensitivity analysis of aleatory and epistemic uncertainties on failure probability
NASA Astrophysics Data System (ADS)
Li, Guijie; Lu, Zhenzhou; Lu, Zhaoyan; Xu, Jia
2014-06-01
To analyze the effects of specific regions of the aleatory and epistemic uncertain variables on the failure probability, a regional sensitivity analysis (RSA) technique called the contribution to failure probability (CFP) plot is developed in this paper. This RSA technique can detect the important aleatory and epistemic uncertain variables, and also measure the contribution of specific regions of these important input variables to the failure probability. When computing the proposed CFP, the aleatory and epistemic uncertain variables are modeled by random and interval variables, respectively. Then, based on the hybrid probabilistic and interval model (HPIM) and the basic probability assignments in evidence theory, the failure probability of the structure with aleatory and epistemic uncertainties can be obtained through a successive construction of the second-level limit state function and the corresponding reliability analysis. A kriging method is used to establish the surrogate model of the second-level limit state function to improve the computational efficiency. Two practical examples are employed to test the effectiveness of the proposed RSA technique, and the efficiency and accuracy of the established kriging-based solution.
Sensitivity of power functions to aggregation: Bias and uncertainty in radar rainfall retrieval
NASA Astrophysics Data System (ADS)
Sassi, M. G.; Leijnse, H.; Uijlenhoet, R.
2014-10-01
Rainfall retrieval using weather radar relies on power functions between radar reflectivity Z and rain rate R. The nonlinear nature of these relations complicates the comparison of rainfall estimates employing reflectivities measured at different scales. Transforming Z into R using relations that have been derived for other scales results in a bias and added uncertainty. We investigate the sensitivity of Z-R relations to spatial and temporal aggregation using high-resolution reflectivity fields for five rainfall events. Existing Z-R relations were employed to investigate the behavior of aggregated Z-R relations with scale, the aggregation bias, and the variability of the estimated rain rate. The prefactor and the exponent of aggregated Z-R relations systematically diverge with scale, showing a break that is event-dependent in the temporal domain and nearly constant in space. The systematic error associated with the aggregation bias at a given scale can become of the same order as the corresponding random error associated with intermittent sampling. The bias can be constrained by including information about the variability of Z within a certain scale of aggregation, and is largely captured by simple functions of the coefficient of variation of Z. Several descriptors of spatial and temporal variability of the reflectivity field are presented, to establish the links between variability descriptors and resulting aggregation bias. Prefactors in Z-R relations can be related to multifractal properties of the rainfall field. We find evidence of scaling breaks in the structural analysis of spatial rainfall with aggregation.
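The aggregation bias discussed in this abstract arises because the Z-R power law is nonlinear: retrieving R from an averaged Z is not the same as averaging pixel-scale retrievals. A sketch using the classic Marshall-Palmer coefficients (Z = 200 R^1.6) and a synthetic lognormal rain field standing in for the high-resolution reflectivity data:

```python
import numpy as np

rng = np.random.default_rng(1)

a, b = 200.0, 1.6                # Marshall-Palmer Z = a * R^b (Z in mm^6 m^-3)

def z_to_r(Z):
    """Invert the power law to retrieve rain rate from reflectivity."""
    return (Z / a) ** (1.0 / b)

# synthetic high-resolution rain-rate field (mm/h) with lognormal variability
R_fine = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)
Z_fine = a * R_fine ** b

R_true = R_fine.mean()                  # average of pixel-scale rain rates
R_aggregated = z_to_r(Z_fine.mean())    # retrieval from the aggregated reflectivity

bias = R_aggregated / R_true
print(f"aggregation bias: {bias:.2f}")  # > 1: retrieval from mean Z overestimates
```

Because R = (Z/a)^(1/b) is concave in Z (1/b < 1), Jensen's inequality makes the aggregated retrieval systematically high; the magnitude of the bias grows with the sub-grid variability of Z, as the abstract notes.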
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...
Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology
NASA Astrophysics Data System (ADS)
Ratto, M.; Young, P. C.; Romanowicz, R.; Pappenberger, F.; Saltelli, A.; Pagano, A.
2006-09-01
In this paper, we discuss the problem of calibration and uncertainty estimation for hydrologic systems from two points of view: a bottom-up, reductionist approach; and a top-down, data-based mechanistic (DBM) approach. The two approaches are applied to the modelling of the River Hodder catchment in North-West England. The bottom-up approach is developed using the TOPMODEL, whose structure is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters; and the subsequent exercises in calibration and validation are carried out in the light of this sensitivity analysis. GSA helps to improve the calibration of hydrological models, making their properties more transparent and highlighting mis-specification problems. The DBM model provides a quick and efficient analysis of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. TOPMODEL calibration takes more time and it explains the flow data a little less well than the DBM model. The main differences in the modelling results are in the nature of the models and the flow decomposition they suggest. The "quick" (63%) and "slow" (37%) components of the decomposed flow identified in the DBM model show a clear partitioning of the flow, with the quick component apparently accounting for the effects of surface and near surface processes; and the slow component arising from the displacement of groundwater into the river channel (base flow). On the other hand, the two output flow components in TOPMODEL have a different physical interpretation, with a single flow component (95%) accounting for both slow (subsurface) and fast (surface) dynamics, while the other, very small component (5%) is interpreted as an instantaneous surface runoff generated by rainfall falling on areas of saturated soil. The results of
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
2014-06-15
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20-40% or even more. It is therefore necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameters for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment
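The propagation step this abstract describes (sampling uncertain biological parameters and re-evaluating the dose quantity many times) can be sketched for EQD2 alone. The total dose, fractionation, and α/β distribution below are hypothetical illustration values, not the study's:

```python
import numpy as np

rng = np.random.default_rng(7)

D, d = 60.0, 3.0          # hypothetical total dose and dose per fraction (Gy)
n = 100_000

# uncertain input: alpha/beta ratio with a large (~30%) relative spread
ab = rng.normal(10.0, 3.0, n)        # assumed alpha/beta distribution (Gy)
ab = ab[ab > 1.0]                    # discard unphysical draws

# linear-quadratic EQD2, evaluated once per sampled parameter set
eqd2 = D * (d + ab) / (2.0 + ab)

print(f"EQD2 = {eqd2.mean():.1f} Gy, 95% interval "
      f"[{np.percentile(eqd2, 2.5):.1f}, {np.percentile(eqd2, 97.5):.1f}]")
```

The spread of the resulting EQD2 distribution quantifies how the input uncertainty propagates; a full variance-based SA would additionally partition that spread among several uncertain inputs.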
Gerhard Strydom
2011-01-01
The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This report summarizes the results of the initial investigations performed with SUSA, utilizing a typical High Temperature Reactor benchmark (the IAEA CRP-5 PBMR 400 MW Exercise 2) and the PEBBED-THERMIX suite of codes. The following steps were performed as part of the uncertainty and sensitivity analysis: 1. Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capacity and thermal conductivity of the fuel, pebble bed and reflector graphite. 2. The input parameter variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into 4 sets of 100 and 2 sets of 200 Steady State and Depressurized Loss of Forced Cooling (DLOFC) transient calculations each. 3. The steady state and DLOFC maximum fuel temperature, as well as the daily pebble fuel load rate data, were supplied to SUSA as model output parameters of interest. The 6 data sets were statistically analyzed to determine the 5% and 95% percentile values for each of the 3 output parameters with a 95% confidence level, and typical statistical indicators were also generated (e.g. Kendall, Pearson and Spearman coefficients). 4. A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the
Ivanova, T.; Laville, C.; Dyrda, J.; Mennerdahl, D.; Golovko, Y.; Raskach, K.; Tsiboulia, A.; Lee, G. S.; Woo, S. W.; Bidaud, A.; Sabouri, P.; Bledsoe, K.; Rearden, B.; Gulliford, J.; Michel-Sendis, F.
2012-07-01
The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
NASA Astrophysics Data System (ADS)
Ploquin, Nicolas; Song, William; Lau, Harold; Dunscombe, Peter
2005-08-01
The goal of this study was to assess the impact of set-up uncertainty on compliance with the objectives and constraints of an intensity modulated radiation therapy protocol for early-stage cancer of the oropharynx. As the convolution approach to the quantitative study of set-up uncertainties cannot accommodate either surface contours or internal inhomogeneities, both of which are highly relevant to sites in the head and neck, we have employed the more resource intensive direct simulation method. The impact of both systematic (variable from 0 to 6 mm) and random (fixed at 2 mm) set-up uncertainties on compliance with the criteria of the RTOG H-0022 protocol has been examined for eight geometrically complex structures: CTV66 (gross tumour volume and palpable lymph nodes suspicious for metastases), CTV54 (lymph node groups or surgical neck levels at risk of subclinical metastases), glottic larynx, spinal cord, brainstem, mandible and left and right parotids. In a probability-based approach, both dose-volume histograms and equivalent uniform doses were used to describe the dose distributions achieved by plans for two patients, in the presence of set-up uncertainty. The equivalent uniform dose is defined to be that dose which, when delivered uniformly to the organ of interest, will lead to the same response as the non-uniform dose under consideration. For systematic set-up uncertainties greater than 2 mm and 5 mm respectively, coverage of the CTV66 and CTV54 could be significantly compromised. Directional sensitivity was observed in both cases. Most organs at risk (except the glottic larynx which did not comply under static conditions) continued to meet the dose constraints up to 4 mm systematic uncertainty for both plans. The exception was the contralateral parotid gland, which this protocol is specifically designed to protect. Sensitivity to systematic set-up uncertainty of 2 mm was observed for this organ at risk in both clinical plans.
Davidson, J.W.; Dudziak, D.J.; Pelloni, S.; Stepanek, J.
1988-01-01
In a recent common Los Alamos/PSI effort, a sensitivity and nuclear data uncertainty path for the modular code system AARE (Advanced Analysis for Reactor Engineering) was developed. This path includes the cross-section code TRAMIX, the one-dimensional finite difference S_N-transport code ONEDANT, the two-dimensional finite element S_N-transport code TRISM, and the one- and two-dimensional sensitivity and nuclear data uncertainty code SENSIBL. Within the framework of the present work, a complete set of forward and adjoint two-dimensional TRISM calculations was performed for the bare, as well as for the Pb- and Be-preceded, LBM using MATXS8 libraries. Then a two-dimensional sensitivity and uncertainty analysis for all cases was performed. The goal of this analysis was the determination of the uncertainties of the calculated tritium production per source neutron from lithium along the central Li2O rod in the LBM. Considered were the contributions from ^1H, ^6Li, ^7Li, ^9Be, ^natC, ^14N, ^16O, ^23Na, ^27Al, ^natSi, ^natCr, ^natFe, ^natNi, and ^natPb. 22 refs., 1 fig., 3 tabs.
Sensitivity and uncertainty analysis for the annual P loss estimator (APLE) model
Technology Transfer Automated Retrieval System (TEKTRAN)
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...
Sensitivity and uncertainty analysis for a field-scale P loss model
Technology Transfer Automated Retrieval System (TEKTRAN)
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...
NASA Astrophysics Data System (ADS)
Smith, Michael S.; Hix, W. Raphael; Parete-Koon, Suzanne; Dessieux, Luc; Ma, Zhanwen; Starrfield, Sumner; Bardayan, Daniel W.; Guidry, Michael W.; Smith, Donald L.; Blackmon, Jeffery C.; Mezzacappa, Anthony
2004-12-01
We utilize multiple-zone, post-processing element synthesis calculations to determine the impact of recent ORNL radioactive ion beam measurements on predictions of novae and X-ray burst simulations. We also assess the correlations between all relevant reaction rates and all synthesized isotopes, and translate nuclear reaction rate uncertainties into abundance prediction uncertainties, via a unique Monte Carlo technique.
Floors: Selection and Maintenance.
ERIC Educational Resources Information Center
Berkeley, Bernard
Flooring for institutional, commercial, and industrial use is described with regard to its selection, care, and maintenance. The following flooring and subflooring material categories are discussed--(1) resilient floor coverings, (2) carpeting, (3) masonry floors, (4) wood floors, and (5) "formed-in-place floors". The properties, problems,…
Gul, R; Bernhard, S
2015-11-01
In computational cardiovascular models, parameters are one of the major sources of uncertainty, which make the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol, FAST and a sparse-grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped-parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters, as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location-dependent and temporally dependent sensitivities were also discussed, to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities on pressure and flow at all locations of the carotid bifurcation. Results of network location and temporal variabilities revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. PMID:26367184
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...
Bean, J.E.; Berglund, J.W.; Davis, F.J.; Economy, K.; Garner, J.W.; Helton, J.C.; Johnson, J.D.; MacKinnon, R.J.; Miller, J.; O'Brien, D.G.; Ramsey, J.L.; Schreiber, J.D.; Shinta, A.; Smith, L.N.; Stockman, C.; Stoelzel, D.M.; Vaughn, P.
1998-09-01
The Waste Isolation Pilot Plant (WIPP) is located in southeastern New Mexico and is being developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. A detailed performance assessment (PA) for the WIPP was carried out in 1996 and supports an application by the DOE to the U.S. Environmental Protection Agency (EPA) for the certification of the WIPP for the disposal of TRU waste. The 1996 WIPP PA uses a computational structure that maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000 yr regulatory period that applies to the WIPP and subjective uncertainty arising from the imprecision with which many of the quantities required in the PA are known. Important parts of this structure are (1) the use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (2) the use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (3) the efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The use of Latin hypercube sampling generates a mapping from imprecisely known analysis inputs to analysis outcomes of interest that provides both a display of the uncertainty in analysis outcomes (i.e., uncertainty analysis) and a basis for investigating the effects of individual inputs on these outcomes (i.e., sensitivity analysis). The sensitivity analysis procedures used in the PA include examination of scatterplots, stepwise regression analysis, and partial correlation analysis. Uncertainty and sensitivity analysis results obtained as part of the 1996 WIPP PA are presented and discussed. Specific topics considered include two-phase flow in the vicinity of the repository, radionuclide release from the repository, fluid flow and radionuclide
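Latin hypercube sampling, the first element of the computational structure described in this abstract, stratifies each input into equal-probability bins and pairs the strata at random across dimensions, so every bin of every input is sampled exactly once. A minimal sketch:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """One point per equal-probability stratum in each dimension,
    with strata paired at random across dimensions."""
    u = rng.uniform(size=(n_samples, n_vars))
    samples = np.empty_like(u)
    for j in range(n_vars):
        perm = rng.permutation(n_samples)          # shuffle the stratum order
        samples[:, j] = (perm + u[:, j]) / n_samples
    return samples    # uniform on [0, 1); map through inverse CDFs as needed

rng = np.random.default_rng(3)
X = latin_hypercube(100, 4, rng)

# every one of the 100 strata in every dimension holds exactly one point
counts = np.floor(X * 100).astype(int)
print(all(sorted(counts[:, j].tolist()) == list(range(100)) for j in range(4)))
```

Mapping the uniform columns through the inverse CDFs of the (here unspecified) input distributions yields the stratified input vectors fed to the mechanistic calculations.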
Understanding ozone response to its precursor emissions is crucial for effective air quality management practices. This nonlinear response is usually simulated using chemical transport models, and the modeling results are affected by uncertainties in emissions inputs. In this stu...
Pruet, J
2007-06-23
This report describes Kiwi, a program developed at Livermore to enable mature studies of the relation between imperfectly known nuclear physics and uncertainties in simulations of complicated systems. Kiwi includes a library of evaluated nuclear data uncertainties, tools for modifying data according to these uncertainties, and a simple interface for generating processed data used by transport codes. Kiwi also provides access to calculations of k eigenvalues for critical assemblies. This allows the user to check implications of data modifications against integral experiments for multiplying systems. Kiwi is written in Python. The uncertainty library has the same format and directory structure as the native ENDL used at Livermore. Calculations for critical assemblies rely on deterministic and Monte Carlo codes developed by B division.
The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
NASA Astrophysics Data System (ADS)
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar M.
2016-05-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
Rising, M.E.
2015-01-15
The prompt fission neutron spectrum (PFNS) uncertainties in the n+²³⁹Pu fission reaction are used to study the impact on several fast critical assemblies modeled in the MCNP6.1 code. The newly developed sensitivity capability in MCNP6.1 is used to compute the k_eff sensitivity coefficients with respect to the PFNS. In comparison, the covariance matrix given in the ENDF/B-VII.1 library is decomposed and randomly sampled realizations of the PFNS are propagated through the criticality calculation, preserving the PFNS covariance matrix. The information gathered from both approaches, including the overall k_eff uncertainty, is statistically analyzed. Overall, the forward and backward approaches agree as expected. The results from the new method appear to be limited by the process used to evaluate the PFNS; this is not necessarily a flaw of the method itself. Final thoughts and directions for future work are suggested.
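The "decompose and randomly sample" step can be sketched as follows. The 4-group spectrum and covariance below are hypothetical stand-ins for an evaluated PFNS covariance; the point is that a Cholesky factor of the covariance turns independent normal draws into correlated spectrum realizations whose sample statistics reproduce the covariance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 4-group spectrum (normalized to 1) and a small relative
# covariance matrix standing in for an evaluated PFNS covariance.
chi = np.array([0.2, 0.4, 0.3, 0.1])
rel_cov = 1e-4 * np.array([
    [4.0, 2.0, 1.0, 0.0],
    [2.0, 4.0, 2.0, 1.0],
    [1.0, 2.0, 4.0, 2.0],
    [0.0, 1.0, 2.0, 4.0],
])
cov = rel_cov * np.outer(chi, chi)      # absolute covariance

L = np.linalg.cholesky(cov)             # cov = L @ L.T
samples = chi + rng.standard_normal((10000, 4)) @ L.T

# Renormalize each realization so it still integrates to 1
samples /= samples.sum(axis=1, keepdims=True)

sample_cov = np.cov(samples, rowvar=False)
```

Each row of `samples` is one perturbed spectrum that could be fed through a criticality calculation; the ensemble of resulting k_eff values then carries the propagated PFNS uncertainty.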
NASA Astrophysics Data System (ADS)
He, R.; Pang, B.
2015-05-01
The increasing water problems and eco-environmental issues of the Heihe River basin have attracted widespread attention. In this research, the VIC (Variable Infiltration Capacity) model was selected to simulate the water cycle of the upstream Heihe River basin. The GLUE (Generalized Likelihood Uncertainty Estimation) method was used to study the sensitivity of the model parameters and the uncertainty of model outputs. The results showed that the Nash–Sutcliffe efficiency coefficient was 0.62 in the calibration period and 0.64 in the validation period. Of the seven selected parameters, Dm (maximum baseflow that can occur from the third soil layer), Ws (fraction of the maximum soil moisture of the third soil layer where non-linear baseflow occurs), and d1 (soil depth of the first soil layer) were very sensitive, especially d1. Observed discharges fell almost entirely within the 95% prediction confidence range.
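A minimal GLUE sketch, assuming a toy linear-reservoir model and a Nash–Sutcliffe behavioural threshold (the actual study used the VIC model and seven parameters; everything below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def nash_sutcliffe(obs_q, sim_q):
    return 1.0 - np.sum((obs_q - sim_q) ** 2) / np.sum((obs_q - obs_q.mean()) ** 2)

def simulate(k, rain):
    # Toy linear-reservoir "hydrologic model" (illustrative stand-in for
    # VIC): storage fills with rain and drains at fractional rate k.
    s, q = 0.0, np.empty(len(rain))
    for t in range(len(rain)):
        s += rain[t]
        q[t] = k * s
        s -= q[t]
    return q

rain = rng.exponential(2.0, 100)
obs_q = simulate(0.3, rain) + rng.normal(0.0, 0.05, 100)   # synthetic "observations"

# GLUE: sample the prior, keep parameter sets above a behavioural threshold
k_samples = rng.uniform(0.05, 0.8, 2000)
ns = np.array([nash_sutcliffe(obs_q, simulate(k, rain)) for k in k_samples])
behavioural = k_samples[ns > 0.6]

# Simple (unweighted) 95% prediction band from the behavioural runs
sims = np.array([simulate(k, rain) for k in behavioural])
lower = np.percentile(sims, 2.5, axis=0)
upper = np.percentile(sims, 97.5, axis=0)
```

The spread of `behavioural` expresses parameter sensitivity (a sensitive parameter yields a narrow behavioural range), and the `lower`/`upper` band is the GLUE-style output uncertainty envelope.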
Sensitivity of Polar Stratospheric Ozone Loss to Uncertainties in Chemical Reaction Kinetics
NASA Technical Reports Server (NTRS)
Kawa, S. Randolph; Stolarski, Richard S.; Douglass, Anne R.; Newman, Paul A.
2008-01-01
Several recent observational and laboratory studies of processes involved in polar stratospheric ozone loss have prompted a reexamination of aspects of our understanding of this key indicator of global change. To a large extent, our confidence in understanding and projecting changes in polar and global ozone is based on our ability to simulate these processes in numerical models of chemistry and transport. The fidelity of the models is assessed in comparison with a wide range of observations. These models depend on laboratory-measured kinetic reaction rates and photolysis cross sections to simulate molecular interactions. A typical stratospheric chemistry mechanism has on the order of 50–100 species undergoing over a hundred intermolecular reactions and several tens of photolysis reactions. The rates of all of these reactions are subject to uncertainty, some substantial. Given the complexity of the models, however, it is difficult to quantify uncertainties in many aspects of the system. In this study we use a simple box-model scenario for Antarctic ozone to estimate the uncertainty in loss attributable to known reaction kinetic uncertainties. Following the method of earlier work, rates and uncertainties from the latest laboratory evaluations are applied in random combinations. We determine the key reactions and rates contributing the largest potential errors and compare the results to observations to evaluate which combinations are consistent with atmospheric data. Implications for our theoretical and practical understanding of polar ozone loss will be assessed.
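The "random combinations" step can be sketched as follows. Kinetic evaluations typically report a rate k and an uncertainty factor f such that the 1-sigma band is k/f to k·f; each Monte Carlo trial multiplies every rate by f^z with z drawn from a standard normal. The rate constants, factors, and the loss proxy below are all invented for illustration and are not the evaluated kinetics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical nominal rate constants (cm^3 s^-1) and 1-sigma
# uncertainty factors for three reactions (made-up values).
k_nominal = np.array([1.0e-12, 3.0e-11, 5.0e-15])
f = np.array([1.15, 1.3, 2.0])

def ozone_loss_proxy(k):
    # Toy stand-in for a box-model ozone loss rate: not real chemistry.
    return k[..., 0] * 0.5 + k[..., 1] * 0.01 + k[..., 2] * 100.0

n = 5000
# Random combinations: each trial perturbs every rate by f**z, z ~ N(0,1)
z = rng.standard_normal((n, 3))
k_trials = k_nominal * f ** z
losses = ozone_loss_proxy(k_trials)

# Spread of the predicted loss attributable to the kinetic uncertainties
spread = np.percentile(losses, 97.5) / np.percentile(losses, 2.5)
```

Comparing which sampled combinations reproduce observed loss (as in the study) amounts to filtering `k_trials` on the corresponding `losses`.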
James, Scott; Cohan, Alexander
2005-08-01
Given pre-existing Groundwater Modeling System (GMS) models of the Horonobe Underground Research Laboratory (URL) at both the regional and site scales, this work performs an example uncertainty analysis for performance assessment (PA) applications. After a general overview of uncertainty and sensitivity analysis techniques, the existing GMS site-scale model is converted to a PA model of the steady-state conditions expected after URL closure. This is done to examine the impact of uncertainty in site-specific data in conjunction with conceptual model uncertainty regarding the location of the Oomagari Fault. A heterogeneous stochastic model is developed and corresponding flow fields and particle tracks are calculated. In addition, a quantitative analysis of the ratio of dispersive to advective forces, the F-ratio, is performed for stochastic realizations of each conceptual model. Finally, a one-dimensional transport abstraction is modeled based on the particle path lengths and the materials through which each particle passes to yield breakthrough curves at the model boundary. All analyses indicate that accurate characterization of the Oomagari Fault with respect to both location and hydraulic conductivity is critical to PA calculations. This work defines and outlines typical uncertainty and sensitivity analysis procedures and demonstrates them with example PA calculations relevant to the Horonobe URL. Acknowledgement: This project was funded by Japan Nuclear Cycle Development Institute (JNC). This work was conducted jointly between Sandia National Laboratories (SNL) and JNC under a joint JNC/U.S. Department of Energy (DOE) work agreement. Performance assessment calculations were conducted and analyzed at SNL based on a preliminary model by Kashima, Quintessa, and JNC and include significant input from JNC to make sure the results are relevant for the Japanese nuclear waste program.
NASA Astrophysics Data System (ADS)
Pecknold, Sean; Osler, John C.
2012-02-01
Accurate sonar performance prediction modelling depends on a good knowledge of the local environment, including bathymetry, oceanography and seabed properties. The function of rapid environmental assessment (REA) is to obtain relevant environmental data in a tactically relevant time frame, with REA methods categorized by the nature and immediacy of their application, from historical databases through remotely sensed data to in situ acquisition. However, each REA approach is subject to its own set of uncertainties, which are in turn transferred to uncertainty in sonar performance prediction. An approach to quantify and manage this uncertainty has been developed through the definition of sensitivity metrics and Monte Carlo simulations of acoustic propagation using multiple realizations of the marine environment. This approach can be simplified by using a linearized two-point sensitivity measure based on the statistics of the environmental parameters used by acoustic propagation models. The statistical properties of the environmental parameters may be obtained from compilations of historical data, forecast conditions or in situ measurements. During a field trial off the coast of Nova Scotia, a set of environmental data, including oceanographic and geoacoustic parameters, were collected together with acoustic transmission loss data. At the same time, several numerical models to forecast the oceanographic conditions were run for the area, including 5- and 1-day forecasts as well as nowcasts. Data from the model runs are compared to each other and to in situ environmental sampling, and estimates of the environmental uncertainties are calculated. The forecast and in situ data are used with historical geoacoustic databases and geoacoustic parameters collected using REA techniques, respectively, to perform acoustic transmission loss predictions, which are then compared to measured transmission loss. The progression of uncertainties in the marine environment, within and
While there is a high potential for exposure of humans and ecosystems to chemicals released from hazardous waste sites, the degree to which this potential is realized is often uncertain. Conceptually divided among parameter, model, and modeler uncertainties imparted during simula...
Dowdell, S; Grassberger, C; Paganetti, H
2014-06-01
Purpose: Evaluate the sensitivity of intensity-modulated proton therapy (IMPT) lung treatments to systematic and random setup uncertainties combined with motion effects. Methods: Treatment plans with single-field homogeneity restricted to ±20% (IMPT-20%) were compared to plans with no restriction (IMPT-full). 4D Monte Carlo simulations were performed for 10 lung patients using the patient CT geometry with either ±5 mm systematic or random setup uncertainties applied over a 35 × 2.5 Gy(RBE) fractionated treatment course. Intra-fraction, inter-field and inter-fraction motions were investigated. 50 fractionated treatments with systematic or random setup uncertainties applied to each fraction were generated for both IMPT delivery methods and three energy-dependent spot sizes (big spots - BS σ = 18-9 mm, intermediate spots - IS σ = 11-5 mm, small spots - SS σ = 4-2 mm). These results were compared to a Monte Carlo recalculation of the original treatment plan, with results presented as the difference in EUD (ΔEUD), V_95 (ΔV_95) and target homogeneity (ΔD_1-D_99) between the 4D simulations and the Monte Carlo calculation on the planning CT. Results: The standard deviations in the ΔEUD were 1.95±0.47 (BS), 1.85±0.66 (IS) and 1.31±0.35 (SS) times higher in IMPT-full compared to IMPT-20% when ±5 mm systematic setup uncertainties were applied. The ΔV_95 variations were also 1.53±0.26 (BS), 1.60±0.50 (IS) and 1.38±0.38 (SS) times higher for IMPT-full. For random setup uncertainties, the standard deviations of the ΔEUD from 50 simulated fractionated treatments were 1.94±0.90 (BS), 2.13±1.08 (IS) and 1.45±0.57 (SS) times higher in IMPT-full compared to IMPT-20%. For all spot sizes considered, the ΔD_1-D_99 coincided within the uncertainty limits for the two IMPT delivery methods, with the mean value always higher for IMPT-full. Statistical analysis showed significant differences between the IMPT-full and IMPT-20% dose distributions for the
Gureghian, A.B.; Sagar, B.
1993-12-31
This paper presents a method for sensitivity and uncertainty analyses of a hypothetical nuclear waste repository located in a layered and fractured unconfined aquifer. Groundwater travel time (GWTT) has been selected as the performance measure. The repository is located in the unsaturated zone, and the source of aquifer recharge is due solely to steady infiltration impinging uniformly over the surface area to be modeled. The equivalent porous media concept is adopted to model the fractured zone in the flow field. The evaluation of pathlines and travel time of water particles in the flow domain is performed based on a Lagrangian concept. The Bubnov-Galerkin finite-element method is employed to solve the primary flow problem (non-linear), the equation of motion, and the adjoint sensitivity equations. The matrix equations are solved with a Gaussian elimination technique using sparse matrix solvers. The sensitivity measure corresponds to the first derivative of the performance measure (GWTT) with respect to the parameters of the system. The uncertainty in the computed GWTT is quantified by using the first-order second-moment (FOSM) approach, a probabilistic method that relies on the mean and variance of the system parameters and the sensitivity of the performance measure with respect to these parameters. A test case corresponding to a layered and fractured, unconfined aquifer is then presented to illustrate the various features of the method.
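The FOSM step (first derivatives of the performance measure combined with the parameter covariance, Var(T) ≈ sᵀ C s) can be sketched as follows, with a hypothetical travel-time function and made-up parameter statistics:

```python
import numpy as np

# Hypothetical travel-time model T(theta); the three parameters stand in
# for log-conductivity, porosity, and infiltration rate (illustrative only).
def travel_time(theta):
    logK, phi, q = theta
    return phi * 1000.0 / (np.exp(logK) + q)

theta_mean = np.array([0.0, 0.3, 0.1])     # parameter means
cov = np.diag([0.25, 0.0025, 0.0004])      # parameter covariance (made up)

# Sensitivity measure: first derivatives by central differences
eps = 1e-6
sens = np.empty(3)
for i in range(3):
    dp = np.zeros(3)
    dp[i] = eps
    sens[i] = (travel_time(theta_mean + dp) - travel_time(theta_mean - dp)) / (2 * eps)

# FOSM: mean ~ T at the parameter means, variance ~ s^T C s
T_mean = travel_time(theta_mean)
T_var = sens @ cov @ sens
```

An adjoint formulation (as in the paper) would deliver `sens` from a single extra solve instead of 2·n model runs, but the variance formula is the same.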
Vesselinov, V. V.; Keating, E. H.; Zyvoloski, G. A.
2002-01-01
Predictions and their uncertainty are key aspects of any modeling effort. The prediction uncertainty can be significant when the predictions depend on uncertain system parameters. We analyze prediction uncertainties through constrained nonlinear second-order optimization of an inverse model. The optimized objective function is the weighted squared-difference between observed and simulated system quantities (flux and time-dependent head data). The constraints are defined by the maximization/minimization of the prediction within a given objective-function range. The method is applied in capture-zone analyses of groundwater-supply systems using a three-dimensional numerical model of the Espanola Basin aquifer. We use the finite-element simulator FEHM coupled with parameter-estimation/predictive-analysis code PEST. The model is run in parallel on a multi-processor supercomputer. We estimate sensitivity and uncertainty of model predictions such as capture-zone identification and travel times. While the methodology is extremely powerful, it is numerically intensive.
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.
2011-12-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
Michael Pernice
2012-10-01
Grid-to-rod fretting is the leading cause of fuel failures in pressurized water reactors, and is one of the challenge problems being addressed by the Consortium for Advanced Simulation of Light Water Reactors to guide its efforts to develop a virtual reactor environment. Prior and current efforts in modeling and simulation of grid-to-rod fretting are discussed. Sources of uncertainty in grid-to-rod fretting are also described.
We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...
Fieberg, J.; Jenkins, Kurt J.
2005-01-01
Often landmark conservation decisions are made despite an incomplete knowledge of system behavior and inexact predictions of how complex ecosystems will respond to management actions. For example, predicting the feasibility and likely effects of restoring top-level carnivores such as the gray wolf (Canis lupus) to North American wilderness areas is hampered by incomplete knowledge of the predator-prey system processes and properties. In such cases, global sensitivity measures, such as Sobol' indices, allow one to quantify the effect of these uncertainties on model predictions. Sobol' indices are calculated by decomposing the variance in model predictions (due to parameter uncertainty) into main effects of model parameters and their higher order interactions. Model parameters with large sensitivity indices can then be identified for further study in order to improve predictive capabilities. Here, we illustrate the use of Sobol' sensitivity indices to examine the effect of parameter uncertainty on the predicted decline of elk (Cervus elaphus) population sizes following a hypothetical reintroduction of wolves to Olympic National Park, Washington, USA. The strength of density dependence acting on survival of adult elk and magnitude of predation were the most influential factors controlling elk population size following a simulated wolf reintroduction. In particular, the form of density dependence in natural survival rates and the per-capita predation rate together accounted for over 90% of variation in simulated elk population trends. Additional research on wolf predation rates on elk and natural compensations in prey populations is needed to reliably predict the outcome of predator-prey system behavior following wolf reintroductions.
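First-order Sobol' indices can be estimated by Monte Carlo with a pick-and-freeze (Saltelli-style) scheme: two independent sample matrices A and B, plus matrices AB_i in which column i of A is replaced by that of B. The toy response function below is a hypothetical stand-in for the population-model output, chosen so that one input dominates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy response: strong main effect of x0, an x0-x1 interaction, and a
# weak x2 effect (illustrative stand-in only, not the actual elk model).
def model(x):
    return 4.0 * x[:, 0] + 3.0 * x[:, 1] * x[:, 0] + 0.5 * x[:, 2]

n, d = 100_000, 3
A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

# Saltelli estimator for the first-order index S_i
S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
```

For this function the analytic indices are roughly S0 ≈ 0.90, S1 ≈ 0.07, S2 ≈ 0.007; the shortfall of their sum from 1 is the interaction contribution that the abstract attributes to "higher order interactions."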
NASA Astrophysics Data System (ADS)
Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.
2015-12-01
Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analyses conditioning the results on different seasons and years are being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes
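The sampling step, low-discrepancy points scaled to parameter prior ranges, can be sketched as below. A hand-rolled Halton sequence is substituted for the Sobol sequence to keep the sketch dependency-light (both are low-discrepancy), and the parameter names and ranges are invented placeholders, not the CLM4.5 priors.

```python
import numpy as np

def halton(n, dim):
    """Halton quasi-random sequence in [0, 1)^dim (a stand-in here for
    the Sobol sequence used in the study; both are low-discrepancy)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:dim]
    out = np.empty((n, dim))
    for j, b in enumerate(primes):
        for i in range(n):
            f, r, k = 1.0, 0.0, i + 1
            while k > 0:           # radical-inverse of k in base b
                f /= b
                r += f * (k % b)
                k //= b
            out[i, j] = r
    return out

# Hypothetical uniform prior ranges (parameter: (low, high))
ranges = {
    "slope_conductance": (4.0, 12.0),
    "sla_top":           (0.005, 0.04),
    "leaf_cn":           (15.0, 60.0),
    "frac_n_rubisco":    (0.05, 0.25),
}
lo = np.array([v[0] for v in ranges.values()])
hi = np.array([v[1] for v in ranges.values()])

u = halton(1024, len(ranges))          # points in the unit hypercube
samples = lo + u * (hi - lo)           # scaled to the prior ranges
```

Each of the 1024 rows would parameterize one ensemble member; the quasi-random design fills the parameter space far more evenly than independent random draws at the same sample size.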
Random vibration sensitivity studies of modeling uncertainties in the NIF structures
Swensen, E.A.; Farrar, C.R.; Barron, A.A.; Cornwell, P.
1996-12-31
The National Ignition Facility is a laser fusion project that will provide an above-ground experimental capability for nuclear weapons effects simulation. This facility will achieve fusion ignition utilizing solid-state lasers as the energy driver. The facility will cover an estimated 33,400 m² at an average height of 5-6 stories. Within this complex, a number of beam transport structures will be housed that will deliver the laser beams to the target area within a 50 µm rms radius of the target center. The beam transport structures are approximately 23 m long and reach heights of approximately 2-3 stories. Low-level ambient random vibrations are one of the primary concerns currently controlling the design of these structures. Low-level ambient vibrations, 10⁻¹⁰ g²/Hz over a frequency range of 1 to 200 Hz, are assumed to be present during all facility operations. Each structure described in this paper will be required to achieve and maintain 0.6 µrad rms laser beam pointing stability for a minimum of 2 hours under these vibration levels. To date, finite element (FE) analysis has been performed on a number of the beam transport structures. Certain assumptions have to be made regarding structural uncertainties in the FE models. These uncertainties consist of damping values for concrete and steel, compliance within bolted and welded joints, and assumptions regarding the phase coherence of ground motion components. In this paper, the influence of these structural uncertainties on the predicted pointing stability of the beam transport structures as determined by random vibration analysis will be discussed.
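For a stationary random vibration input, the rms level follows from integrating the PSD over the band. For the flat 10⁻¹⁰ g²/Hz input over 1-200 Hz stated above, this gives about 1.4 × 10⁻⁴ g rms, a quick check on the magnitude of the excitation (the structural response itself would additionally require the FE transfer functions):

```python
import numpy as np

# Flat input acceleration PSD of 1e-10 g^2/Hz over 1-200 Hz, as stated
f = np.linspace(1.0, 200.0, 2000)         # frequency grid, Hz
psd = np.full_like(f, 1.0e-10)            # g^2/Hz

# rms = sqrt of the integral of the PSD over the band (trapezoidal rule)
rms_g = np.sqrt(np.sum(0.5 * (psd[1:] + psd[:-1]) * np.diff(f)))
```

For a frequency-dependent response PSD (e.g., input PSD times |H(f)|² for each FE mode), the same integral applied to the response PSD yields the rms pointing motion that is compared against the 0.6 µrad budget.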
NASA Astrophysics Data System (ADS)
Mateus, C.; Tullos, D.
2014-12-01
This study investigated how reservoir performance varied across different hydrogeologic settings and under plausible future climate scenarios. The study was conducted in the Santiam River basin, OR, USA, comparing the North Santiam basin (NSB), with high permeability and extensive groundwater storage, and the South Santiam basin (SSB), with low permeability, little groundwater storage, and rapid runoff response. We applied projections of future temperature and precipitation from global climate models to a rainfall-runoff model, coupled with a formal Bayesian uncertainty analysis, to project future inflow hydrographs as inputs to a reservoir operations model. The performance of reservoir operations was evaluated as the reliability in meeting flood management, spring and summer environmental flows, and hydropower generation objectives. Despite projected increases in winter flows and decreases in summer flows, results suggested little evidence of a response in reservoir operation performance to a warming climate, with the exception of summer flow targets in the SSB. Independent of climate impacts, historical prioritization of reservoir operations appeared to impact reliability, suggesting areas where operation performance may be improved. Results also highlighted how hydrologic uncertainty is likely to complicate planning for climate change in basins with substantial groundwater interactions.
Sensitivity of the photolysis rate to the uncertainties in spectral solar irradiance variability
NASA Astrophysics Data System (ADS)
Sukhodolov, Timofei; Rozanov, Eugene; Bais, Alkiviadis; Tourpali, Kleareti; Shapiro, Alexander; Telford, Paul; Peter, Thomas; Schmutz, Werner
2014-05-01
The state of the stratospheric ozone layer and temperature structure are mostly maintained by photolytic processes. Therefore, the uncertainties in the magnitude and spectral composition of the spectral solar irradiance (SSI) evolution during the declining phase of the 23rd solar cycle have substantial implications for modeling the middle atmosphere evolution, leading not only to pronounced differences in the heating rates but also affecting photolysis rates. To estimate the role of SSI uncertainties we have compared the most important photolysis rates (O2, O3, and NO2) calculated with the reference radiation code libRadtran using SSI for June 2004 and February 2009 obtained from two models (NRL, COSI) and one observational data set based on SORCE observations. We found that below 40 km, changes in the ozone photolysis can reach several tenths of a percent, caused by changes of the SSI in the Hartley and Huggins bands, while oxygen photolysis changes of several percent are caused by changes of the SSI in the Herzberg continuum and Schumann-Runge bands. For the SORCE data set these changes are 2-4 times higher. We have also evaluated the ability of several photolysis rate calculation methods widely used in atmospheric models to reproduce the absolute values of the photolysis rates and their response to the implied SSI changes. With some caveats, all schemes show good results in the middle stratosphere compared to libRadtran. However, in the troposphere and mesosphere there are more noticeable differences.
Rearden, Bradley T; Duhamel, Isabelle; Letang, Eric
2009-01-01
New TSUNAMI tools of SCALE 6, TSURFER and TSAR, are demonstrated to examine the bias effects of small-worth test materials relative to reference experiments. TSURFER is a data adjustment bias and bias uncertainty assessment tool, and TSAR computes the sensitivity of the change in reactivity between two systems to the cross-section data common to their calculation. With TSURFER, it is possible to examine biases and bias uncertainties in fine detail. For replacement experiments, the application of TSAR to TSUNAMI-3D sensitivity data for pairs of experiments allows the isolation of sources of bias that could otherwise be obscured by materials with more worth in an individual experiment. The application of TSUNAMI techniques in the design of nine reference experiments for the MIRTE program will allow application of these advanced techniques to data acquired in the experimental series. The validation of all materials in a complex criticality safety application likely requires consolidating information from many different critical experiments. For certain materials, such as structural materials or fission products, only a limited number of critical experiments are available, and the fuel and moderator compositions of the experiments may differ significantly from those of the application. In these cases, it is desirable to extract the computational bias of a specific material from an integral k_eff measurement and use that information to quantify the bias due to the use of the same material in the application system. Traditional parametric and nonparametric methods are likely to prove poorly suited for such a consolidation of specific data components from a diverse set of experiments. An alternative choice for consolidating specific data from numerous sources is a data adjustment tool, like the ORNL tool TSURFER (Tool for Sensitivity/Uncertainty analysis of Response Functionals using Experimental Results) from SCALE 6.1. However, even with TSURFER, it may be difficult to
Nichols, W.E.; Freshley, M.D.
1991-10-01
This report documents the results of sensitivity and uncertainty analyses conducted to improve understanding of unsaturated zone ground-water travel time distribution at Yucca Mountain, Nevada. The US Department of Energy (DOE) is currently performing detailed studies at Yucca Mountain to determine its suitability as a host for a geologic repository for the containment of high-level nuclear wastes. As part of these studies, DOE is conducting a series of Performance Assessment Calculational Exercises, referred to as the PACE problems. The work documented in this report represents a part of the PACE-90 problems that addresses the effects of natural barriers of the site that will stop or impede the long-term movement of radionuclides from the potential repository to the accessible environment. In particular, analyses described in this report were designed to investigate the sensitivity of the ground-water travel time distribution to different input parameters and the impact of uncertainty associated with those input parameters. Five input parameters were investigated in this study: recharge rate, saturated hydraulic conductivity, matrix porosity, and two curve-fitting parameters used for the van Genuchten relations to quantify the unsaturated moisture-retention and hydraulic characteristics of the matrix. 23 refs., 20 figs., 10 tabs.
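The van Genuchten moisture-retention relation mentioned above has the closed form θ(h) = θr + (θs − θr)[1 + (α|h|)ⁿ]⁻ᵐ with m = 1 − 1/n, where α and n are the two curve-fitting parameters whose uncertainty the study examined. A short sketch with illustrative parameter values (not the Yucca Mountain calibration):

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Moisture content theta(h) for suction head h (positive, m),
    using the standard Mualem constraint m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Illustrative parameter values only (not site-calibrated)
h = np.logspace(-2, 2, 50)          # suction heads, 0.01-100 m
theta = van_genuchten_theta(h, theta_r=0.05, theta_s=0.35, alpha=1.0, n=1.8)
```

A sensitivity study like the one described would perturb α and n (along with recharge, conductivity, and porosity) and propagate each perturbed retention curve through the unsaturated-flow and travel-time calculation.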
Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764
Kearfott, K.J.; Samei, E.; Han, S.
1995-03-01
An error analysis of the effects of the algorithms used to resolve the deep and shallow dose components for mixed fields from multi-element thermoluminescent dosimeter (TLD) badge systems was undertaken for a commonly used system. Errors were introduced independently into each of the four element readings for a badge, and the effects on the calculated dose equivalents were observed. A normal random number generator was then utilized to introduce simultaneous variations in the element readings for different uncertainties. The Department of Energy Laboratory Accreditation Program radiation fields were investigated. Problems arising from the discontinuous nature of the algorithm were encountered for a number of radiation sources for which the algorithm misidentified the radiation field. Mixed fields of low energy photons and betas were found to present particular difficulties for the algorithm. The study demonstrates the importance of small fluctuations in the TLD element's response in a multi-element approach. 24 refs., 5 figs., 7 tabs.
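The perturbation procedure can be sketched as follows. The dose algorithm below is a deliberately simple linear placeholder; real multi-element badge algorithms are discontinuous and branch on the inferred radiation field, which is precisely what makes them fragile, so this only illustrates how Gaussian perturbations of element readings propagate to the reported doses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear algorithm for a four-element badge (NOT the actual
# algorithm studied): deep dose from the filtered elements, shallow dose
# from the open-window elements.
def dose_algorithm(readings):
    e1, e2, e3, e4 = readings.T
    deep = 0.9 * e1 + 0.1 * e2
    shallow = 0.5 * e3 + 0.5 * e4
    return deep, shallow

nominal = np.array([100.0, 90.0, 120.0, 110.0])   # nominal element readings
n = 50_000
for rel_sigma in (0.02, 0.05, 0.10):
    # Perturb each element reading independently with normal relative noise
    readings = nominal * (1 + rng.normal(0.0, rel_sigma, size=(n, 4)))
    deep, shallow = dose_algorithm(readings)
    print(f"sigma={rel_sigma:.2f}  deep CV={deep.std() / deep.mean():.3f}  "
          f"shallow CV={shallow.std() / shallow.mean():.3f}")
```

With a discontinuous algorithm the same perturbations could additionally flip the field classification, producing the large dose errors the study reports.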
NASA Astrophysics Data System (ADS)
Hill, Mary
2016-04-01
Combining different data types can seem like combining apples and oranges. Yet combining different data types in inverse modeling and uncertainty quantification is important for all types of environmental systems. There are two main methods for combining different data types. - Single objective optimization (SOO) with weighting. - Multi-objective optimization (MOO), in which coefficients for data groups are defined and changed during model development. SOO and MOO are related in that different coefficient values in MOO are equivalent to considering alternative weightings. MOO methods often take many model runs and tend to be much more computationally expensive than SOO, but for SOO the weighting needs to be defined. When alternative models are more important to consider than alternative weightings, SOO can be advantageous (Lu et al. 2012). This presentation considers how to determine the weighting when using SOO. A saltwater intrusion example is used to examine two methods of weighting three data types. The two methods of determining weighting are based on contributions to the objective function, as suggested by Anderson et al. (2015), and on error-based weighting, as suggested by Hill and Tiedeman (2007). The consequences of weighting for measures of uncertainty, the importance and interdependence of parameters, and the importance of observations are presented. This work is relevant to many types of environmental modeling, including climate models, because integrating many kinds of data is often important. The advent of rainfall-runoff models with fewer numerical daemons, such as TOPKAPI and SUMMA, makes the convenient model analysis methods used in this work more useful for many hydrologic problems.
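Error-based weighting in SOO can be sketched in a few lines: each residual is divided by the standard deviation of its measurement error, so squared weighted residuals are dimensionless and observations in different units can be summed into one objective function. The numbers below are illustrative, not from the saltwater intrusion example.

```python
import numpy as np

# Two data types with different units; sigma values are assumed
# measurement-error standard deviations (illustrative only).
obs_heads  = np.array([12.3, 11.8, 13.1])   # hydraulic heads, m
sim_heads  = np.array([12.6, 11.5, 13.0])
sigma_head = 0.5                            # m

obs_flows  = np.array([250.0, 310.0])       # discharges, m3/d
sim_flows  = np.array([230.0, 340.0])
sigma_flow = 25.0                           # m3/d

def weighted_ssr(obs, sim, sigma):
    """Sum of squared weighted residuals, weights = 1/sigma**2."""
    return np.sum(((obs - sim) / sigma) ** 2)

phi_heads = weighted_ssr(obs_heads, sim_heads, sigma_head)
phi_flows = weighted_ssr(obs_flows, sim_flows, sigma_flow)
phi_total = phi_heads + phi_flows   # single objective function to minimize
```

Because both contributions are dimensionless, neither data type silently dominates the regression by virtue of its units, which is the practical advantage of error-based weighting over ad hoc coefficients.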
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a
Parameter sensitivity and uncertainty in SWAT: A comparison across five USDA-ARS watersheds
Technology Transfer Automated Retrieval System (TEKTRAN)
The USDA-ARS Conservation Effects Assessment Project (CEAP) calls for improved understanding of the strengths and weaknesses of watershed-scale, water quality models under a range of climatic, soil, topographic, and land use conditions. Assessing simulation model parameter sensitivity helps establi...
Bignell, L J; Mo, L; Alexiev, D; Hashemi-Nezhad, S R
2010-01-01
Radiation transport simulations of the most probable gamma- and X-ray emissions of (123)I and (54)Mn in a three photomultiplier tube liquid scintillation detector have been carried out. A Geant4 simulation was used to acquire energy deposition spectra and interaction probabilities with the scintillant, as required for absolute activity measurement using the triple to double coincidence ratio (TDCR) method. A sensitivity and uncertainty analysis of the simulation model is presented here. The uncertainty in the Monte Carlo simulation results due to the input parameter uncertainties was found to be more significant than the statistical uncertainty component for a typical number of simulated decay events. The model was most sensitive to changes in the volume of the scintillant. Estimates of the relative uncertainty associated with the simulation outputs due to the combined stochastic and input uncertainties are provided. A Monte Carlo uncertainty analysis of an (123)I TDCR measurement indicated that accounting for the simulation uncertainties increases the uncertainty of efficiency of the logical sum of double coincidence by 5.1%. PMID:20036571
NASA Astrophysics Data System (ADS)
Khodayar-Pardo, Samiro; Lopez-Baeza, Ernesto; Coll Pajaron, M. Amparo
Sensitivity of seasonal weather prediction and extreme precipitation events to soil moisture initialization uncertainty using SMOS soil moisture products
Soil moisture is an important variable in agriculture, hydrology, meteorology and related disciplines. Despite its importance, it is complicated to obtain an appropriate representation of this variable, mainly because of its high temporal and spatial variability. SVAT (Soil-Vegetation-Atmosphere-Transfer) models can be used to simulate the temporal behaviour and spatial distribution of soil moisture in a given area, and state-of-the-art products such as the soil moisture measurements from the SMOS (Soil Moisture and Ocean Salinity) space mission may also be convenient. The potential role of soil moisture initialization and associated uncertainty in numerical weather prediction is illustrated in this study through sensitivity numerical experiments using the SVAT SURFEX model and the non-hydrostatic COSMO model. The aim of this investigation is twofold: (a) to demonstrate the sensitivity of model simulations of convective precipitation to soil moisture initial uncertainty, as well as the impact on the representation of extreme precipitation events, and (b) to assess the usefulness of SMOS soil moisture products to improve the simulation of water cycle components and heavy precipitation events. Simulated soil moisture and precipitation fields are compared with observations and with level-1 (~1 km), level-2 (~15 km) and level-3 (~35 km) soil moisture maps generated from SMOS over the Iberian Peninsula, the SMOS validation area (50 km x 50 km, eastern Spain) and selected stations, where in situ measurements are available covering different vegetation cover
NASA Astrophysics Data System (ADS)
Brown, Tristan R.
The revised Renewable Fuel Standard requires the annual blending of 16 billion gallons of cellulosic biofuel by 2022 from zero gallons in 2009. The necessary capacity investments have been underwhelming to date, however, and little is known about the likely composition of the future cellulosic biofuel industry as a result. This dissertation develops a framework for identifying and analyzing the industry's likely future composition while also providing a possible explanation for why investment in cellulosic biofuels capacity has been low to date. The results of this dissertation indicate that few cellulosic biofuel pathways will be economically competitive with petroleum on an unsubsidized basis. Of five cellulosic biofuel pathways considered under 20-year price forecasts with volatility, only two achieve positive mean 20-year net present value (NPV) probabilities. Furthermore, recent exploitation of U.S. shale gas reserves and the subsequent fall in U.S. natural gas prices have negatively impacted the economic competitiveness of all but two of the cellulosic biofuel pathways considered; only two of the five pathways achieve substantially higher 20-year NPVs under a post-shale gas economic scenario relative to a pre-shale gas scenario. The economic competitiveness of cellulosic biofuel pathways with petroleum is reduced further when considered under price uncertainty in combination with realistic financial assumptions. This dissertation calculates pathway-specific costs of capital for five cellulosic biofuel pathway scenarios. The analysis finds that the large majority of the scenarios incur costs of capital that are substantially higher than those commonly assumed in the literature. Employment of these costs of capital in a comparative TEA greatly reduces the mean 20-year NPVs for each pathway while increasing their 10-year probabilities of default to above 80% for all five scenarios. Finally, this dissertation quantifies the economic competitiveness of six
A practical method to assess model sensitivity and parameter uncertainty in C cycle models
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2015-04-01
The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed one, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary
NASA Astrophysics Data System (ADS)
Mockler, Eva M.; O'Loughlin, Fiachra E.; Bruen, Michael
2016-05-01
Increasing pressures on water quality due to intensification of agriculture have raised demands for environmental modeling to accurately simulate the movement of diffuse (nonpoint) nutrients in catchments. As hydrological flows drive the movement and attenuation of nutrients, individual hydrological processes in models should be adequately represented for water quality simulations to be meaningful. In particular, the relative contribution of groundwater and surface runoff to rivers is of interest, as increasing nitrate concentrations are linked to higher groundwater discharges. These requirements for hydrological modeling of groundwater contribution to rivers initiated this assessment of internal flow path partitioning in conceptual hydrological models. In this study, a variance-based sensitivity analysis method was used to investigate parameter sensitivities and flow partitioning of three conceptual hydrological models simulating 31 Irish catchments. We compared two established conceptual hydrological models (NAM and SMARG) and a new model (SMART), produced especially for water quality modeling. In addition to the criteria that assess streamflow simulations, a ratio of average groundwater contribution to total streamflow was calculated for all simulations over the 16 year study period. As observed time series of groundwater contributions to streamflow are not available at the catchment scale, the groundwater ratios were evaluated against average annual indices of base flow and deep groundwater flow for each catchment. The exploration of sensitivities of internal flow path partitioning was a specific focus to assist in evaluating model performance. Results highlight that model structure has a strong impact on simulated groundwater flow paths. Sensitivity to the internal pathways in the models is not reflected in the performance criteria results. This demonstrates that simulated groundwater contribution should be constrained by independent data to ensure results
Albrecht, Achim; Miquel, Stéphan
2010-01-01
Biosphere dose conversion factors are computed for the French high-level geological waste disposal concept to illustrate the combined probabilistic and deterministic approach. Both (135)Cs and (79)Se are used as examples. Probabilistic analyses of the system considering all parameters, as well as physical and societal parameters independently, allow quantification of their mutual impact on overall uncertainty. As physical parameter uncertainties decrease, for example with the availability of further experimental and field data, the societal uncertainties, which are less easily constrained, particularly for the long term, become more and more significant. One also has to distinguish uncertainties impacting the low-dose portion of a distribution from those impacting the high-dose range, the latter logically having a greater impact in an assessment situation. The use of cumulative probability curves allows us to quantify probability variations as a function of the dose estimate, with the ratio of the probability variation (slope of the curve) indicative of uncertainties for different radionuclides. In the case of (135)Cs, with better-constrained physical parameters, the uncertainty in human behaviour is more significant, even in the high dose range, where it increases the probability of higher doses. For both radionuclides, uncertainties impact more strongly in the intermediate than in the high dose range. In an assessment context, the focus will be on probabilities of higher dose values. The probabilistic approach can furthermore be used to construct critical groups based on a predefined probability level and to ensure that critical groups cover the expected range of uncertainty. PMID:19758732
Hostetler, S.; Pisias, N.; Mix, A.
2006-01-01
The faunal and floral gradients that underlie the CLIMAP (1981) sea-surface temperature (SST) reconstructions for the Last Glacial Maximum (LGM) reflect ocean temperature gradients and frontal positions. The transfer functions used to reconstruct SSTs from biologic gradients are biased, however, because at the warmest sites they display inherently low sensitivity in translating fauna to SST and they underestimate SST within the euphotic zones where the pycnocline is strong. Here we assemble available data and apply a statistical approach to adjust for hypothetical biases in the faunal-based SST estimates of LGM temperature. The largest bias adjustments are distributed in the tropics (to address low sensitivity) and subtropics (to address underestimation in the euphotic zones). The resulting SSTs are generally in better agreement than CLIMAP with recent geochemical estimates of glacial-interglacial temperature changes. We conducted a series of model experiments using the GENESIS general atmospheric circulation model to assess the sensitivity of the climate system to our bias-adjusted SSTs. Globally, the new SST field results in a modeled LGM surface-air cooling relative to present of 6.4 °C (1.9 °C cooler than that of CLIMAP). Relative to the simulation with CLIMAP SSTs, modeled precipitation over the oceans is reduced by 0.4 mm d-1 (an anomaly -0.4 versus 0.0 mm d-1 for CLIMAP) and increased over land (an anomaly -0.2 versus -0.5 mm d-1 for CLIMAP). Regionally strong responses are induced by changes in SST gradients. Data-model comparisons indicate improvement in agreement relative to CLIMAP, but differences among terrestrial data inferences and simulated moisture and temperature remain. Our SSTs result in positive mass balance over the northern hemisphere ice sheets (primarily through reduced summer ablation), supporting the hypothesis that tropical and subtropical ocean temperatures may have played a role in triggering glacial changes at higher latitudes.
J. Zhu; K. Pohlmann; J. Chapman; C. Russell; R.W.H. Carroll; D. Shafer
2009-09-10
Yucca Mountain (YM), Nevada, has been proposed by the U.S. Department of Energy as the nation’s first permanent geologic repository for spent nuclear fuel and high-level radioactive waste. In this study, the potential for groundwater advective pathways from underground nuclear testing areas on the Nevada Test Site (NTS) to intercept the subsurface of the proposed land withdrawal area for the repository is investigated. The timeframe for advective travel and its uncertainty for possible radionuclide movement along these flow pathways is estimated as a result of effective-porosity value uncertainty for the hydrogeologic units (HGUs) along the flow paths. Furthermore, sensitivity analysis is conducted to determine the most influential HGUs on the advective radionuclide travel times from the NTS to the YM area. Groundwater pathways are obtained using the particle tracking package MODPATH and flow results from the Death Valley regional groundwater flow system (DVRFS) model developed by the U.S. Geological Survey (USGS). Effective-porosity values for HGUs along these pathways are one of several parameters that determine possible radionuclide travel times between the NTS and proposed YM withdrawal areas. Values and uncertainties of HGU porosities are quantified through evaluation of existing site effective-porosity data and expert professional judgment and are incorporated in the model through Monte Carlo simulations to estimate mean travel times and uncertainties. The simulations are based on two steady-state flow scenarios, the pre-pumping (the initial stress period of the DVRFS model), and the 1998 pumping (assuming steady-state conditions resulting from pumping in the last stress period of the DVRFS model) scenarios for the purpose of long-term prediction and monitoring. The pumping scenario accounts for groundwater withdrawal activities in the Amargosa Desert and other areas downgradient of YM. Considering each detonation in a clustered region around Pahute Mesa (in
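The Monte Carlo propagation of effective-porosity uncertainty into advective travel times can be sketched for a single one-dimensional path: with Darcy velocity v = K i / n_e, the travel time is t = L n_e / (K i), so the porosity distribution maps directly onto the travel-time distribution. All parameter values below (path length, conductivity, gradient, porosity distribution) are illustrative assumptions, not values from the DVRFS model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative single-path parameters
L = 5.0e4      # path length, m
K = 1.0        # hydraulic conductivity, m/d
i = 1.0e-3     # hydraulic gradient, dimensionless

# Lognormal effective porosity with geometric mean 0.01 (assumed)
n_e = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=100_000)

# Advective travel time t = L * n_e / (K * i), converted to years
t_years = (L * n_e) / (K * i) / 365.25

print(f"mean travel time   : {t_years.mean():.0f} yr")
print(f"median travel time : {np.median(t_years):.0f} yr")
```

Because t is linear in n_e, the skew of the lognormal porosity passes straight through: the mean travel time exceeds the median, which is one reason such studies report full distributions rather than single values.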
NASA Astrophysics Data System (ADS)
DeAngelis, A. M.; Qu, X.; Hall, A. D.; Klein, S. A.
2014-12-01
The hydrological cycle is expected to undergo substantial changes in response to global warming, with all climate models predicting an increase in global-mean precipitation. There is considerable spread among models, however, in the projected increase of global-mean precipitation, even when normalized by surface temperature change. In an attempt to develop a better physical understanding of the causes of this intermodel spread, we investigate the rapid and temperature-mediated responses of global-mean precipitation to CO2 forcing in an ensemble of CMIP5 models by applying regression analysis to pre-industrial and abrupt quadrupled CO2 simulations, and focus on the atmospheric radiative terms that balance global precipitation. The intermodel spread in the temperature-mediated component, which dominates the spread in total hydrological sensitivity, is highly correlated with the spread in temperature-mediated clear-sky shortwave (SW) atmospheric heating among models. Upon further analysis of the sources of intermodel variability in SW heating, we find that increases of upper atmosphere and (to a lesser extent) total column water vapor in response to 1K surface warming only partly explain intermodel differences in the SW response. Instead, most of the spread in the SW heating term is explained by intermodel differences in the sensitivity of SW absorption to fixed changes in column water vapor. This suggests that differences in SW radiative transfer codes among models are the dominant source of variability in the response of atmospheric SW heating to warming. Better understanding of the SW heating sensitivity to water vapor in climate models appears to be critical for reducing uncertainty in the global hydrological response to future warming. Current work entails analysis of observations to potentially constrain the intermodel spread in SW sensitivity to water vapor, as well as more detailed investigation of the radiative transfer schemes in different models and how
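The regression decomposition used in such studies can be sketched with synthetic data: after an abrupt CO2 increase, the precipitation change dP is regressed against the surface temperature change dT, so the intercept estimates the rapid (temperature-independent) adjustment and the slope the temperature-mediated response. The coefficients and units below are arbitrary illustrative values, not CMIP5 results.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed "true" components of the precipitation response (illustrative)
true_rapid = -0.10   # rapid adjustment (intercept)
true_slope = 0.07    # temperature-mediated response per K (slope)

# Synthetic annual-mean anomalies from an abrupt-CO2 run
dT = np.linspace(0.5, 5.0, 150)
dP = true_rapid + true_slope * dT + 0.01 * rng.standard_normal(dT.size)

# Ordinary least squares recovers the two components
slope, intercept = np.polyfit(dT, dP, 1)
```

Applying this fit model-by-model and comparing the slopes is what isolates the temperature-mediated spread that the abstract traces to clear-sky shortwave heating.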
Peterson, Kara J.; Bochev, Pavel Blagoveston; Paskaleva, Biliana S.
2010-09-01
Arctic sea ice is an important component of the global climate system and due to feedback effects the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice to model physical parameters. A new sea ice model that has the potential to improve sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of the Los Alamos National Laboratory CICE code and the MPM sea ice code for a single year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the strength of the parameters on the solution.
NASA Astrophysics Data System (ADS)
Kong, Song-Charng; Reitz, Rolf D.
2003-06-01
This study used a numerical model to investigate the combustion process in a premixed iso-octane homogeneous charge compression ignition (HCCI) engine. The engine was a supercharged Cummins C engine operated under HCCI conditions. The CHEMKIN code was implemented into an updated KIVA-3V code so that the combustion could be modelled using detailed chemistry in the context of engine CFD simulations. The model was able to accurately simulate the ignition timing and combustion phasing for various engine conditions. The unburned hydrocarbon emissions were also well predicted, while the carbon monoxide emissions were underpredicted. Model results showed that the majority of unburned hydrocarbon is located in the piston-ring crevice region and the carbon monoxide resides in the vicinity of the cylinder walls. A sensitivity study of the computational grid resolution indicated that the combustion predictions were relatively insensitive to the grid density. However, the piston-ring crevice region needed to be simulated with high resolution to obtain accurate emissions predictions. The model results also indicated that HCCI combustion and emissions are very sensitive to the initial mixture temperature. The computations also show that the carbon monoxide emissions prediction can be significantly improved by modifying a key oxidation reaction rate constant.
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Ward, R.C.; Kocher, D.C.; Hicks, B.B.; Hosker, R.P. Jr.; Ku, J.Y.; Rao, K.S.
1985-01-01
We have studied the sensitivity of results from the CRAC2 computer code, which predicts health impacts from a reactor-accident scenario, to uncertainties in selected meteorological models and parameters. The sources of uncertainty examined include the models for plume rise and wet deposition and the meteorological bin-sampling procedure. An alternative plume-rise model usually had little effect on predicted health impacts. In an alternative wet-deposition model, the scavenging rate depends only on storm type, rather than on rainfall rate and atmospheric stability class as in the CRAC2 model. Use of the alternative wet-deposition model in meteorological bin-sampling runs decreased predicted mean early injuries by as much as a factor of 2-3 and, for large release heights and sensible heat rates, decreased mean early fatalities by nearly an order of magnitude. The bin-sampling procedure in CRAC2 was expanded by dividing each rain bin into four bins that depend on rainfall rate. Use of the modified bin structure in conjunction with the CRAC2 wet-deposition model changed all predicted health impacts by less than a factor of 2. 9 references.
NASA Astrophysics Data System (ADS)
Schneider, Sébastien; Jacques, Diederik; Mallants, Dirk
2010-05-01
Numerical models are of precious help for predicting water fluxes in the vadose zone and more specifically in Soil-Vegetation-Atmosphere (SVA) systems. For such simulations, robust models and representative soil hydraulic parameters are required. Calibration of unsaturated hydraulic properties is known to be a difficult optimization problem due to the high non-linearity of the water flow equations. Therefore, robust methods are needed to prevent the optimization process from converging to non-optimal parameters. Evolutionary algorithms, and specifically genetic algorithms (GAs), are very well suited for such complex parameter optimization problems. Additionally, GAs offer the opportunity to assess the confidence in the hydraulic parameter estimations, because of the large number of model realizations. The SVA system in this study concerns a pine stand on a heterogeneous sandy soil (podzol) in the Campine region in the north of Belgium. Throughfall and other meteorological data and water contents at different soil depths have been recorded during one year at a daily time step in two lysimeters. The water table level, which varies between 95 and 170 cm, has been recorded at 0.5-hour intervals. The leaf area index was measured as well at selected times during the year in order to evaluate the energy which reaches the soil and to deduce the potential evaporation. Water contents at several depths have been recorded. Based on the profile description, five soil layers have been distinguished in the podzol. Two models have been used for simulating water fluxes: (i) a mechanistic model, the HYDRUS-1D model, which solves the Richards' equation, and (ii) a compartmental model, which treats the soil profile as a bucket into which water flows until its maximum capacity is reached. A global sensitivity analysis (Morris' one-at-a-time sensitivity analysis) was run prior to the calibration, in order to check the sensitivity in the chosen parameter search space. For
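A simplified one-at-a-time screening in the spirit of the Morris method can be sketched as follows. The test function and its coefficients are hypothetical stand-ins for the model response (a full Morris design would use trajectories rather than independent base points), with inputs scaled to [0, 1].

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical model response with one strong, one moderate, and one
# negligible input (illustrative only)
def f(x):
    return 5.0 * x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2]

def morris_mu_star(f, d, r=50, delta=0.1):
    """Mean absolute elementary effect (mu*) per input."""
    effects = np.zeros((r, d))
    for k in range(r):
        base = rng.random(d) * (1 - delta)   # keep base + delta inside [0, 1]
        y0 = f(base)
        for i in range(d):
            x = base.copy()
            x[i] += delta                    # perturb one input at a time
            effects[k, i] = (f(x) - y0) / delta
    return np.abs(effects).mean(axis=0)

mu_star = morris_mu_star(f, d=3)
# Large mu* flags influential inputs worth calibrating; tiny mu* flags
# inputs that can be fixed before running the expensive GA calibration.
```

This is exactly the role the screening plays above: it checks that the chosen parameter search space actually exercises the model before the costly calibration starts.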
NASA Astrophysics Data System (ADS)
Challinor, A. J.
2010-12-01
Recent progress in assessing the impacts of climate variability and change on crops using multiple regional-scale simulations of crop and climate (i.e. ensembles) is presented. Simulations for India and China used perturbed responses to elevated carbon dioxide constrained using observations from FACE studies and controlled environments. Simulations with crop parameter sets representing existing and potential future adapted varieties were also carried out. The results for India are compared to sensitivity tests on two other crop models. For China, a parallel approach used socio-economic data to account for autonomous farmer adaptation. Results for the USA analysed cardinal temperatures under a range of local warming scenarios for 2711 varieties of spring wheat. The results are as follows: 1. Quantifying and reducing uncertainty. The relative contribution of uncertainty in crop and climate simulation to the total uncertainty in projected yield changes is examined. The observational constraints from FACE and controlled environment studies are shown to be the likely critical factor in maintaining relatively low crop parameter uncertainty. Without these constraints, crop simulation uncertainty in a doubled CO2 environment would likely be greater than uncertainty in simulating climate. However, consensus across crop models in India varied across different biophysical processes. 2. The response of yield to changes in local mean temperature was examined and compared to that found in the literature. No consistent response to temperature change was found across studies. 3. Implications for adaptation. China. The simulations of spring wheat in China show the relative importance of tolerance to water and heat stress in avoiding future crop failures. The greatest potential for reducing the number of harvests less than one standard deviation below the baseline mean yield value comes from alleviating water stress; the greatest potential for reducing harvests less than two
ERIC Educational Resources Information Center
Carr, Richard; McLean, Doug
1995-01-01
Discusses how educational-facility maintenance departments can cut costs in floor cleaning through careful evaluation of floor equipment and products. Tips for choosing carpet detergents are highlighted. (GR)
Tanner, Jean Paul; Salemi, Jason L; Stuart, Amy L; Yu, Haofei; Jordan, Melissa M; DuClos, Chris; Cavicchia, Philip; Correia, Jane A; Watkins, Sharon M; Kirby, Russell S
2016-05-01
We investigate uncertainty in estimates of pregnant women's exposure to ambient PM2.5 and benzene derived from central-site monitoring data. Through a study of live births in Florida during 2000-2009, we discuss the selection of spatial and temporal scales of analysis, limiting distances, and aggregation method. We estimate exposure concentrations and classify exposure for a range of alternatives, and compare impacts. Estimated exposure concentrations were most sensitive to the temporal scale of analysis for PM2.5, with similar sensitivity to spatial scale for benzene. Using 1-12 versus 3-8 weeks of gestational age as the exposure window resulted in reclassification of exposure by at least one quartile for up to 37% of mothers for PM2.5 and 27% for benzene. The largest mean absolute differences in concentration resulting from any decision were 0.78 µg/m³ and 0.44 ppbC, respectively. No bias toward systematically higher or lower estimates was found between choices for any decision. PMID:27246278
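The quartile-reclassification effect described in the record above can be sketched numerically. This is an illustrative toy example, not the study's data or method: the gamma-distributed weekly concentrations, the sample size, and the window definitions are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
# Hypothetical weekly PM2.5 concentrations (µg/m³) for n pregnancies
weekly = rng.gamma(shape=8.0, scale=1.2, size=(n, 12))
win_full = weekly.mean(axis=1)          # exposure window: weeks 1-12
win_mid = weekly[:, 2:8].mean(axis=1)   # exposure window: weeks 3-8

def quartile(x):
    """Assign each value to a quartile (0-3) of its own distribution."""
    return np.searchsorted(np.quantile(x, [0.25, 0.5, 0.75]), x, side="right")

# Fraction of subjects whose exposure quartile changes with the window choice
reclassified = np.mean(quartile(win_full) != quartile(win_mid))
```

With independent weekly values, a sizeable fraction of subjects shifts quartile when the averaging window changes, mirroring the kind of reclassification rates reported above.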
Davidson, J.W.; Dudziak, D.J.; Higgs, C.E.; Stepanek, J.
1988-01-01
AARE, a code package to perform Advanced Analysis for Reactor Engineering, is a linked modular system for fission reactor core and shielding, as well as fusion blanket, analysis. Its cross-section sensitivity and uncertainty path presently includes the cross-section processing and reformatting code TRAMIX, the cross-section homogenization and library reformatting code MIXIT, the 1-dimensional transport code ONEDANT, the 2-dimensional transport code TRISM, and the 1- and 2-dimensional cross-section sensitivity and uncertainty code SENSIBL. In the present work, a short description of the whole AARE system is given, followed by a detailed description of the cross-section sensitivity and uncertainty path. 23 refs., 2 figs.
FIRST FLOOR FRONT ROOM. SECOND FLOOR HAS BEEN REMOVED NOTE ...
FIRST FLOOR FRONT ROOM. SECOND FLOOR HAS BEEN REMOVED-- NOTE PRESENCE OF SECOND FLOOR WINDOWS (THE LATTER FLOOR WAS REMOVED MANY YEARS AGO), See also PA-1436 B-12 - Kid-Physick House, 325 Walnut Street, Philadelphia, Philadelphia County, PA
Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.
2011-12-01
The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.
NASA Astrophysics Data System (ADS)
Wang, H.; Rasch, P. J.; Easter, R. C.; Singh, B.; Qian, Y.; Ma, P.; Zhang, R.
2013-12-01
, export to emission ratio) of CA emitted from a number of predefined source regions/sectors, establish quantitative aerosol source-receptor relationships, and characterize source-to-receptor transport pathways. We can quantify the sensitivity of atmospheric CA concentrations and surface deposition in receptor regions of interest (including but not limited to the Arctic) to uncertainties in emissions from particular sources without actually perturbing the emissions, which is required by some other strategies for determining source-receptor relationships. Our study shows that Arctic BC is much more sensitive to high-latitude local emissions than to mid-latitude major source contributors. For example, the same amount of BC emitted from East Asia, which contributes about 20% of the annual mean BC loading in the Arctic, is 40 times less efficient at increasing Arctic BC than BC emitted from local sources. This indicates that local BC sources (e.g., fires, metal smelting and gas flaring), which are highly uncertain or even missing from popular emission inventories, may at least partly explain the historical under-prediction of Arctic BC in many climate models. The established source-receptor relationships will be used to assess potential climate impacts of the emission uncertainties.
Bartine, D.E.; Cacuci, D.G.
1983-09-13
This paper describes sources of uncertainty in the data used for calculating dose estimates for the Hiroshima explosion and details a methodology for systematically obtaining best estimates and reduced uncertainties for the radiation doses received. (ACR)
LeBouthillier, Daniel M; Asmundson, Gordon J G
2015-01-01
Several mechanisms have been posited for the anxiolytic effects of exercise, including reductions in anxiety sensitivity through interoceptive exposure. Studies on aerobic exercise lend support to this hypothesis; however, research investigating aerobic exercise in comparison to placebo, the dose-response relationship between aerobic exercise and anxiety sensitivity, the efficacy of aerobic exercise across the spectrum of anxiety sensitivity, and the effect of aerobic exercise on other related constructs (e.g. intolerance of uncertainty, distress tolerance) is lacking. We explored reductions in anxiety sensitivity and related constructs following a single session of exercise in a community sample using a randomized controlled trial design. Forty-one participants completed 30 min of aerobic exercise or a placebo stretching control. Anxiety sensitivity, intolerance of uncertainty and distress tolerance were measured at baseline, post-intervention and 3-day and 7-day follow-ups. Individuals in the aerobic exercise group, but not the control group, experienced significant reductions with moderate effect sizes in all dimensions of anxiety sensitivity. Intolerance of uncertainty and distress tolerance remained unchanged in both groups. Our trial supports the efficacy of aerobic exercise in uniquely reducing anxiety sensitivity in individuals with varying levels of the trait and highlights the importance of empirically validating the use of aerobic exercise to address specific mental health vulnerabilities. Aerobic exercise may have potential as a temporary substitute for psychotherapy aimed at reducing anxiety-related psychopathology. PMID:25874370
Curtis, Janelle M.R.
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along
Helton, J.C.; Bean, J.E.; Butcher, B.M.; Garner, J.W.; Vaughn, P.; Schreiber, J.D.; Swift, P.N.
1993-08-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis, stepwise regression analysis and examination of scatterplots are used in conjunction with the BRAGFLO model to examine two-phase flow (i.e., gas and brine) at the Waste Isolation Pilot Plant (WIPP), which is being developed by the US Department of Energy as a disposal facility for transuranic waste. The analyses consider either a single waste panel or the entire repository in conjunction with the following cases: (1) fully consolidated shaft, (2) system of shaft seals with panel seals, and (3) single shaft seal without panel seals. The purpose of this analysis is to develop insights on factors that are potentially important in showing compliance with applicable regulations of the US Environmental Protection Agency (i.e., 40 CFR 191, Subpart B; 40 CFR 268). The primary topics investigated are (1) gas production due to corrosion of steel, (2) gas production due to microbial degradation of cellulosics, (3) gas migration into anhydrite marker beds in the Salado Formation, (4) gas migration through a system of shaft seals to overlying strata, and (5) gas migration through a single shaft seal to overlying strata. Important variables identified in the analyses include initial brine saturation of the waste, stoichiometric terms for corrosion of steel and microbial degradation of cellulosics, gas barrier pressure in the anhydrite marker beds, shaft seal permeability, and panel seal permeability.
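The Latin hypercube sampling plus rank-correlation screening used in the WIPP analyses can be sketched on a toy problem. The three-parameter response below is a hypothetical stand-in for BRAGFLO, not the actual code.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

# Hypothetical stand-in for BRAGFLO: output depends strongly on x0
# (say, a corrosion rate) and weakly on x2 (say, a seal permeability)
def toy_model(X):
    return 3.0 * X[:, 0] + 0.2 * X[:, 2]

sampler = qmc.LatinHypercube(d=3, seed=42)
X = sampler.random(n=200)               # 200 LHS samples on the unit cube
y = toy_model(X)

# Rank-based screening in the spirit of partial correlation / stepwise regression
rhos = [abs(spearmanr(X[:, j], y)[0]) for j in range(3)]
ranking = np.argsort(rhos)[::-1]        # most influential parameter first
```

The ranking recovers x0 as dominant; in the full analysis the same idea is applied to sampled BRAGFLO inputs and computed gas/brine outputs.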
Rearden, B.T.; Anderson, W.J.; Harms, G.A.
2005-08-15
Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO₂ with ²³⁵U enrichments ≥5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given restraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.
Kiedrowski, Brian C.
2012-06-19
Within the last decade, there has been increasing interest in the calculation of cross section sensitivity coefficients of k_eff for integral experiment design and uncertainty analysis. The OECD/NEA has an Expert Group devoted to Sensitivity and Uncertainty Analysis within the Working Party for Nuclear Criticality Safety. This expert group has developed benchmarks to assess code capabilities and performance for doing sensitivity and uncertainty analysis. Phase III of a set of sensitivity benchmarks evaluates capabilities for computing sensitivity coefficients. MCNP6 has the capability to compute cross section sensitivities for k_eff using continuous-energy physics. To help verify this capability, results for the Phase III benchmark cases are generated and submitted to the Expert Group for comparison. The Phase III benchmark has three cases: III.1, an array of MOX fuel pins; III.2, a series of infinite lattices of MOX fuel pins with varying pitches; and III.3, two spheres with homogeneous mixtures of UF₄ and polyethylene with different enrichments.
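A k_eff sensitivity coefficient is the relative derivative S_x = (x/k)(dk/dx). As a minimal illustration (a one-group textbook formula, not MCNP6's continuous-energy adjoint-based method), a finite-difference estimate recovers the expected +1/-1 coefficients for k_inf = nu·Sigma_f / Sigma_a:

```python
# One-group infinite-medium multiplication factor (textbook simplification)
def k_inf(nu_sigma_f, sigma_a):
    return nu_sigma_f / sigma_a

def sensitivity(f, x0, i, rel_step=1e-6):
    """Relative sensitivity S = (x/k) dk/dx via a forward finite difference."""
    x = list(x0)
    x[i] *= 1.0 + rel_step
    return (f(*x) - f(*x0)) / (f(*x0) * rel_step)

params = [0.04, 0.05]                      # illustrative cross sections, 1/cm
s_fission = sensitivity(k_inf, params, 0)  # close to +1
s_capture = sensitivity(k_inf, params, 1)  # close to -1
```

Benchmark suites like Phase III check the analogous coefficients for realistic geometries, where k must come from a transport calculation rather than a closed-form ratio.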
A probabilistic model (SHEDS-Wood) was developed to examine children's exposure and dose to chromated copper arsenate (CCA)-treated wood, as described in Part 1 of this two part paper. This Part 2 paper discusses sensitivity and uncertainty analyses conducted to assess the key m...
NASA Astrophysics Data System (ADS)
Le Cozannet, Gonéri; Oliveros, Carlos; Castelle, Bruno; Garcin, Manuel; Idier, Déborah; Pedreros, Rodrigo; Rohmer, Jeremy
2016-04-01
Future sandy shoreline changes are often assessed by summing the contributions of longshore and cross-shore effects. In such approaches, a contribution of sea-level rise can be incorporated by adding a supplementary term based on the Bruun rule. Here, our objective is to identify where and when the use of the Bruun rule can be (in)validated, in the case of wave-exposed beaches with gentle slopes. We first provide shoreline change scenarios that account for all uncertain hydrosedimentary processes affecting the idealized low- and high-energy coasts described by Stive (2004) [Stive, M. J. F., 2004. How important is global warming for coastal erosion? An editorial comment. Climatic Change, vol. 64, no. 1-2, doi:10.1023/B:CLIM.0000024785.91858. ISSN 0165-0009]. Then, we generate shoreline change scenarios based on probabilistic sea-level rise projections from the IPCC. For scenarios RCP 6.0 and 8.5 and in the absence of coastal defenses, the model predicts an observable shift toward generalized beach erosion by the middle of the 21st century. On the contrary, the model predictions are unlikely to differ from the current situation in the case of scenario RCP 2.6. To get insight into the relative importance of each source of uncertainty, we quantify each contribution to the variance of the model outcome using a global sensitivity analysis. This analysis shows that by the end of the 21st century, a large part of shoreline change uncertainty is due to the climate change scenario if all anthropogenic greenhouse-gas emission scenarios are considered equiprobable. To conclude, the analysis shows that under the assumptions above, (in)validating the Bruun rule should be straightforward during the second half of the 21st century for the RCP 8.5 scenario. Conversely, for RCP 2.6, the noise in shoreline change evolution should continue to dominate the signal due to the Bruun effect. This last conclusion can be interpreted as an important potential benefit of climate change mitigation.
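The Bruun rule invoked in the record above translates a sea-level rise S into a shoreline retreat R via the active-profile geometry, R = S·L/(B + h). A minimal sketch with illustrative profile values (the numbers are hypothetical, not from the study):

```python
def bruun_retreat(slr_m, profile_length_m, berm_height_m, closure_depth_m):
    """Bruun rule: retreat R = S * L / (B + h), i.e. the sea-level rise
    divided by the mean slope of the active beach profile."""
    return slr_m * profile_length_m / (berm_height_m + closure_depth_m)

# Gently sloping, wave-exposed beach (illustrative numbers)
r = bruun_retreat(slr_m=0.5, profile_length_m=1000.0,
                  berm_height_m=2.0, closure_depth_m=8.0)  # 50 m of retreat
```

The gentler the mean slope (larger L for a given B + h), the larger the predicted retreat, which is why the rule's validity matters most for the low-gradient beaches considered above.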
Oita, M; Uto, Y; Hori, H; Tominaga, M; Sasaki, M
2014-06-01
Purpose: The aim of this study was to evaluate the distribution of uncertainty in cell survival after irradiation, and to assess the usefulness of a stochastic biological model assuming a Gaussian distribution. Methods: For single-cell experiments, exponentially growing cells were harvested from standard cell culture dishes by trypsinization and suspended in test tubes containing 1 ml of MEM (2×10⁶ cells/ml). The hypoxic cultures were treated with 95% N₂/5% CO₂ gas for 30 minutes. In vitro radiosensitization was also measured in EMT6/KU single cells by adding a radiosensitizer under hypoxic conditions. X-ray irradiation was carried out using an X-ray unit (Hitachi model MBR-1505R3) with a 0.5 mm Al/1.0 mm Cu filter (150 kV, 4 Gy/min). In the in vitro assay, cells on the dish were irradiated with 1 Gy to 24 Gy, respectively. After irradiation, colony formation assays were performed. Variations of biological parameters were investigated for standard cell culture (n=16), hypoxic cell culture (n=45) and hypoxic cell culture with radiosensitizers (n=21), respectively. The data were obtained on separate schedules to account for the variation of radiation sensitivity with cell cycle. Results: For standard cell culture, hypoxic cell culture and hypoxic cell culture with radiosensitizers, the median and standard deviation of the alpha/beta ratio were 37.1±73.4 Gy, 9.8±23.7 Gy and 20.7±21.9 Gy, respectively; the average and standard deviation of D₅₀ were 2.5±2.5 Gy, 6.1±2.2 Gy and 3.6±1.3 Gy, respectively. Conclusion: In this study, we have attempted to apply these parameter uncertainties to the biological model. The variation in alpha values, beta values and D₅₀, as well as in cell culture, might strongly affect the probability of cell death. Further research is in progress for precise prediction of cell death as well as tumor control probability for treatment planning.
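The alpha/beta ratios and D₅₀ values in the record above come from the linear-quadratic survival model, ln S(D) = -(alpha·D + beta·D²). A sketch of how the parameters can be recovered from survival fractions by linear least squares (the alpha and beta values are hypothetical, not the EMT6/KU data):

```python
import numpy as np

def survival(dose, alpha, beta):
    """Linear-quadratic model: S(D) = exp(-(alpha*D + beta*D^2))."""
    return np.exp(-(alpha * dose + beta * dose ** 2))

doses = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 16.0, 24.0])  # Gy
alpha, beta = 0.15, 0.02                  # hypothetical parameters
S = survival(doses, alpha, beta)

# Fit -ln S = alpha*D + beta*D^2 by linear least squares in (D, D^2)
X = np.column_stack([doses, doses ** 2])
coef, *_ = np.linalg.lstsq(X, -np.log(S), rcond=None)
ab_ratio = coef[0] / coef[1]              # alpha/beta ratio in Gy
```

On noisy colony-count data the same fit yields a spread of alpha, beta and alpha/beta values across replicates, which is exactly the parameter uncertainty the study characterizes.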
ERIC Educational Resources Information Center
Post Office Dept., Washington, DC.
Guidelines, methods and policies regarding the care and maintenance of post office building floors are overviewed in this handbook. Procedures outlined are concerned with maintaining a required level of appearance without wasting manpower. Flooring types and characteristics and the particular cleaning requirements of each type are given along with…
ERIC Educational Resources Information Center
McGrath, John
2012-01-01
With all of the hype that green building is receiving throughout the school facility-management industry, it's easy to overlook some elements that may not be right in front of a building manager's nose. It is helpful to examine the role floor covering plays in a green building project. Flooring is one of the most significant and important systems…
Maximizing Hard Floor Maintenance.
ERIC Educational Resources Information Center
Steger, Michael
2000-01-01
Explains the maintenance options available for hardwood flooring that can help ensure long life cycles and provide inviting spaces. Developing a maintenance system, knowing the type of traffic that the floor must endure, using entrance matting, and adhering to manufacturers guidelines are discussed. Daily, monthly or quarterly, and long-term…
FIRST FLOOR REAR ROOM. SECOND FLOOR HAS BEEN REMOVED NOTE ...
FIRST FLOOR REAR ROOM. SECOND FLOOR HAS BEEN REMOVED-- NOTE PRESENCE OF SECOND FLOOR WINDOWS AT LEFT. See also PA-1436 B-6 - Kid-Physick House, 325 Walnut Street, Philadelphia, Philadelphia County, PA
FIRST FLOOR REAR ROOM. SECOND FLOOR HAS BEEN REMOVED NOTE ...
FIRST FLOOR REAR ROOM. SECOND FLOOR HAS BEEN REMOVED-- NOTE PRESENCE OF SECOND FLOOR WINDOWS AT LEFT. See also PA-1436 B-13 - Kid-Physick House, 325 Walnut Street, Philadelphia, Philadelphia County, PA
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.; Hunke, E. C.
2015-12-01
Sea ice and climate models are key to understanding and predicting ongoing changes in the Arctic climate system, particularly sharp reductions in sea ice area and volume. There are, however, uncertainties arising from multiple sources, including parametric uncertainty, which affect model output. The Los Alamos Sea Ice Model (CICE) includes complex parameterizations of sea ice processes with a large number of parameters for which accurate values are still not well established. To enhance the credibility of sea ice predictions, it is necessary to understand the sensitivity of model results to uncertainties in input parameters. In this work we conduct a variance-based global sensitivity analysis of sea ice extent, area, and volume. This approach allows full exploration of our 40-dimensional parametric space, and the model sensitivity is quantified in terms of main and total effects indices. The global sensitivity analysis does not require assumptions of additivity or linearity, which are implicit in the most commonly used one-at-a-time sensitivity analyses. A Gaussian process emulator of the sea ice model is built and then used to generate the large number of samples necessary to calculate the sensitivity indices, at a much lower computational cost than using the full model. The sensitivity indices are used to rank the most important model parameters affecting Arctic sea ice extent, area, and volume. The most important parameters contributing to the model variance include snow conductivity and grain size, and the time-scale for drainage of melt ponds. Other important parameters include the thickness of the ice radiative scattering layer, ice density, and the ice-ocean drag coefficient. We discuss physical processes that explain variations in simulated sea ice variables in terms of the first-order parameter effects and the most important interactions among them.
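The variance-based main-effect (first-order Sobol) indices described above can be estimated with a Saltelli-style pick-freeze scheme. In the sketch below a cheap analytic function stands in for the Gaussian process emulator of CICE; the response surface and its dimensions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 100_000

def model(x):
    # Hypothetical emulator response: x0 dominant, x1 moderate, x2 weak
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

A = rng.random((n, d))       # two independent sample matrices
B = rng.random((n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S1 = []
for j in range(d):
    AB = A.copy()
    AB[:, j] = B[:, j]       # "pick-freeze": swap in column j from B
    # Saltelli (2010) estimator for the first-order index S_j
    S1.append(np.mean(yB * (model(AB) - yA)) / var_y)
```

Because each index costs n extra model runs per parameter, an emulator (as in the record above) is what makes this affordable for a 40-parameter sea ice model.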
NASA Astrophysics Data System (ADS)
Román, Roberto; Bilbao, Julia; de Miguel, Argimiro; Pérez-Burgos, Ana
2014-05-01
Radiative transfer models can be used to obtain solar radiative quantities at the Earth's surface, such as the erythemal ultraviolet (UVER) irradiance, which is the spectral irradiance weighted with the erythemal (sunburn) action spectrum, and the total shortwave irradiance (SW; 305-2,800 nm). Aerosol and atmospheric properties are necessary as model inputs in order to calculate the UVER and SW irradiances under cloudless conditions; however, the uncertainty in these inputs causes a corresponding uncertainty in the simulations. The objective of this work is to quantify the uncertainty in UVER and SW simulations generated by the aerosol optical depth (AOD) uncertainty. Data from different satellite retrievals were downloaded for nine Spanish sites located in the Iberian Peninsula: total ozone column from different databases, spectral surface albedo and water vapour column from the MODIS instrument, AOD at 443 nm and Ångström exponent (between 443 nm and 670 nm) from the MISR instrument onboard the Terra satellite, and single scattering albedo from the OMI instrument onboard the Aura satellite. The AOD at 443 nm data obtained from MISR were compared with AERONET measurements at six Spanish sites, yielding an uncertainty in the MISR AOD of 0.074. In this work the radiative transfer model UVSPEC/Libradtran (version 1.7) was used to obtain the SW and UVER irradiance under cloudless conditions for each month and for different solar zenith angles (SZA) at the nine locations mentioned. The inputs used for these simulations were monthly climatology tables obtained from the data available at each location. Once the UVER and SW simulations were obtained, they were repeated twice, changing the monthly AOD values to the same AOD plus/minus its uncertainty. The maximum difference between the irradiance run with the AOD and the irradiance run with the AOD plus/minus its uncertainty was calculated for each month, SZA, and location. This difference was considered as the uncertainty in the model caused by the AOD
Hartini, Entin; Andiwijayakusuma, Dinan
2014-09-30
This research was carried out to develop a code for uncertainty analysis based on a statistical approach to assessing uncertain input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed over the irradiation period using the Monte Carlo N-Particle Transport code (MCNPX). The uncertainty method is based on probability density functions. The code is developed as a Python script coupled with MCNPX for criticality and burn-up calculations. The simulation models the geometry of a PWR core at a power of 54 MW with UO₂ pellet fuel. The calculation uses the continuous-energy cross-section library ENDF/B-VI. MCNPX requires nuclear data in ACE format; interfaces were developed to obtain nuclear data in ACE format from ENDF through dedicated NJOY calculations for temperature changes over a certain range.
Not Available
1993-08-01
Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to the EPA's Environmental Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Additional information about the 1992 PA is provided in other volumes. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions, the choice of parameters selected for sampling, and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect compliance with 40 CFR 191B are: drilling intensity, intrusion borehole permeability, halite and anhydrite permeabilities, radionuclide solubilities and distribution coefficients, fracture spacing in the Culebra Dolomite Member of the Rustler Formation, porosity of the Culebra, and spatial variability of Culebra transmissivity. Performance with respect to 40 CFR 191B is insensitive to uncertainty in other parameters; however, additional data are needed to confirm that reality lies within the assigned distributions.
Pelvic floor muscle training exercises
Pelvic floor muscle training exercises are a series of exercises designed to strengthen the muscles of the pelvic floor. ... Pelvic floor muscle training exercises are recommended for: Women ... Men with urinary stress incontinence after prostate surgery ...
Kaplan, P.G.
1993-01-01
Yucca Mountain, Nevada, is a potential site for a high-level radioactive-waste repository. Uncertainty and sensitivity analyses were performed to estimate critical factors in the performance of the site with respect to a criterion expressed in terms of pre-waste-emplacement ground-water travel time. The degree to which the analytical model fails to meet the criterion is sensitive to the estimate of fracture porosity in the upper welded unit of the problem domain. Fracture porosity is derived from a number of more fundamental measurements including fracture frequency, fracture orientation, and the moisture-retention characteristic inferred for the fracture domain.
5. STAIR FROM SECOND FLOOR TO THIRD FLOOR, FROM NORTHWEST. ...
5. STAIR FROM SECOND FLOOR TO THIRD FLOOR, FROM NORTHWEST. Note extreme thin construction of support and outline of well ellipse on floor. Stair to first floor has been removed - Saltus-Habersham House, 802 Bay Street, Beaufort, Beaufort County, SC
Radomyski, Artur; Giubilato, Elisa; Ciffroy, Philippe; Critto, Andrea; Brochot, Céline; Marcomini, Antonio
2016-11-01
The study is focused on applying uncertainty and sensitivity analysis to support the application and evaluation of large exposure models in which a significant number of parameters and complex exposure scenarios might be involved. The recently developed MERLIN-Expo exposure modelling tool was applied to probabilistically assess the ecological and human exposure to PCB 126 and 2,3,7,8-TCDD in the Venice lagoon (Italy). The 'Phytoplankton', 'Aquatic Invertebrate', 'Fish', 'Human intake' and PBPK models available in the MERLIN-Expo library were integrated to create a specific food web to dynamically simulate bioaccumulation in various aquatic species and in the human body over individual lifetimes from 1932 until 1998. MERLIN-Expo is a high-tier exposure modelling tool allowing propagation of uncertainty in the model predictions through Monte Carlo simulation. Uncertainty in model output can be further apportioned between parameters by applying built-in sensitivity analysis tools. In this study, uncertainty was extensively addressed through the distribution functions describing the input data, and its effect on model results was assessed by applying sensitivity analysis techniques (the Morris screening method, regression analysis, and the variance-based EFAST method). In the exposure scenario developed for the Lagoon of Venice, the concentrations of 2,3,7,8-TCDD and PCB 126 in human blood turned out to be mainly influenced by a combination of parameters (half-lives of the chemicals, body weight variability, lipid fraction, food assimilation efficiency), physiological processes (uptake/elimination rates), environmental exposure concentrations (sediment, water, food) and eating behaviours (amount of food eaten). In conclusion, this case study demonstrated the feasibility of successfully employing MERLIN-Expo in integrated, high-tier exposure assessment. PMID:27432731
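The Monte Carlo propagation step that MERLIN-Expo performs can be sketched with a one-compartment steady-state body burden. All distributions, parameter values and units below are invented for illustration; they are not the tool's defaults or the Venice case-study values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Hypothetical input distributions for the uncertain parameters
half_life_y = rng.lognormal(mean=np.log(7.0), sigma=0.3, size=n)  # years
intake = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n)       # toy units/day
body_wt = np.clip(rng.normal(70.0, 12.0, size=n), 40.0, 120.0)    # kg

k_elim = np.log(2.0) / (half_life_y * 365.0)   # first-order elimination, 1/day
burden = intake / (k_elim * body_wt)           # steady-state burden per kg body weight

lo, hi = np.percentile(burden, [2.5, 97.5])    # 95% uncertainty interval
```

Apportioning the resulting output variance among half-life, intake and body weight is then the job of the sensitivity techniques named in the record (Morris screening, regression, EFAST).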
Not Available
1993-08-01
Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.
Wu, Y.; Liu, S.
2012-01-01
Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), which is a physically-based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration, and sensitivity and uncertainty analysis capabilities through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT in parameter optimization. Results indicate that both methods performed well and similarly in searching for a set of optimal parameters. Nonetheless, R-SWAT-FME is more attractive due to its instant visualization and its potential to take advantage of other R packages (e.g., inverse modeling and statistical graphics). The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty analysis.
Uber, J.G.; Kao, J.J.; Brill, E.D.; Pfeffer, J.T.
1988-01-01
One important problem with using mathematical models is that parameter values, and thus the model results, are often uncertain. A general approach, Sensitivity Constrained Nonlinear Programming (SCNLP), was developed for extending nonlinear optimization models to include functions that depend on the system sensitivity to changes in parameter values. Such sensitivity-based functions include first-order measures of variance, reliability, and robustness. Thus SCNLP can be used to generate solutions or designs that are good with respect to modeled objectives, and that also reflect concerns about uncertainty in parameter values. A solution procedure and an implementation based on an existing nonlinear-programming code are presented. SCNLP was applied to a complex activated sludge waste-water treatment plant design problem.
Forest floor vegetation response to nitrogen deposition in Europe.
Dirnböck, Thomas; Grandin, Ulf; Bernhardt-Römermann, Markus; Beudert, Burkhardt; Canullo, Roberto; Forsius, Martin; Grabner, Maria-Theresia; Holmberg, Maria; Kleemola, Sirpa; Lundin, Lars; Mirtl, Michael; Neumann, Markus; Pompei, Enrico; Salemaa, Maija; Starlinger, Franz; Staszewski, Tomasz; Uziębło, Aldona Katarzyna
2014-02-01
Chronic nitrogen (N) deposition is a threat to biodiversity that results from the eutrophication of ecosystems. We studied long-term monitoring data from 28 forest sites with a total of 1,335 permanent forest floor vegetation plots from northern Fennoscandia to southern Italy to analyse temporal trends in vascular plant species cover and diversity. We found that the cover of plant species which prefer nutrient-poor soils (oligotrophic species) decreased the more the measured N deposition exceeded the empirical critical load (CL) for eutrophication effects (P = 0.002). Although species preferring nutrient-rich sites (eutrophic species) did not experience a significant increase in cover (P = 0.440), in comparison to oligotrophic species they had a marginally higher proportion among newly occurring species (P = 0.091). The observed gradual replacement of oligotrophic species by eutrophic species as a response to N deposition seems to be a general pattern, as it was consistent on the European scale. Contrary to species cover changes, neither the decrease in species richness nor that in homogeneity correlated with nitrogen CL exceedance (ExCLemp N). We assume that the lack of diversity changes resulted from the restricted time period of our observations. Although existing habitat-specific empirical CL still hold some uncertainty, we exemplify that they are useful indicators for the sensitivity of forest floor vegetation to N deposition. PMID:24132996
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Guinta, Anthony A.; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Dieye, A.M.; Roy, D.P.; Hanan, N.P.; Liu, S.; Hansen, M.; Toure, A.
2011-01-01
Spatially explicit land cover land use (LCLU) change information is needed to drive biogeochemical models that simulate soil organic carbon (SOC) dynamics. Such information is increasingly being mapped using remotely sensed satellite data with classification schemes and uncertainties constrained by the sensing system, classification algorithms and land cover schemes. In this study, automated LCLU classification of multi-temporal Landsat satellite data were used to assess the sensitivity of SOC modeled by the Global Ensemble Biogeochemical Modeling System (GEMS). The GEMS was run for an area of 1560 km2 in Senegal under three climate change scenarios with LCLU maps generated using different Landsat classification approaches. This research provides a method to estimate the variability of SOC, specifically the SOC uncertainty due to satellite classification errors, which we show is dependent not only on the LCLU classification errors but also on where the LCLU classes occur relative to the other GEMS model inputs. ?? 2011 Author(s).
HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.; GARNER,J.W.; MACKINNON,ROBERT J.; MILLER,JOEL D.; SCHREIBER,JAMES D.; VAUGHN,PALMER
2000-05-19
Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment for the Waste Isolation Pilot Plant are presented for two-phase flow in the vicinity of the repository under undisturbed conditions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformation are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure is potentially the most important due to its influence on spallings and direct brine releases, with the uncertainty in its value being dominated by the extent to which the microbial degradation of cellulose takes place, the rate at which the corrosion of steel takes place, and the amount of brine that drains from the surrounding disturbed rock zone into the repository.
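The sampling-plus-rank-correlation workflow described above can be sketched in miniature (a toy stand-in model, not the WIPP PA codes; all coefficients are illustrative): draw a Latin hypercube sample over the uncertain inputs, run the model, and rank inputs by Spearman correlation with the output.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

# Latin hypercube sample over three uncertain inputs (unit ranges here;
# real PA inputs would be rescaled to their assigned distributions).
sampler = qmc.LatinHypercube(d=3, seed=1)
X = sampler.random(500)                       # 500 sample elements, 3 inputs

# Stand-in model: "pressure" rises sharply with input 0, mildly with
# input 1, and is almost insensitive to input 2.
y = np.exp(2.0 * X[:, 0]) + 1.5 * X[:, 1] + 0.01 * X[:, 2]

# Spearman rank correlations play the role of the rank-transformed
# regression and partial-correlation measures used in the analysis.
rho = [spearmanr(X[:, j], y)[0] for j in range(3)]
```

Rank transformation makes the importance measure robust to the monotone nonlinearity in input 0, which is why the PA literature favors it over raw correlation for strongly skewed outputs.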
NASA Astrophysics Data System (ADS)
Pepin, G.
2009-04-01
quantify the dispersion of results (time, maximum...); this part deals with uncertainty analysis; and (ii) to identify the relevant models and input data whose uncertainty governs the uncertainty of the results; this part deals with sensitivity analysis. This paper first describes Andra's methodology and the numerical tool used. It then presents results of a Monte-Carlo probabilistic multi-parametric study on HLW (vitrified waste) disposal, carried out in order to study the propagation of uncertainties in input data (Callovo-Oxfordian, EDZ (Excavated Damaged Zone), and Engineering components) along the various radionuclide pathways within the disposal. The methodology consists of (i) setting up probability distribution functions (pdfs) according to the level of knowledge, (ii) sampling all pdfs with Latin Hypercube Sampling methods, (iii) ensuring physical coherence in sets of input data, using correlations and constraints, and (iv) using an integrated computing tool (the Alliances platform) to perform the calculations. Results focus on: - uncertainty analysis: the multi-parametric study shows (i) that transfer through undisturbed argillites remains the main pathway, and (ii) a large dispersion (several orders of magnitude) of the molar rate at the top of the clay layer for the two pathways (undisturbed argillites, and repository structures), which includes the reference point of altered scenarios, such as the seal-failure scenario, and which is close to the worst case. - sensitivity analysis: for the undisturbed-argillites pathway, calculations highlight that uncertainty in some input data, such as adsorption of iodine, the solubility limit of selenium, and the diffusion and vertical permeability of undisturbed argillites, governs the dispersion of the results. For the repository-structures pathway, uncertainties in hydraulic properties, such as the permeabilities of the EDZ, are relevant. This study is important for identifying the parameters whose knowledge has to be improved in order to reduce the dispersion (uncertainty) of each performance assessment indicator.
Lessons learnt lead
NASA Astrophysics Data System (ADS)
Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.
2005-01-01
Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges as well as model complexities on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by employing Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities have significant impacts on frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.
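The very fast simulated annealing ingredient of BSI can be sketched as follows (an illustrative toy search, not the CHASM/BSI implementation; the cost function, bounds, and tuning constants are all hypothetical): Cauchy-like proposal steps shrink as the temperature cools, and uphill moves are accepted with Boltzmann probability.

```python
import math
import random

def vfsa(cost, bounds, n_iter=5000, seed=3):
    """Sketch of a very-fast-simulated-annealing parameter search.
    Proposes heavy-tailed steps whose scale shrinks with temperature
    and accepts uphill moves with Boltzmann probability."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    c = cost(x)
    best, best_c = x[:], c
    for k in range(1, n_iter + 1):
        T = 1.0 / math.sqrt(k)                        # cooling schedule
        cand = []
        for (lo, hi), xi in zip(bounds, x):
            u = rng.random() - 0.5
            step = 0.1 * T * math.tan(math.pi * u) * (hi - lo)
            cand.append(min(hi, max(lo, xi + step)))  # clip to the range
        cc = cost(cand)
        # Always accept improvements; occasionally accept uphill moves.
        if cc < c or rng.random() < math.exp(-(cc - c) / T):
            x, c = cand, cc
            if c < best_c:
                best, best_c = x[:], c
    return best, best_c

# Toy "flux misfit" with its minimum at (0.3, 0.7) inside the ranges.
cost = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
p_opt, c_opt = vfsa(cost, bounds=[(0.0, 1.0), (0.0, 1.0)])
```

Restricting `bounds` is where the paper's "realistic versus global ranges" question enters: the same sampler run over implausibly wide ranges can settle on physically unreasonable optima.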
NASA Astrophysics Data System (ADS)
Smyshlyaev, Sergei P.; Geller, Marvin A.; Yudin, Valery A.
1999-11-01
Lightning NOx production is one of the most important and most uncertain sources of reactive nitrogen in the atmosphere. To examine the role of NOx lightning production uncertainties in supersonic aircraft assessment studies, we have done a number of numerical calculations with the State University of New York at Stony Brook-Russian State Hydrometeorological Institute of Saint-Petersburg two-dimensional model. The amount of nitrogen oxides produced by lightning discharges was varied within its quoted uncertainty from 2 to 12 Tg N/yr. Different latitudinal, altitudinal, and seasonal distributions of lightning NOx production were considered. Results of these model calculations show that the assessment of supersonic aircraft impacts on the ozone layer is very sensitive to the strength of NOx production from lightning. NOx produced by the high-speed civil transport leads to positive column ozone changes for lightning NOx production less than 4 Tg N/yr, and to total ozone decrease for lightning NOx production more than 5 Tg N/yr for the same NOx emission scenario. For large lightning production the ozone response is mostly decreasing with increasing emission index, while for low lightning production the ozone response is mostly increasing with increasing emission index. Uncertainties in the global lightning NOx production strength may lead to uncertainties in column ozone of up to 4%. The uncertainties due to neglecting the seasonal variations of the lightning NOx production and its simplified latitude distribution are about 2 times smaller (1.5-2%). The type of altitude distribution for the lightning NOx production does not significantly impact the column ozone, but is very important for the assessment studies of aircraft perturbations of atmospheric ozone. Increased global lightning NOx production causes increased total ozone, but for assessment of the column ozone response to supersonic aircraft emissions, the increase of lightning NOx production leads to column ozone
NASA Astrophysics Data System (ADS)
Jun, Lu; Hao, Ding; Hong, Zhang; Ce, Gao Dian
Present HVAC equipment for residential buildings in the hot-summer-and-cold-winter climate region still consumes energy at a high level, so a high-efficiency HVAC system is urgently needed to achieve the preset government energy-saving goal. With its advantages of hygiene, comfort, and a uniform temperature field, the hot-water floor radiant heating system has been widely accepted. This paper puts forward a new air-conditioning approach that combines a fresh-air supply unit with such a floor radiant system for dehumidification and cooling in summer and heating in winter. By analyzing its advantages and limitations, we find that this so-called Cooling/Heating Floor AC System can improve the IAQ of residential buildings while maintaining high efficiency. We also recommend a methodology for HVAC system design that ensures reduced energy costs for users.
NASA Astrophysics Data System (ADS)
Tang, Guoping; Mayes, Melanie A.; Parker, Jack C.; Jardine, Philip M.
2010-09-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
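The forward-plus-inverse workflow described above can be sketched outside Excel (in Python rather than VBA; the Ogata-Banks solution below is the standard analytical solution of the one-dimensional equilibrium CDE for a continuous inlet, and `curve_fit` stands in for CXTFIT's weighted nonlinear least squares; the tracer values are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc, erfcx

def cde_breakthrough(t, v, D, x=10.0):
    """Ogata-Banks solution of the 1-D equilibrium CDE for a continuous
    unit-concentration inlet: relative concentration C/C0 at position x.
    The second term is written via erfcx to avoid overflow in exp(v*x/D)."""
    t = np.asarray(t, dtype=float)
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * (erfc((x - v * t) / s)
                  + np.exp(-((x - v * t) ** 2) / (4.0 * D * t))
                  * erfcx((x + v * t) / s))

# Synthetic breakthrough data generated with known v and D plus noise.
rng = np.random.default_rng(2)
t_obs = np.linspace(1.0, 30.0, 40)
c_obs = cde_breakthrough(t_obs, v=1.0, D=0.5) + rng.normal(0.0, 0.005, t_obs.size)

# Nonlinear least squares, standing in for CXTFIT's inverse mode.
(v_hat, D_hat), _ = curve_fit(cde_breakthrough, t_obs, c_obs,
                              p0=[0.5, 1.0], bounds=([0.01, 0.01], [5.0, 5.0]))
```

The covariance matrix returned by `curve_fit` (discarded here) is the starting point for the uncertainty and response-surface diagnostics the Excel tool adds on top of parameter estimation.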
An International Workshop on Uncertainty, Sensitivity, and Parameter Estimation for Multimedia Environmental Modeling was held August 1921, 2003, at the U.S. Nuclear Regulatory Commission Headquarters in Rockville, Maryland, USA. The workshop was organized and convened by the Fe...
Direct Aerosol Forcing Uncertainty
Mccomiskey, Allison
2008-01-15
Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty although comparable to uncertainty arising from some individual properties.
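The propagation scheme described above (per-property uncertainty as the product of a sensitivity and a measurement uncertainty, combined into a total) can be sketched with placeholder numbers; the sensitivities and sigmas below are illustrative assumptions, not the study's values.

```python
import math

# Hypothetical sensitivities of diurnally averaged DRF (W m^-2 per unit
# change in each property) and 1-sigma measurement uncertainties; these
# numbers are illustrative placeholders, not the study's values.
properties = {
    #            dDRF/dp  sigma_p
    "aod":      (-25.0,   0.01),   # aerosol optical depth
    "ssa":      ( 30.0,   0.03),   # single scattering albedo
    "asym":     ( 10.0,   0.02),   # asymmetry parameter
    "surf_alb": (  8.0,   0.02),   # surface albedo
}

# Per-property uncertainty = sensitivity x measurement uncertainty;
# the total assumes independent errors, combined in quadrature.
contrib = {k: abs(s * sig) for k, (s, sig) in properties.items()}
total = math.sqrt(sum(c ** 2 for c in contrib.values()))
largest = max(contrib, key=contrib.get)        # dominant error source
```

With these placeholder inputs the single-scattering-albedo term dominates the quadrature sum, mirroring the abstract's conclusion that SSA is usually the largest contributor.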
NASA Astrophysics Data System (ADS)
Masterlark, T.; Donovan, T. C.; Feigl, K. L.; Haney, M. M.; Thurber, C. H.
2013-12-01
Forward models of volcano deformation, due to a pressurized magma chamber embedded in an elastic domain, can predict observed surface deformation. Inverse models of surface deformation allow us to estimate characteristic parameters that describe the deformation source, such as the position and strength of a pressurized magma chamber embedded in an elastic domain. However, the specific distribution of material properties controls how the pressurization translates to surface deformation in a forward model, or alternatively, how observed surface deformation translates to source parameters in an inverse model. Seismic tomography models can describe the specific distributions of material properties that are necessary for accurate forward and inverse models of volcano deformation. The aim of this project is to investigate how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. To do so, we combine FEM-based nonlinear inverse analyses of InSAR data for Okmok volcano, Alaska, as an example to estimate sensitivities of source parameters to uncertainties in seismic tomography. More specifically, we use Monte Carlo methods to construct an assembly of FEMs that simulate a pressurized magma chamber in the domain of Okmok. Each FEM simulates a realization of source parameters (three-component magma chamber position), a material property distribution that samples the seismic tomography model with a normal velocity perturbation of +/-10%, and a corresponding linear pressure estimate calculated using the Pinned Mesh Perturbation method. We then analyze the a posteriori results to quantify sensitivities of source parameter estimates to the seismic tomography uncertainties. Preliminary results suggest that uncertainties in the seismic tomography do not significantly influence the estimated source parameters at a 95% confidence level. The presence of heterogeneous material properties
Pelvic Floor Disorders Network
... to develop and perform research studies related to women with pelvic floor disorders. In this way, studies can be done more quickly than if the medical centers were working alone. Doctors, nurses, other health care workers, and support staff all play important roles.The ...
Detail view of floor mosaic in first floor lobby ...
Detail view of floor mosaic in first floor lobby - St. Elizabeths Hospital, Hitchcock Hall, 2700 Martin Luther King Jr. Avenue, Southeast, 588-604 Redwood Street, Southeast, Washington, District of Columbia, DC
First floor lobby, showing stairs to ground floor Fitzsimons ...
First floor lobby, showing stairs to ground floor - Fitzsimons General Hospital, Main Hospital Building, Charlie Kelly Boulevard, North side, at intersection of Sharon A. Lane Drive, Aurora, Adams County, CO
Stairwell from first floor to ground floor Fitzsimons General ...
Stairwell from first floor to ground floor - Fitzsimons General Hospital, Main Hospital Building, Charlie Kelly Boulevard, North side, at intersection of Sharon A. Lane Drive, Aurora, Adams County, CO
Railing detail, stairs from first floor to ground floor ...
Railing detail, stairs from first floor to ground floor - Fitzsimons General Hospital, Main Hospital Building, Charlie Kelly Boulevard, North side, at intersection of Sharon A. Lane Drive, Aurora, Adams County, CO
Two and Three Bedroom Units: First Floor Plan, Second Floor ...
Two and Three Bedroom Units: First Floor Plan, Second Floor Plan, South Elevation (As Built), North Elevation (As Built), East Elevation (As Built), East Elevation (Existing), North Elevation (Existing) - Aluminum City Terrace, East Hill Drive, New Kensington, Westmoreland County, PA
16. SANDSORTING BUILDING, FIRST FLOOR, MEZZANINE ON LEFT (BELOW FLOOR ...
16. SAND-SORTING BUILDING, FIRST FLOOR, MEZZANINE ON LEFT (BELOW FLOOR ARE CONCRETE AND STORAGE BINS), LOOKING NORTH - Mill "C" Complex, Sand-Sorting Building, South of Dee Bennet Road, near Illinois River, Ottawa, La Salle County, IL
STIRLING'S QUARTERS SMALL BARN: FIRST FLOOR PLAN; SECOND FLOOR PLAN; ...
STIRLING'S QUARTERS SMALL BARN: FIRST FLOOR PLAN; SECOND FLOOR PLAN; SOUTH ELEVATION; EAST ELEVATION; NORTH ELEVATION; WEST ELEVATION. - Stirling's Quarters, 555 Yellow Springs Road, Tredyffrin Township, Valley Forge, Chester County, PA
Ciffroy, P; Alfonso, B; Altenpohl, A; Banjac, Z; Bierkens, J; Brochot, C; Critto, A; De Wilde, T; Fait, G; Fierens, T; Garratt, J; Giubilato, E; Grange, E; Johansson, E; Radomyski, A; Reschwann, K; Suciu, N; Tanaka, T; Tediosi, A; Van Holderbeke, M; Verdonck, F
2016-10-15
MERLIN-Expo is a library of models that was developed in the framework of the FP7 EU project 4FUN in order to provide an integrated assessment tool for state-of-the-art exposure assessment for environment, biota and humans, allowing the detection of scientific uncertainties at each step of the exposure process. This paper describes the main features of the MERLIN-Expo tool. The main challenges in exposure modelling that MERLIN-Expo has tackled are: (i) the integration of multimedia (MM) models simulating the fate of chemicals in environmental media, and of physiologically based pharmacokinetic (PBPK) models simulating the fate of chemicals in the human body. MERLIN-Expo thus allows the determination of internal effective chemical concentrations; (ii) the incorporation of a set of functionalities for uncertainty/sensitivity analysis, from screening to variance-based approaches. The availability of such tools for uncertainty and sensitivity analysis is intended to facilitate the incorporation of such issues in future decision making; (iii) the integration of human and wildlife biota targets with common fate modelling in the environment. MERLIN-Expo is composed of a library of fate models dedicated to non-biological receptor media (surface waters, soils, outdoor air), biological media of concern for humans (several cultivated crops, mammals, milk, fish), as well as wildlife biota (primary producers in rivers, invertebrates, fish) and humans. These models can be linked together to create flexible scenarios relevant for both human and wildlife biota exposure. Standardized documentation for each model and training material were prepared to support an accurate use of the tool by end-users. One of the objectives of the 4FUN project was also to increase the confidence in the applicability of the MERLIN-Expo tool through targeted realistic case studies. In particular, we aimed at demonstrating the feasibility of building complex realistic exposure scenarios and the accuracy of the
CAST FLOOR WITH VIEW OF TORPEDO LADLE (BENEATH CAST FLOOR) ...
CAST FLOOR WITH VIEW OF TORPEDO LADLE (BENEATH CAST FLOOR) AND KEEPERS OF THE CAST HOUSE FLOOR, S.L. KIMBROUGH AND DAVID HOLMES. - U.S. Steel, Fairfield Works, Blast Furnace No. 8, North of Valley Road, West of Ensley-Pleasant Grove Road, Fairfield, Jefferson County, AL
49. TOP FLOOR OF 1852 WING LOOKING EAST. FLOOR COVERING ...
49. TOP FLOOR OF 1852 WING LOOKING EAST. FLOOR COVERING INDICATES ANGLE OF INTERSECTION BETWEEN THIS AND THE EARLIER WING. NOTE ALSO CHANGE IN ORIENTATION OF COLUMNS AND HANGING LIGHT FIXTURE. BRIGHT AREA AT CEILING IN MIDDLE DISTANCE INDICATES SKYLIGHT. THIS FLOOR ADDED CA. 1880. - Boston Manufacturing Company, 144-190 Moody Street, Waltham, Middlesex County, MA
Interior, view of third floor room, with original flooring, camera ...
Interior, view of third floor room, with original flooring, camera facing northwest, this was typical of the rooms on the third floor before the rehabilitation by the Navy in the 1950s and 1960s - Naval Training Station, Senior Officers' Quarters District, Quarters No. 4, Naval Station Treasure Island, 4 Whiting Way, Yerba Buena Island, San Francisco, San Francisco County, CA
NASA Astrophysics Data System (ADS)
Tessler, Z. D.; Vorosmarty, C. J.; Cohen, S.; Tang, H.
2014-12-01
A loose coupling of a basin-scale hydrological and sediment flux model with a coastal ocean hydrodynamics model is used to assess the importance of uncertainties in river mouth locations and fluxes on coastal geomorphology of the Mekong river delta. At the land-ocean interface, river deltas mediate the flux of water, sediment, and nutrients from the basin watershed, through the complex delta river network, and into the coastal ocean. In the Mekong river delta, irrigation networks and surface water storage for rice cultivation redistribute, in space and time, water and sediment fluxes along the coastline. Distribution of fluxes through the delta is important for accurate assessment of delta land aggregation, coastline migration, and coastal ocean biogeochemistry. Using a basin-scale hydrological model, WBMsed, interfaced with a coastal hydrodynamics/wave/sediment model, COAWST, we investigate freshwater and sediment plumes and morphological changes to the subaqueous delta front. There is considerable uncertainty regarding how the delta spatially filters water and sediment fluxes as they transit through the river and irrigation network. By adjusting the placement and relative distribution of WBMsed discharge along the coast, we estimate the resulting bounds on sediment plume structure, timing, and morphological deposition patterns. The coastal ocean model is validated by comparing simulated plume structure and seasonality to MERIS- and MODIS-derived estimates of surface turbidity. We find good agreement with regard to plume extent and timing, with plumes weakest in the early spring, extending strongly to the west in the fall, and toward the east in winter. Uncertainty regarding river outflow distribution along the coastline leads to substantial uncertainty in rates of morphological change, particularly away from the main Mekong River distributary channels.
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-10-01
Existing methods for sensitivity analysis are described and new techniques are proposed. These techniques are evaluated with respect to the QUASAR program. Merits and limitations of the various approaches are examined by a detailed application to the Suppression Pool Aerosol Removal Code (SPARC). 17 refs., 7 figs., 12 tabs.
NASA Astrophysics Data System (ADS)
Tavakoli, S.; Poslad, S.; Fruhwirth, R.; Winter, M.
2012-04-01
This paper introduces an application of a novel EventTracker platform for instantaneous Sensitivity Analysis (SA) of large-scale real-time geo-information. Earth disaster management systems demand high quality information to aid a quick and timely response to their evolving environments. The idea behind the proposed EventTracker platform is the assumption that modern information management systems are able to capture data in real-time and have the technological flexibility to adjust their services to work with specific sources of data/information. However, to assure this adaptation in real time, the online data should be collected, interpreted, and translated into corrective actions in a concise and timely manner. This can hardly be handled by existing sensitivity analysis methods because they rely on historical data and lazy processing algorithms. In event-driven systems, the effect of system inputs on its state is of value, as events could cause this state to change. This 'event triggering' situation underpins the logic of the proposed approach. The event tracking sensitivity analysis method describes the system variables and states as a collection of events. The higher the occurrence of an input variable during the triggering of an event, the greater its potential impact will be on the final analysis of the system state. Experiments were designed to compare the proposed event tracking sensitivity analysis with existing entropy-based sensitivity analysis methods. The results have shown a 10% improvement in computational efficiency with no compromise in accuracy. They have also shown that the computational time to perform the sensitivity analysis is 0.5% of that required by the entropy-based method. The proposed method has been applied to real world data in the context of preventing emerging crises at drilling rigs. One of the major purposes of such rigs is to drill boreholes to explore oil or gas reservoirs with the final scope of recovering the content
Chronic pelvic floor dysfunction.
Hartmann, Dee; Sarton, Julie
2014-10-01
The successful treatment of women with vestibulodynia and its associated chronic pelvic floor dysfunctions requires interventions that address a broad field of possible pain contributors. Pelvic floor muscle hypertonicity was implicated in the mid-1990s as a trigger of major chronic vulvar pain. Painful bladder syndrome, irritable bowel syndrome, fibromyalgia, and temporomandibular joint disorder are known common comorbidities that can cause a host of associated muscular, visceral, bony, and fascial dysfunctions. It appears that normalizing all of those disorders plays a pivotal role in reducing complaints of chronic vulvar pain and sexual dysfunction. Though the studies have yet to prove a specific protocol, physical therapists trained in pelvic dysfunction are reporting success with restoring tissue normalcy and reducing vulvar and sexual pain. A review of pelvic anatomy and common findings is presented along with suggested physical therapy management. PMID:25108498
Shapiro, Martin S; Schuck-Paim, Cynthia; Kacelnik, Alex
2012-02-01
Observations that humans and other species are sensitive to variability in the outcome of their choices have led to the widespread assumption that this sensitivity reflects adaptations to cope with risk (stochasticity of action consequences). We question this assumption in experiments with starlings. We show that choices between outcomes that are risky in both amount and delay to food are predictable from preferences in the absence of risk. We find that the overarching best predictor of an option's value is the average of the ratios of amount to delay across its (frequency weighted) outcomes, an expression known as "Expectation of the Ratios", or EoR. Most tests of risk sensitivity focus on the predicted impact of energetic state on preference for risk. We show instead that under controlled state conditions subjects are variance- and risk-neutral with respect to EoR, and this implies variance neutrality for amounts and variance-proneness for delays. The weak risk aversion for amounts often reported requires a small modification of EoR. EoR is consistent with associative learning: acquisition of value for initially neutral stimuli is roughly proportional to the magnitude of their consequences and inversely proportional to the interval between the stimulus and its consequence's onset. If, as is likely, the effect of amount on acquisition is sublinear, the result is a deviation from EoR towards risk aversion for amount. In 3 experiments, we first establish individual birds' preferences between pairs of fixed options that differ in both amount and delay (small-sooner vs. large-later), and then examine choices between stochastic mixtures that include these options. Experiment 1 uses a titration to establish certainty equivalents, while experiments 2 and 3 measure degree of preference between options with static parameters. The mixtures differ in the coefficient of variation of amount, delay, or both, but EoR is sufficient to predict all results, with no additional
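The EoR expression is simple enough to state in code. The sketch below uses invented option parameters (not the starling experiments' actual schedules) to show how a fixed option and a stochastic mixture are scored on the same scale:

```python
def eor(outcomes):
    """Expectation of the Ratios (EoR): the frequency-weighted average
    of the amount/delay ratio across an option's outcomes.
    `outcomes` is a list of (probability, amount, delay) tuples.
    """
    return sum(p * a / d for p, a, d in outcomes)

# Invented illustration: a fixed small-sooner option, a fixed
# large-later option, and a 50/50 stochastic mixture of the two.
small_sooner = [(1.0, 2.0, 2.0)]                  # 2 units after 2 s
large_later = [(1.0, 5.0, 10.0)]                  # 5 units after 10 s
mixture = [(0.5, 2.0, 2.0), (0.5, 5.0, 10.0)]     # risky in amount and delay

v_ss, v_ll, v_mix = eor(small_sooner), eor(large_later), eor(mixture)
```

Under EoR the mixture's value is just the probability-weighted mean of its components' ratios, so preference among the three options follows from the risk-free ratios alone, which is the paper's central claim.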
NASA Astrophysics Data System (ADS)
Barani, Simone; Spallarossa, Daniele; Bazzurro, Paolo; Eva, Claudio
2007-05-01
The use of logic trees in probabilistic seismic hazard analyses often involves a large number of branches that reflect the uncertainty in the selection of different models and in the selection of the parameter values of each model. The sensitivity analysis, as proposed by Rabinowitz and Steinberg [Rabinowitz, N., Steinberg, D.M., 1991. Seismic hazard sensitivity analysis: a multi-parameter approach. Bull. Seismol. Soc. Am. 81, 796-817], is an efficient tool that allows the construction of logic trees focusing attention on the parameters that have greater impact on the hazard. In this paper the sensitivity analysis is performed in order to identify the parameters that have the largest influence on the Western Liguria (North Western Italy) seismic hazard. The analysis is conducted for six strategic sites following the multi-parameter approach developed by Rabinowitz and Steinberg [Rabinowitz, N., Steinberg, D.M., 1991. Seismic hazard sensitivity analysis: a multi-parameter approach. Bull. Seismol. Soc. Am. 81, 796-817] and accounts for both mean hazard values and hazard values corresponding to different percentiles (e.g., 16%-ile and 84%-ile). The results are assessed in terms of the expected PGA with a 10% probability of exceedance in 50 years for rock conditions and account for both the contribution from specific source zones using the Cornell approach [Cornell, C.A., 1968. Engineering seismic risk analysis. Bull. Seismol. Soc. Am. 58, 1583-1606] and the spatially smoothed seismicity [Frankel, A., 1995. Mapping seismic hazard in the Central and Eastern United States. Seismol. Res. Lett. 66, 8-21]. The influence of different procedures for calculating seismic hazard, seismic catalogues (epicentral parameters), source zone models, frequency-magnitude parameters, maximum earthquake magnitude values and attenuation relationships is considered. As a result, the sensitivity analysis allows us to identify the parameters with higher influence on the hazard. Only these
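The hazard level used above, a 10% probability of exceedance in 50 years, corresponds to a fixed mean return period under the standard Poisson occurrence assumption. A quick sketch of that conversion:

```python
import math

def return_period(p_exceed, t_years):
    """Mean return period T implied by an exceedance probability over
    an exposure time, assuming Poisson occurrences: P = 1 - exp(-t/T).
    """
    return -t_years / math.log(1.0 - p_exceed)

# The 10%-in-50-years design level works out to roughly 475 years.
T = return_period(0.10, 50.0)
```

This is why PGA maps quoted at 10% in 50 years are often described interchangeably as 475-year return-period maps.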
NASA Technical Reports Server (NTRS)
Thate, Robert
2012-01-01
The modular flooring system (MFS) was developed to provide a portable, modular, durable carpeting solution for NASA's Robotics Alliance Project's (RAP) outreach efforts. It was also designed to improve and replace a modular flooring system that was too heavy for safe use and transportation. The MFS was developed for use as the flooring for various robotics competitions that RAP utilizes to meet its mission goals. One of these competitions, the FIRST Robotics Competition (FRC), currently uses two massive rolls of broadloom carpet for the foundation of the arena in which the robots are contained during the competition. The area of the arena is approximately 30 by 72 ft (approximately 9 by 22 m). This carpet is very cumbersome and requires large-capacity vehicles, handling equipment, and personnel to transport and deploy. The broadloom carpet sustains severe abuse from the robots during a regular three-day competition, and as a result, the carpet is not used again for competition. Similarly, broadloom carpets used for trade shows at convention centers around the world are typically discarded after only one use. This innovation provides a green solution to this wasteful practice. Each of the flooring modules in the previous system weighed 44 lb (~20 kg). The improvements in the overall design of the system reduce the weight of each module by approximately 22 lb (~10 kg) (50%), and utilize an improved "module-to-module" connection method that is superior to the previous system. The MFS comprises 4-by-4-ft (~1.2-by-1.2-m) carpet module assemblies that utilize commercially available carpet tiles that are bonded to a lightweight substrate. The substrate surface opposite from the carpeted surface has a module-to-module connecting interface that allows for the modules to be connected, one to the other, as the modules are constructed. This connection is hidden underneath the modules, creating a smooth, co-planar flooring surface. The modules are stacked and strapped
Waker, Anthony; Taylor, Graeme
2014-10-01
The REM500 is a commercial instrument based on a tissue-equivalent proportional counter (TEPC) that has been successfully deployed as a hand-held neutron monitor, although its sensitivity is regarded by some workers as low for nuclear power plant radiation protection work. Improvements in sensitivity can be obtained using a multi-element proportional counter design in which a large number of small detecting cavities replace the single large volume cavity of conventional TEPCs. In this work, the authors quantify the improvement in uncertainty that can be obtained by comparing the ambient dose equivalent measured with a REM500, which utilises a 5.72 cm (2¼ inch) diameter Rossi counter, with that of a multi-element TEPC designed to have the sensitivity of a 12.7 cm (5 inch) spherical TEPC. The results obtained also provide some insight into the influence of other design features of TEPCs, such as geometry and gas filling, on the measurement of ambient dose equivalent. PMID:24711528
Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.; GARNER,J.W.; MACKINNON,ROBERT J.; MILLER,JOEL D.; SCHREIBER,J.D.; VAUGHN,PALMER
2000-05-22
Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.
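The rank-transformation step named above is worth making concrete. The pure-Python sketch below (sample values invented; the actual WIPP analyses apply stepwise regression and partial correlation to Latin-hypercube samples) shows why ranks help with monotone but nonlinear input-output relationships like pressure versus permeability:

```python
def ranks(xs):
    """Rank-transform a sample of distinct values (1 = smallest)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def pearson(xs, ys):
    """Ordinary product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def spearman(xs, ys):
    """Rank correlation: Pearson correlation of rank-transformed data."""
    return pearson(ranks(xs), ranks(ys))

# Invented monotone, nonlinear response: raw Pearson correlation
# understates the relationship, while the rank correlation is exact.
perm = [3.0, 1.0, 4.0, 1.5, 5.0, 2.0]   # e.g. sampled borehole permeability
pressure = [x ** 3 for x in perm]        # monotone nonlinear response
rho = spearman(perm, pressure)
```

Regressions on ranks (rather than raw values) are what make the stepwise procedure robust to such nonlinearities.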
Helton, Jon Craig; Sallaberry, Cedric M.; Hansen, Clifford W.
2010-05-01
Extensive work has been carried out by the U.S. Department of Energy (DOE) in the development of a proposed geologic repository at Yucca Mountain (YM), Nevada, for the disposal of high-level radioactive waste. As part of this development, an extensive performance assessment (PA) for the YM repository was completed in 2008 [1] and supported a license application by the DOE to the U.S. Nuclear Regulatory Commission (NRC) for the construction of the YM repository [2]. This presentation provides an overview of the conceptual and computational structure of the indicated PA (hereafter referred to as the 2008 YM PA) and the roles that uncertainty analysis and sensitivity analysis play in this structure.
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
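The adjoint efficiency claimed above can be seen in a minimal numeric sketch (2x2 system, values invented). For a linear model Ax = b with scalar response y = cᵀx, one adjoint solve Aᵀλ = c yields the sensitivity of y to every entry of b at once, since dy/db_i = λ_i:

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule (illustration only)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

A = [[4.0, 1.0],
     [2.0, 3.0]]
c = [1.0, 1.0]  # response y = x0 + x1

# Single adjoint solve: A^T lam = c gives dy/db_i = lam_i for all i.
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
lam = solve2(At, c)

# Cross-check against a direct finite-difference perturbation of b[0].
b = [5.0, 7.0]
y = sum(ci * xi for ci, xi in zip(c, solve2(A, b)))
b_pert = [b[0] + 1e-6, b[1]]
y_pert = sum(ci * xi for ci, xi in zip(c, solve2(A, b_pert)))
fd = (y_pert - y) / 1e-6
```

With n input parameters, the finite-difference route needs n extra solves of the primal problem, while the adjoint route needs exactly one solve regardless of n, which is the efficiency the report contrasts with statistical sampling.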
Floor-Fractured Craters through Machine Learning Methods
NASA Astrophysics Data System (ADS)
Thorey, C.
2015-12-01
Floor-fractured craters are impact craters that have undergone post-impact deformations. They are characterized by shallow floors with a plate-like or convex appearance, wide floor moats, and radial, concentric, and polygonal floor-fractures. While the origin of these deformations has long been debated, it is now generally accepted that they are the result of the emplacement of shallow magmatic intrusions below their floor. These craters thus constitute an efficient tool to probe the importance of intrusive magmatism from the lunar surface. The most recent catalog of lunar floor-fractured craters references about 200 of them, mainly located around the lunar maria. Herein, we will discuss the possibility of using machine learning algorithms to try to detect new floor-fractured craters on the Moon among the 60000 craters referenced in the most recent catalogs. In particular, we will use the gravity field provided by the Gravity Recovery and Interior Laboratory (GRAIL) mission, and the topographic dataset obtained from the Lunar Orbiter Laser Altimeter (LOLA) instrument, to design a set of representative features for each crater. We will then discuss the possibility of designing a binary supervised classifier, based on these features, to discriminate between the presence or absence of a crater-centered intrusion below a specific crater. First predictions from different classifiers in terms of their accuracy and uncertainty will be presented.
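The classification setup, features per crater and a binary fractured/unfractured label, can be sketched with a deliberately simple stand-in classifier. Everything here is invented for illustration (the feature values, the nearest-centroid rule); the abstract does not commit to a particular algorithm:

```python
def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class NearestCentroid:
    """Minimal binary classifier: assign the label of the closer class
    centroid. A stand-in for whatever supervised model is trained on
    the GRAIL/LOLA crater features."""
    def fit(self, X, y):
        self.c0 = centroid([x for x, lab in zip(X, y) if lab == 0])
        self.c1 = centroid([x for x, lab in zip(X, y) if lab == 1])
        return self
    def predict(self, X):
        return [int(dist2(x, self.c1) < dist2(x, self.c0)) for x in X]

# Hypothetical per-crater features: (gravity-anomaly score,
# floor-shallowness score). Label 1 = floor-fractured.
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [0, 0, 1, 1]
clf = NearestCentroid().fit(X, y)
```

A trained model of this shape can then be run over all ~60000 cataloged craters to flag candidates for visual inspection.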
NASA Astrophysics Data System (ADS)
Thornton, Steven F.; Lerner, David N.; Banwart, Steven A.
2001-12-01
A quantitative methodology is described for the field-scale performance assessment of natural attenuation using plume-scale electron and carbon balances. This provides a practical framework for the calculation of global mass balances for contaminant plumes, using mass inputs from the plume source, background groundwater and plume residuals in a simplified box model. Biodegradation processes and reactions included in the analysis are identified from electron acceptors, electron donors and degradation products present in these inputs. Parameter values used in the model are obtained from data acquired during typical site investigation and groundwater monitoring studies for natural attenuation schemes. The approach is evaluated for a UK Permo-Triassic Sandstone aquifer contaminated with a plume of phenolic compounds. Uncertainty in the model predictions and sensitivity to parameter values was assessed by probabilistic modelling using Monte Carlo methods. Sensitivity analyses were compared for different input parameter probability distributions and a base case using fixed parameter values, using an identical conceptual model and data set. Results show that consumption of oxidants by biodegradation is approximately balanced by the production of CH4 and total dissolved inorganic carbon (TDIC) which is conserved in the plume. Under this condition, either the plume electron or carbon balance can be used to determine contaminant mass loss, which is equivalent to only 4% of the estimated source term. This corresponds to a first order, plume-averaged, half-life of >800 years. The electron balance is particularly sensitive to uncertainty in the source term and dispersive inputs. Reliable historical information on contaminant spillages and detailed site investigation are necessary to accurately characterise the source term. The dispersive influx is sensitive to variability in the plume mixing zone width. Consumption of aqueous oxidants greatly exceeds that of mineral oxidants
Thornton, S F; Lerner, D N; Banwart, S A
2001-12-15
A quantitative methodology is described for the field-scale performance assessment of natural attenuation using plume-scale electron and carbon balances. This provides a practical framework for the calculation of global mass balances for contaminant plumes, using mass inputs from the plume source, background groundwater and plume residuals in a simplified box model. Biodegradation processes and reactions included in the analysis are identified from electron acceptors, electron donors and degradation products present in these inputs. Parameter values used in the model are obtained from data acquired during typical site investigation and groundwater monitoring studies for natural attenuation schemes. The approach is evaluated for a UK Permo-Triassic Sandstone aquifer contaminated with a plume of phenolic compounds. Uncertainty in the model predictions and sensitivity to parameter values was assessed by probabilistic modelling using Monte Carlo methods. Sensitivity analyses were compared for different input parameter probability distributions and a base case using fixed parameter values, using an identical conceptual model and data set. Results show that consumption of oxidants by biodegradation is approximately balanced by the production of CH4 and total dissolved inorganic carbon (TDIC) which is conserved in the plume. Under this condition, either the plume electron or carbon balance can be used to determine contaminant mass loss, which is equivalent to only 4% of the estimated source term. This corresponds to a first order, plume-averaged, half-life of > 800 years. The electron balance is particularly sensitive to uncertainty in the source term and dispersive inputs. Reliable historical information on contaminant spillages and detailed site investigation are necessary to accurately characterise the source term. The dispersive influx is sensitive to variability in the plume mixing zone width. Consumption of aqueous oxidants greatly exceeds that of mineral oxidants
9. LOOKING FROM FLOOR 1 UP THROUGH OPENING TO FLOOR ...
9. LOOKING FROM FLOOR 1 UP THROUGH OPENING TO FLOOR 2; OPENING IN THE FLOOR IS TO ALLOW THE RUNNER STONES TO BE FLIPPED OVER FOR SHARPENING; AT THE FIRST FLOOR ARE THE POSTS SUPPORTING THE BRIDGEBEAMS ON WHICH THE BRIDGE TREES PIVOT; THE CENTER POST RISES ABOVE THE STONES TO RECEIVE THE FOOT BEARING OF THE UPRIGHT SHAFT; ALSO SEEN ARE THE STONE SPINDLES, UNDER SIDES OF THE BED STONES, STONE NUT AND GREAT SPUR WHEEL. - Pantigo Windmill, James Lane, East Hampton, Suffolk County, NY
[Pelvic floor muscle training and pelvic floor disorders in women].
Thubert, T; Bakker, E; Fritel, X
2015-05-01
Our goal is to provide an update on the results of pelvic floor rehabilitation in the treatment of urinary incontinence and genital prolapse symptoms. Pelvic floor muscle training allows a reduction of urinary incontinence symptoms. Pelvic floor muscle contractions supervised by a healthcare professional allow cure in half of cases of stress urinary incontinence. Viewing this contraction through biofeedback improves outcomes, but this effect could also be due to a more intensive and prolonged program with the physiotherapist. The place of electrostimulation remains unclear. The results obtained with vaginal cones are similar to pelvic floor muscle training with or without biofeedback or electrostimulation. It is not known whether pelvic floor muscle training has an effect after one year. In case of stress urinary incontinence, supervised pelvic floor muscle training avoids surgery in half of the cases at 1-year follow-up. Pelvic floor muscle training is the first-line treatment of post-partum urinary incontinence. Its preventive effect is uncertain. Pelvic floor muscle training may reduce the symptoms associated with genital prolapse. In conclusion, pelvic floor rehabilitation supervised by a physiotherapist is an effective short-term treatment to reduce the symptoms of urinary incontinence or pelvic organ prolapse. PMID:25921509
NASA Astrophysics Data System (ADS)
Zhang, Wenxian; Trail, Marcus A.; Hu, Yongtao; Nenes, Athanasios; Russell, Armistead G.
2015-12-01
Regional air quality models are widely used to evaluate control strategy effectiveness. As such, it is important to understand the accuracy of model simulations to establish confidence in model performance and to guide further model development. Particulate matter with aerodynamic diameter less than 2.5 μm (PM2.5) is regulated as one of the criteria pollutants by the National Ambient Air Quality Standards (NAAQS), and PM2.5 concentrations have a complex dependence on the emissions of a number of precursors, including SO2, NOx, NH3, VOCs, and primary particulate matter (PM). This study quantifies how the emission-associated uncertainties affect modeled PM2.5 concentrations and sensitivities using a reduced-form approach. This approach is computationally efficient compared to the traditional Monte Carlo simulation. The reduced-form model represents the concentration-emission response and is constructed using first- and second-order sensitivities obtained from a single CMAQ/HDDM-PM simulation. A case study is conducted in the Houston-Galveston-Brazoria (HGB) area. The uncertainty of modeled, daily average PM2.5 concentrations due to uncertain emissions is estimated to fall between 42% and 52% for different simulated concentration levels, and the uncertainty is evenly distributed in the modeling domain. Emission-associated uncertainty can account for much of the difference between simulation and ground measurements, as 60% of observed PM2.5 concentrations fall within the range of one standard deviation of corresponding simulated PM2.5 concentrations. Uncertainties in meteorological fields as well as the model representation of secondary organic aerosol formation are the other two key contributors to the uncertainty of modeled PM2.5. This study also investigates the uncertainties of the simulated first-order sensitivities and finds that the larger the first-order sensitivity, the lower its uncertainty associated with emissions. Sensitivity of PM2.5 to primary PM has
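A reduced-form model built from first- and second-order sensitivities is just a truncated Taylor expansion of the concentration-emission response. The sketch below uses invented coefficients (not values from the study) to show why propagating emission uncertainty through it is cheap:

```python
def reduced_form(c0, s1, s2, delta):
    """Second-order Taylor (reduced-form) response of a modeled
    concentration to a fractional emission perturbation `delta`
    (e.g. 0.2 for +20%), built from the first- and second-order
    sensitivities of a single HDDM simulation:
        C(delta) ~= C0 + S1*delta + 0.5*S2*delta**2
    """
    return c0 + s1 * delta + 0.5 * s2 * delta ** 2

# Illustrative values only: base PM2.5 of 12 ug/m3, first-order
# sensitivity 4.0 and second-order -1.0 (ug/m3 per unit fractional
# change in one precursor's emissions).
c0, s1, s2 = 12.0, 4.0, -1.0
c_plus20 = reduced_form(c0, s1, s2, 0.20)  # response to a +20% emission
```

Monte Carlo uncertainty propagation then amounts to evaluating this polynomial over samples of `delta` drawn from the emission-uncertainty distribution, instead of re-running the chemical transport model for each sample.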
NASA Technical Reports Server (NTRS)
2002-01-01
(Released 30 May 2002) Juventae Chasma is an enormous box canyon (250 km X 100 km) which opens to the north and forms the outflow channel Maja Vallis. Most Martian outflow channels such as Maja, Kasei, and Ares Valles begin at point sources such as box canyons and chaotic terrain and then flow unconfined into a basin region. This image captures a portion of the western floor of Juventae Chasma and shows a wide variety of landforms. Conical hills, mesas, buttes and plateaus of layered material dominate this scene and seem to be 'swimming' in vast sand sheets. The conical hills have a spur and gully topography associated with them while the flat topped buttes and mesas do not. This may be indicative of different materials that compose each of these landforms or it could be that the flat-topped layer has been completely eroded off of the conical hills thereby exposing a different rock type. Both the conical hills and flat-topped buttes and mesas have extensive scree slopes (heaps of eroded rock and debris). Ripples, which are inferred to be dunes, can also be seen amongst the hills. No impact craters can be seen in this image, indicating that the erosion and transport of material down the canyon wall and across the floor is occurring at a relatively rapid rate, so that any craters that form are rapidly buried or eroded.
Tactile Imaging Markers to Characterize Female Pelvic Floor Conditions
van Raalte, Heather; Egorov, Vladimir
2015-01-01
The Vaginal Tactile Imager (VTI) records pressure patterns from vaginal walls under an applied tissue deformation and during pelvic floor muscle contractions. The objective of this study is to validate tactile imaging and muscle contraction parameters (markers) sensitive to the female pelvic floor conditions. Twenty-two women with normal and prolapse conditions were examined by a vaginal tactile imaging probe. We identified 9 parameters which were sensitive to prolapse conditions (p < 0.05 for one-way ANOVA and/or p < 0.05 for t-test with correlation factor r from −0.73 to −0.56). The list of parameters includes pressure, pressure gradient and dynamic pressure response during muscle contraction at identified locations. These parameters may be used for biomechanical characterization of female pelvic floor conditions to support an effective management of pelvic floor prolapse. PMID:26389014
The NASA Langley Multidisciplinary Uncertainty Quantification Challenge
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.
2012-01-01
We compare and contrast measurements of the mass accommodation coefficient of water on a water surface made using ensemble and single particle techniques under conditions of supersaturation and subsaturation, respectively. In particular, we consider measurements made using an expansion chamber, a continuous flow streamwise thermal gradient cloud condensation nuclei chamber, the Leipzig Aerosol Cloud Interaction Simulator, aerosol optical tweezers, and electrodynamic balances. Although this assessment is not intended to be comprehensive, these five techniques are complementary in their approach and give values that span the range from near 0.1 to 1.0 for the mass accommodation coefficient. We use the same semianalytical treatment to assess the sensitivities of the measurements made by the various techniques to thermophysical quantities (diffusion constants, thermal conductivities, saturation pressure of water, latent heat, and solution density) and experimental parameters (saturation value and temperature). This represents the first effort to assess and compare measurements made by different techniques to attempt to reduce the uncertainty in the value of the mass accommodation coefficient. Broadly, we show that the measurements are consistent within the uncertainties inherent to the thermophysical and experimental parameters and that the value of the mass accommodation coefficient should be considered to be larger than 0.5. Accurate control and measurement of the saturation ratio is shown to be critical for a successful investigation of the surface transport kinetics during condensation/evaporation. This invariably requires accurate knowledge of the partial pressure of water, the system temperature, the droplet curvature and the saturation pressure of water. Further, the importance of including and quantifying the transport of heat in interpreting droplet measurements is highlighted; the particular issues associated with interpreting measurements of condensation
Miles, Rachael E H; Reid, Jonathan P; Riipinen, Ilona
2012-11-01
NASA Astrophysics Data System (ADS)
Sarri, A.; Guillas, S.; Day, S. J.; Dias, F.
2014-12-01
An extensive investigation of tsunami generation and the resulting coastal inundation due to coseismic seabed displacements has been performed for the Cascadia Subduction Zone. We have adopted a new approach to represent coseismic seabed deformation in giant earthquakes, which avoids artefacts generated at the edges of block deformations. We represent the deformation with arbitrarily shaped four-sided polygons in which subsidences and uplifts are represented as quadratic curves. The arbitrary shapes of these polygons allow the realistic representation of the deformed seabed as a continuous surface except at the trench, where the fault breaks the surface. Experimental Design is used to select combinations of three source characteristics generating different event scenarios amongst Cascadia whole-margin ruptures; further work on other event types is in progress. Following that, the numerical model VOLNA has been run for the different scenarios, simulating the induced tsunami wave propagation and coastal inundation. Statistical emulation has been applied to the wave elevation time series evaluations for many locations. Statistical emulators approximate expensive computer models: they are powerful tools for analyses that require many model evaluations since they can give accurate and fast probabilistic predictions. Registration and Functional Principal Components techniques are applied to the emulation process, leading to further improvement in predictions. Leave-one-out diagnostics are used to validate the emulator, showing excellent agreement between predictions and model evaluations. The statistical emulation is also used for sensitivity and uncertainty analyses. These two analyses require a large number of evaluations and hence cannot be carried out with the expensive computer model. Our approach can be applied to provide uncertainty estimates both in operational tsunami warnings and in tsunami risk modeling, and can be used with many different numerical models.
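The emulator idea in the abstract above can be illustrated with a toy surrogate. This is only a sketch under invented assumptions: the one-dimensional `expensive_model`, the nine design points, and the plain radial-basis-function interpolant are all made up, standing in for VOLNA runs and the statistical emulators actually used.

```python
import numpy as np

def expensive_model(x):
    # stand-in for a costly simulator run (e.g. one tsunami-model evaluation)
    return np.sin(3 * x) + x

X = np.linspace(0.0, 2.0, 9)     # design points: nine "model runs"
y = expensive_model(X)

def rbf_emulator(X, y, length=0.4):
    # plain Gaussian radial-basis-function interpolant through the runs
    K = np.exp(-((X[:, None] - X[None, :]) / length) ** 2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)  # tiny jitter for stability
    def predict(x):
        k = np.exp(-((x - X) / length) ** 2)
        return float(k @ w)
    return predict

emu = rbf_emulator(X, y)
```

Once fitted, `emu` can be evaluated thousands of times for sensitivity or uncertainty analysis at negligible cost, which is exactly why emulation enables analyses the expensive model cannot support.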
NASA Astrophysics Data System (ADS)
Hally, A.; Richard, E.; Ducrocq, V.
2013-12-01
The first Special Observation Period of the HyMeX campaign took place in the Mediterranean between September and November 2012 with the aim of better understanding the mechanisms which lead to heavy precipitation events (HPEs) in the region during the autumn months. Two such events, referred to as Intensive Observation Period 6 (IOP6) and Intensive Observation Period 7a (IOP7a), occurred respectively on 24 and 26 September over south-eastern France. IOP6 was characterised by moderate to weak low-level flow which led to heavy and concentrated convective rainfall over the plains near the coast, while IOP7a had strong low-level flow and consisted of a convective line over the mountainous regions further north and a band of stratiform rainfall further east. Firstly, an ensemble was constructed for each IOP using analyses from the AROME, AROME-WMED, ARPEGE and ECMWF operational models as initial (IC) and boundary (BC) conditions for the research model Meso-NH at a resolution of 2.5 km. A high level of model skill was seen for IOP7a, with a lower level of agreement with the observations for IOP6. Using the most accurate member of this ensemble as a CTRL simulation, three further ensembles were constructed in order to study uncertainties related to cloud physics and surface turbulence parameterisations. Perturbations were introduced in the time tendencies of the warm and cold microphysical and turbulence processes. An ensemble where all three sources of uncertainty were perturbed gave the greatest degree of dispersion in the surface rainfall for both IOPs. Comparing the level of dispersion to that of the ICBC ensemble demonstrated that when model skill is low (high) and low-level flow is weak to moderate (strong), the level of dispersion of the ICBC and physical perturbation ensembles is (is not) comparable. The level of sensitivity to these perturbations is thus concluded to be case dependent.
NASA Astrophysics Data System (ADS)
Harp, D.; Vesselinov, V. V.
2011-12-01
A newly developed methodology for model-based decision analysis is presented. The methodology incorporates a sampling approach, referred to as Agent-Based Analysis of Global Uncertainty and Sensitivity (ABAGUS; Harp & Vesselinov, 2011), that efficiently collects sets of acceptable solutions (i.e. acceptable model parameter sets) for different levels of a model performance metric representing the consistency of model predictions to observations. In this case, the performance metric is based on model residuals (i.e. discrepancies between observations and simulations). ABAGUS collects acceptable solutions from a discretized parameter space and stores them in a KD-tree for efficient retrieval. The parameter space domain (parameter minimum/maximum ranges) and discretization are predefined. On subsequent visits to collected locations, agents are provided with a modified value of the performance metric, and the model solution is not recalculated. The modified values of the performance metric sculpt the response surface (convexities become concavities), repulsing agents from collected regions. This promotes global exploration of the parameter space and discourages reinvestigation of regions of previously collected acceptable solutions. The resulting sets of acceptable solutions are formulated into a decision analysis using concepts from info-gap theory (Ben-Haim, 2006). Using info-gap theory, the decision robustness and opportuneness are quantified, providing measures of the immunity to failure and windfall, respectively, of alternative decisions. The approach is intended for cases where the information is extremely limited, resulting in non-probabilistic uncertainties concerning model properties such as boundary and initial conditions, model parameters, conceptual model elements, etc. The information provided by this analysis is weaker than the information provided by probabilistic decision analyses (i.e. posterior parameter distributions are not produced), however, this
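The collect-and-repulse mechanism described in the abstract above can be sketched in a few lines. Everything here is hypothetical: a one-parameter toy model, synthetic observations, a dictionary in place of the KD-tree, and a crude +10 penalty standing in for the metric modification that sculpts the response surface.

```python
import numpy as np

rng = np.random.default_rng(42)

obs = np.array([1.0, 2.1, 2.9])                 # synthetic observations
grid = np.round(np.linspace(0.0, 2.0, 21), 2)   # discretized 1-D parameter space

def model(a):
    # toy forward model standing in for an expensive simulation
    return a * np.array([1.0, 2.0, 3.0])

def misfit(a):
    # performance metric: sum of squared residuals
    return float(np.sum((model(a) - obs) ** 2))

threshold = 0.1   # acceptance level for the performance metric
visited = {}      # visited cell -> (inflated) metric; stands in for the KD-tree
acceptable = []   # collected acceptable parameter sets

for _ in range(500):
    a = float(rng.choice(grid))
    if a in visited:
        # revisit: inflate the stored metric instead of re-running the model,
        # turning collected convexities into concavities that repulse agents
        visited[a] += 10.0
        continue
    phi = misfit(a)
    visited[a] = phi
    if phi <= threshold:
        acceptable.append(a)
```

The key design point mirrored here is that revisits never re-evaluate the model: only the stored metric changes, so exploration cost stays proportional to the number of distinct cells visited.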
4. STAIR, FROM SECOND FLOOR TO THIRD FLOOR, FROM NORTHEAST. ...
4. STAIR, FROM SECOND FLOOR TO THIRD FLOOR, FROM NORTHEAST. Plan of stair is elliptical, the inside well measuring 54' on major axis and 14' on minor axis. ALSO NOTE HIGH REEDED WAINSCOT - Saltus-Habersham House, 802 Bay Street, Beaufort, Beaufort County, SC
18. FOURTH FLOOR BLDG. 28, RAISED CONCRETE SLAB FLOOR WITH ...
18. FOURTH FLOOR BLDG. 28, RAISED CONCRETE SLAB FLOOR WITH BLOCKS AND PULLEYS OVERHEAD LOOKING NORTHEAST. - Fafnir Bearing Plant, Bounded on North side by Myrtle Street, on South side by Orange Street, on East side by Booth Street & on West side by Grove Street, New Britain, Hartford County, CT
13. Bottom floor, tower interior showing concrete floor and cast ...
13. Bottom floor, tower interior showing concrete floor and cast iron bases for oil butts (oil butts removed when lighthouse lamp was converted to electric power.) - Block Island Southeast Light, Spring Street & Mohegan Trail at Mohegan Bluffs, New Shoreham, Washington County, RI
Floor Plans: Section "AA", Section "BB"; Floor Framing Plans: Section ...
Floor Plans: Section "A-A", Section "B-B"; Floor Framing Plans: Section "A-A", Section "B-B" - Fort Washington, Fort Washington Light, Northeast side of Potomac River at Fort Washington Park, Fort Washington, Prince George's County, MD
18. MAIN FLOOR HOLDING TANKS Main floor, looking at ...
18. MAIN FLOOR - HOLDING TANKS Main floor, looking at holding tanks against the west wall, from which sluice gates are seen protruding. Right foreground-wooden holding tanks. Note narrow wooden flumes through which fish were sluiced into holding and brining tanks. - Hovden Cannery, 886 Cannery Row, Monterey, Monterey County, CA
NASA Astrophysics Data System (ADS)
Falconer, David G.; Ueberschaer, Ronald M.
2000-07-01
Urban-warfare specialists, law-enforcement officers, counter-drug agents, and counter-terrorism experts encounter operational situations where they must assault a target building and capture or rescue its occupants. To minimize potential casualties, the assault team needs a picture of the building's interior and a copy of its floor plan. With this need in mind, we constructed a scale model of a single- story house and imaged its interior using synthetic-aperture techniques. The interior and exterior walls nearest the radar set were imaged with good fidelity, but the distal ones appear poorly defined and surrounded by ghosts and artifacts. The latter defects are traceable to beam attenuation, wavefront distortion, multiple scattering, traveling waves, resonance phenomena, and other effects not accounted for in the traditional (noninteracting, isotropic point scatterer) model for radar imaging.
Scott, Michael J.; Daly, Don S.; Zhou, Yuyu; Rice, Jennie S.; Patel, Pralit L.; McJeon, Haewon C.; Kyle, G. Page; Kim, Son H.; Eom, Jiyong; Clarke, Leon E.
2014-05-01
Improving the energy efficiency of the building stock, commercial equipment and household appliances can have a major impact on energy use, carbon emissions, and building services. Subnational regions such as U.S. states wish to increase their energy efficiency, reduce carbon emissions or adapt to climate change. Evaluating subnational policies to reduce energy use and emissions is difficult because of the uncertainties in socioeconomic factors, technology performance and cost, and energy and climate policies. Climate change may undercut such policies. Assessing these uncertainties can be a significant modeling and computation burden. As part of this uncertainty assessment, this paper demonstrates how a decision-focused sensitivity analysis strategy using fractional factorial methods can be applied to reveal the important drivers for detailed uncertainty analysis.
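A decision-focused screen of the kind described above can be illustrated with the smallest fractional factorial, a 2^(3-1) design. The factors and the toy response below are invented for illustration; the defining relation C = A*B halves the number of model runs, at the cost of aliasing the B main effect with the A*C interaction.

```python
from itertools import product

# full 2^2 design in A and B; C aliased through the defining relation C = A*B
runs = [(a, b, a * b) for a, b in product((-1, +1), repeat=2)]

def response(a, b, c):
    # toy model output: factor A dominates, C contributes weakly, B is inert
    return 10.0 + 4.0 * a + 0.5 * c

def main_effect(i):
    # difference between mean response at the high and low levels of factor i
    return sum(r[i] * response(*r) for r in runs) / (len(runs) / 2)

effects = {name: main_effect(i) for i, name in enumerate("ABC")}
```

With only four runs instead of eight, the screen correctly flags A as the important driver to carry into a detailed uncertainty analysis, which is the role fractional factorial methods play in the study.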
Gubbins, Simon; Carpenter, Simon; Baylis, Matthew; Wood, James L N; Mellor, Philip S
2008-03-01
Since 1998 bluetongue virus (BTV), which causes bluetongue, a non-contagious, insect-borne infectious disease of ruminants, has expanded northwards in Europe in an unprecedented series of incursions, suggesting that there is a risk to the large and valuable British livestock industry. The basic reproduction number, R(0), provides a powerful tool with which to assess the level of risk posed by a disease. In this paper, we compute R(0) for BTV in a population comprising two host species, cattle and sheep. Estimates for each parameter which influences R(0) were obtained from the published literature, using those applicable to the UK situation wherever possible. Moreover, explicit temperature dependence was included for those parameters for which it had been quantified. Uncertainty and sensitivity analyses based on Latin hypercube sampling and partial rank correlation coefficients identified temperature, the probability of transmission from host to vector and the vector to host ratio as being most important in determining the magnitude of R(0). The importance of temperature reflects the fact that it influences many processes involved in the transmission of BTV and, in particular, the biting rate, the extrinsic incubation period and the vector mortality rate. PMID:17638649
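The Latin hypercube sampling and partial rank correlation machinery used in the study above can be sketched as follows. The R0 expression here is a deliberately simplified stand-in (rising with vector:host ratio and biting rate, falling with vector mortality), not the published BTV formula, and the parameter ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n, d):
    # one stratified draw per equal-probability bin, independently shuffled per column
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

def ranks(v):
    return np.argsort(np.argsort(v)).astype(float)

def prcc(x, y, others):
    # partial rank correlation: correlate the rank residuals of x and y
    # after regressing out the ranks of the remaining inputs
    A = np.column_stack([np.ones(len(x))] + [ranks(o) for o in others])
    def resid(v):
        beta, *_ = np.linalg.lstsq(A, ranks(v), rcond=None)
        return ranks(v) - A @ beta
    rx, ry = resid(x), resid(y)
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

n = 500
u = latin_hypercube(n, 3)
m = 1.0 + 9.0 * u[:, 0]        # vector-to-host ratio, range [1, 10] (assumed)
a = 0.1 + 0.4 * u[:, 1]        # biting rate per day, range [0.1, 0.5] (assumed)
mu = 0.05 + 0.15 * u[:, 2]     # vector mortality per day, range [0.05, 0.2] (assumed)
r0 = m * a**2 / mu             # toy stand-in, NOT the published BTV expression

prcc_m = prcc(m, r0, [a, mu])
prcc_mu = prcc(mu, r0, [m, a])
```

Large positive (negative) PRCC values identify parameters whose increase raises (lowers) R0 most strongly, which is how the study ranks temperature-dependent processes and the vector-to-host ratio.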
Some Aspects of uncertainty in computational fluid dynamics results
NASA Technical Reports Server (NTRS)
Mehta, U. B.
1991-01-01
Uncertainties are inherent in computational fluid dynamics (CFD). These uncertainties need to be systematically addressed and managed. Sources of these uncertainties are discussed. Some recommendations are made for quantification of CFD uncertainties. A practical method of uncertainty analysis is based on sensitivity analysis. When CFD is used to design fluid dynamic systems, sensitivity-uncertainty analysis is essential.
Waterproof Raised Floor Makes Utility Lines Accessible
NASA Technical Reports Server (NTRS)
Cohen, M. M.
1984-01-01
Floor for laboratories, hospitals and factories waterproof yet allows access to subfloor utilities. Elevated access floor system designed for installations with multitude of diverse utility systems routed under and up through floor and requirement of separation of potentially conflicting utility services. Floor covered by continuous sheet of heat resealable vinyl. Floor system cut open when changes are made in utility lines and ducts. After modifications, floor covering resealed to protect subfloor utilities from spills and leaks.
Making A Precisely Level Floor
NASA Technical Reports Server (NTRS)
Simpson, William G.; Walker, William H.; Cather, Jim; Burch, John B.; Clark, Keith M.; Johnston, Dwight; Henderson, David E.
1989-01-01
Floor-pouring procedure yields large surface level, smooth, and hard. Floor made of self-leveling, slow-curing epoxy with added black pigment. Epoxy poured to thickness no greater than 0.33 in. (0.84 cm) on concrete base. Base floor seasoned, reasonably smooth and level, and at least 4 in. (10cm) thick. Base rests on thermal barrier of gravel or cinders and contains no steel plates, dividers, or bridges to minimize thermal distortion. Metal retaining wall surrounds base.
Low floor mass transit vehicle
Emmons, J. Bruce; Blessing, Leonard J.
2004-02-03
A mass transit vehicle includes a frame structure that provides an efficient and economical approach to providing a low floor bus. The inventive frame includes a stiff roof panel and a stiff floor panel. A plurality of generally vertical pillars extend between the roof and floor panels. A unique bracket arrangement is disclosed for connecting the pillars to the panels. Side panels are secured to the pillars and carry the shear stresses on the frame. A unique seating assembly that can be advantageously incorporated into the vehicle taking advantage of the load distributing features of the inventive frame is also disclosed.
21. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR ...
21. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR WAS USED FOR DEPLETED AND ENRICHED URANIUM FABRICATION. THE ORIGINAL DRAWING HAS BEEN ARCHIVED ON MICROFILM. THE DRAWING WAS REPRODUCED AT THE BEST QUALITY POSSIBLE. LETTERS AND NUMBERS IN THE CIRCLES INDICATE FOOTER AND/OR COLUMN LOCATIONS. - Rocky Flats Plant, Uranium Rolling & Forming Operations, Southeast section of plant, southeast quadrant of intersection of Central Avenue & Eighth Street, Golden, Jefferson County, CO
23. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR ...
23. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR HOUSED ADMINISTRATIVE OFFICES, THE CENTRAL COMPUTING, UTILITY SYSTEMS, ANALYTICAL LABORATORIES, AND MAINTENANCE SHOPS. THE ORIGINAL DRAWING HAS BEEN ARCHIVED ON MICROFILM. THE DRAWING WAS REPRODUCED AT THE BEST QUALITY POSSIBLE. LETTERS AND NUMBERS IN THE CIRCLES INDICATE FOOTER AND/OR COLUMN LOCATIONS. - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO
22. VIEW OF THE SECOND FLOOR PLAN. THE SECOND FLOOR ...
22. VIEW OF THE SECOND FLOOR PLAN. THE SECOND FLOOR CONTAINS THE AIR PLENUM AND SOME OFFICE SPACE. THE ORIGINAL DRAWING HAS BEEN ARCHIVED ON MICROFILM. THE DRAWING WAS REPRODUCED AT THE BEST QUALITY POSSIBLE. LETTERS AND NUMBERS IN THE CIRCLES INDICATE FOOTER AND/OR COLUMN LOCATIONS. - Rocky Flats Plant, Uranium Rolling & Forming Operations, Southeast section of plant, southeast quadrant of intersection of Central Avenue & Eighth Street, Golden, Jefferson County, CO
STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS
Harris, S.
2010-09-02
Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
STATISTICAL ANALYSIS OF TANK 19F FLOOR SAMPLE RESULTS
Harris, S.
2010-09-02
Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by a UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
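The one-sided upper confidence limit used in both tank reports follows the standard formula UCL95% = mean + t(0.95, n-1) * s / sqrt(n). The sketch below uses made-up concentrations and hard-codes the Student-t critical value for n = 6 samples (t(0.95, 5) is approximately 2.015) to stay dependency-free; the actual reports first average each sample over three analytical determinations.

```python
import math

def ucl95(samples, t_crit=2.015):
    """One-sided upper 95% confidence limit: mean + t * s / sqrt(n).
    The default t_crit is t(0.95, 5), valid only for n = 6 samples."""
    n = len(samples)
    mean = sum(samples) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))  # sample std dev
    return mean + t_crit * s / math.sqrt(n)

conc = [4.1, 3.8, 4.4, 4.0, 4.3, 3.9]  # made-up analyte concentrations, n = 6
```

As the abstract notes, the limit depends only on the number of samples, their average, and their standard deviation; more samples or lower scatter pull the UCL95% down toward the mean.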
Uncertainty in hydrological signatures
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; McMillan, H. K.
2015-09-01
Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, e.g. for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40 % relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
Uncertainty in hydrological signatures
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; McMillan, H. K.
2015-04-01
Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, including for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40% relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
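The Monte Carlo approach proposed above can be sketched with a toy flow record. The synthetic gamma-distributed series, the choice of a Q5/Q50 high-flow signature, and the 10% multiplicative error model (standing in for rating-curve uncertainty) are all illustrative assumptions, not the paper's actual data or error models.

```python
import numpy as np

rng = np.random.default_rng(1)
flow = rng.gamma(shape=2.0, scale=5.0, size=365)   # synthetic daily flow record

def signature(q):
    # high-flow signature: Q5/Q50, the 5% exceedance flow over the median
    return float(np.percentile(q, 95) / np.percentile(q, 50))

vals = []
for _ in range(1000):
    # zero-centred multiplicative 10% error standing in for rating-curve uncertainty
    err = rng.normal(loc=1.0, scale=0.1, size=flow.size)
    vals.append(signature(flow * err))

lo, hi = np.percentile(vals, [2.5, 97.5])   # 95% uncertainty interval for the signature
```

The width of (lo, hi) relative to the signature value is the kind of relative-uncertainty interval the paper reports; signatures built from spatial or temporal averages would show narrower intervals than those keyed to high-frequency responses.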
Pelvic floor muscle training exercises
... nlm.nih.gov/pubmed/22258946 . Dumoulin C, Hay-Smith J. Pelvic floor muscle training versus no treatment, ... nlm.nih.gov/pubmed/20091581 . Herderschee R, Hay-Smith EJC, Herbison GP, Roovers JP, Heineman MJ. Feedback ...
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 19 July 2004. The atmosphere of Mars is a dynamic system. Water-ice clouds, fog, and hazes can make imaging the surface from space difficult. Dust storms can grow from local disturbances to global sizes, through which imaging is impossible. Seasonal temperature changes are the usual drivers in cloud and dust storm development and growth.
Eons of atmospheric dust storm activity have left their mark on the surface of Mars. Dust carried aloft by the wind has settled out on every available surface; sand dunes have been created and moved by centuries of wind; and the effect of continual sand-blasting has modified many regions of Mars, creating yardangs and other unusual surface forms.
The yardangs in this image are forming in channel floor deposits. The channel itself is funneling the wind to cause the erosion.
Image information: VIS instrument. Latitude 4.5, Longitude 229.7 East (133.3 West). 19 meter/pixel resolution.
Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.
NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are
Tangential Floor in a Classroom Setting
ERIC Educational Resources Information Center
Marti, Leyla
2012-01-01
This article examines floor management in two classroom sessions: a task-oriented computer lesson and a literature lesson. Recordings made in the computer lesson show the organization of floor when a task is given to students. Temporary or "incipient" side floors (Jones and Thornborrow, 2004) emerge beside the main floor. In the literature lesson,…
Mashouf, S; Ravi, A; Morton, G; Song, W
2015-06-15
Purpose: There is strong evidence relating post-implant dosimetry for permanent seed prostate brachytherapy to local control rates. The delineation of the prostate on CT images, however, represents a challenge as it is difficult to confidently identify the prostate borders from the soft tissue surrounding it. This study aims at quantifying the sensitivity of clinically relevant dosimetric parameters to prostate contouring uncertainty. Methods: The post-implant CT images and plans for a cohort of 43 patients, who had received an I-125 permanent prostate seed implant in our centre, were exported to the MIM Symphony LDR brachytherapy treatment planning system (MIM Software Inc., Cleveland, OH). The prostate contours in post-implant CT images were expanded/contracted uniformly for margins of ±1.00mm, ±2.00mm, ±3.00mm, ±4.00mm and ±5.00mm (±0.01mm). The values for V100 and D90 were extracted from Dose Volume Histograms for each contour and compared. Results: The mean value of V100 and D90 was obtained as 92.3±8.4% and 108.4±12.3% respectively (Rx=145Gy). V100 was reduced by −3.2±1.5%, −7.2±3.0%, −12.8±4.0%, −19.0±4.8%, −25.5±5.4% for expanded contours of the prostate with margins of +1mm, +2mm, +3mm, +4mm, and +5mm, respectively, while it was increased by 1.6±1.2%, 2.4±2.4%, 2.7±3.2%, 2.9±4.2%, 2.9±5.1% for the contracted contours. D90 was reduced by −6.9±3.5%, −14.5±6.1%, −23.8±7.1%, −33.6±8.5%, −40.6±8.7% and increased by 4.1±2.6%, 6.1±5.0%, 7.2±5.7%, 8.1±7.3% and 8.1±7.3% for the same set of contours. Conclusion: Systematic expansion errors of more than 1mm may likely render a plan sub-optimal. Conversely, contraction errors may result in erroneously labeling a plan as optimal. The use of MRI images to contour the prostate should result in better delineation of the prostate, which increases the predictive value of post-implant plans. Since observers tend to overestimate the prostate volume on CT, compared with MRI, the impact of the
The National Center for Environmental Assessment (NCEA) has conducted and supported research addressing uncertainties in 2-stage clonal growth models for cancer as applied to formaldehyde. In this report, we summarized publications resulting from this research effort, discussed t...
Adjoint-Based Uncertainty Quantification with MCNP
Seifried, Jeffrey E.
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
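The adjoint-based propagation described above ultimately reduces to the first-order "sandwich rule": given a sensitivity vector S of a response R to nuclear-data parameters and the data covariance C, var(R) is approximately S^T C S. The numbers below are invented for illustration; they are merely chosen so the result lands under the ~2% level quoted in the abstract.

```python
import numpy as np

# hypothetical relative sensitivity coefficients of a response (e.g. k-eff)
# to three nuclear-data parameters, as an adjoint calculation would provide
S = np.array([0.8, -0.3, 0.1])

# hypothetical relative covariance matrix of those nuclear-data parameters
C = np.array([[0.0004, 0.0001, 0.0],
              [0.0001, 0.0009, 0.0],
              [0.0,    0.0,    0.0001]])

rel_var = S @ C @ S          # sandwich rule: first-order variance of the response
rel_unc = np.sqrt(rel_var)   # relative standard deviation of the response
```

The adjoint method's payoff is that S for all data parameters comes from a single adjoint solve, so the quadratic form above is cheap to evaluate however many parameters carry uncertainty.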
NASA Technical Reports Server (NTRS)
Eluszkiewicz, J.; Crisp, D.; Granger, R. G.; Lambert, A.; Roche, A. E.; Kumer, J. B.; Mergenthaler, J. L.
1996-01-01
The simultaneous measurements of temperature, aerosol extinction, and the radiatively active gases by several instruments onboard the Upper Atmosphere Research Satellite permit an assessment of the uncertainties in the diagnosed stratospheric heating rates and in the resulting residual circulation.
Puig, Pere; Canals, Miquel; Company, Joan B; Martín, Jacobo; Amblas, David; Lastras, Galderic; Palanques, Albert
2012-09-13
Bottom trawling is a non-selective commercial fishing technique whereby heavy nets and gear are pulled along the sea floor. The direct impact of this technique on fish populations and benthic communities has received much attention, but trawling can also modify the physical properties of seafloor sediments, water–sediment chemical exchanges and sediment fluxes. Most of the studies addressing the physical disturbances of trawl gear on the seabed have been undertaken in coastal and shelf environments, however, where the capacity of trawling to modify the seafloor morphology coexists with high-energy natural processes driving sediment erosion, transport and deposition. Here we show that on upper continental slopes, the reworking of the deep sea floor by trawling gradually modifies the shape of the submarine landscape over large spatial scales. We found that trawling-induced sediment displacement and removal from fishing grounds causes the morphology of the deep sea floor to become smoother over time, reducing its original complexity as shown by high-resolution seafloor relief maps. Our results suggest that in recent decades, following the industrialization of fishing fleets, bottom trawling has become an important driver of deep seascape evolution. Given the global dimension of this type of fishery, we anticipate that the morphology of the upper continental slope in many parts of the world’s oceans could be altered by intensive bottom trawling, producing comparable effects on the deep sea floor to those generated by agricultural ploughing on land. PMID:22951970
Flooring for Schools: Unsightly Walkways
ERIC Educational Resources Information Center
Baxter, Mark
2011-01-01
Many mattress manufacturers recommend that consumers rotate their mattresses at least twice a year to help prevent soft spots from developing and increase the product's life span. It's unfortunate that the same kind of treatment can't be applied to flooring for schools, such as carpeting, especially in hallways. Being able to flip or turn a carpet…
Sea-Floor Spreading and Transform Faults
ERIC Educational Resources Information Center
Armstrong, Ronald E.; And Others
1978-01-01
Presents the Crustal Evolution Education Project (CEEP) instructional module on Sea-Floor Spreading and Transform Faults. The module includes activities and materials required, procedures, summary questions, and extension ideas for teaching Sea-Floor Spreading. (SL)
Estimating Forest Floor Carbon Content in the United States
NASA Astrophysics Data System (ADS)
Perry, C. H.; Domke, G. M.; Wilson, B. T.; Woodall, C. W.
2013-12-01
The USDA Forest Service Forest Inventory and Analysis (FIA) program conducts an annual forest inventory which includes measurements of forest floor and soil carbon content. Samples are collected on a systematic nation-wide array of approximately 7,800 plots, each of which may represent up to 38,850 ha. Between 10 and 20 percent of these plots are measured on a recurring basis, and soil sampling includes measurements of both the forest floor and mineral soil (0-10 and 10-20 cm). In the United States, the current method of reporting C stocks to international parties relies on mathematical models of forest floor and mineral soil C. Forest type maps are combined with STATSGO soil survey data to generate soil C storage by forest type, but STATSGO has known shortcomings, particularly with respect to forest C estimation. STATSGO data are based largely on agricultural soils, so the data consistently underestimate C storage in forest floors. FIA's national-scale inventory data represent an opportunity to significantly improve our modeling and reporting capabilities because the data are directly linked to forest cover and other geospatial information. Also, the FIA survey is unique in that sampling is not predicated on land use (e.g., hardwoods versus softwoods, old-growth stands versus reverted agriculture) or soil type, so it is an equal-probability sample of all forested soils. Given these qualities, FIA's field observations should be used to evaluate these estimates, if not replace them. Here we combined forest floor measurements with other forest inventory observations to impute forest floor C storage across the United States using nonparametric k-nearest neighbor techniques; resampling methods were used to generate estimates of uncertainty. Other predictors of forest floor formation (e.g., climate, topography, and landscape position) will be used to impute these values to satellite pixels for mapping. The end result is an estimate of landscape-level forest floor C
NASA Astrophysics Data System (ADS)
Pineda Rojas, Andrea L.; Venegas, Laura E.; Mazzeo, Nicolás A.
2016-09-01
A simple urban air quality model [MODelo de Dispersión Atmosférica Urbana - Generic Reaction Set (DAUMOD-GRS)] was recently developed. One-hour peak O3 concentrations in the Metropolitan Area of Buenos Aires (MABA) during the summer estimated with the DAUMOD-GRS model have shown values lower than 20 ppb (the regional background concentration) in the urban area and levels greater than 40 ppb in its surroundings. Because of the lack of measurements outside the MABA, these relatively high modelled ozone concentrations constitute the only estimate for the area. In this work, a methodology based on Monte Carlo analysis is implemented to evaluate the uncertainty in these modelled concentrations associated with possible errors in the model input data. Results show that the larger 1-h peak O3 levels in the MABA during the summer carry the larger uncertainties (up to 47 ppb). In addition, multiple linear regression analysis is applied at selected receptors in order to identify the variables that explain most of the obtained variance. Although their relative contributions vary spatially, the uncertainty of the regional background O3 concentration dominates at all the analysed receptors (34.4-97.6%), indicating that improving its estimate would enhance the ability of the model to simulate peak O3 concentrations in the MABA.
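The Monte Carlo procedure described here (perturb the model inputs within assumed error ranges, rerun the model, and examine the spread of the predicted concentrations) can be sketched with a toy stand-in for the dispersion model. The response function and the input uncertainty ranges below are entirely hypothetical, not the actual DAUMOD-GRS formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Toy stand-in for a dispersion model: peak O3 as a regional background plus
# a photochemical production term (illustrative only, not DAUMOD-GRS).
def peak_o3(background, nox_emis, wind):
    return background + 25.0 * nox_emis / wind

# Assumed input uncertainties (hypothetical): background O3 ~ N(20, 5) ppb,
# NOx emissions with 30% lognormal error, wind speed ~ N(3, 0.5) m/s.
background = rng.normal(20.0, 5.0, N)
nox = rng.lognormal(0.0, 0.3, N)
wind = np.clip(rng.normal(3.0, 0.5, N), 0.5, None)

o3 = peak_o3(background, nox, wind)
lo, hi = np.percentile(o3, [2.5, 97.5])
print(f"mean {o3.mean():.1f} ppb, 95% interval [{lo:.1f}, {hi:.1f}] ppb")

# Rank (Spearman) correlations indicate which input drives most of the
# output variance, analogous to the regression analysis at the receptors.
for name, x in [("background", background), ("nox", nox), ("wind", wind)]:
    r = np.corrcoef(np.argsort(np.argsort(x)), np.argsort(np.argsort(o3)))[0, 1]
    print(f"{name}: rank corr {r:+.2f}")
```

In this toy setup the background term dominates by construction, mirroring the finding that the regional background O3 uncertainty explains most of the variance at the analysed receptors.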
Isukapalli, S S; Roy, A; Georgopoulos, P G
2000-10-01
Estimation of uncertainties associated with model predictions is an important component of the application of environmental and biological models. "Traditional" methods for propagating uncertainty, such as standard Monte Carlo and Latin Hypercube Sampling, however, often require performing a prohibitive number of model simulations, especially for complex, computationally intensive models. Here, a computationally efficient method for uncertainty propagation, the Stochastic Response Surface Method (SRSM) is coupled with another method, the Automatic Differentiation of FORTRAN (ADIFOR). The SRSM is based on series expansions of model inputs and outputs in terms of a set of "well-behaved" standard random variables. The ADIFOR method is used to transform the model code into one that calculates the derivatives of the model outputs with respect to inputs or transformed inputs. The calculated model outputs and the derivatives at a set of sample points are used to approximate the unknown coefficients in the series expansions of outputs. A framework for the coupling of the SRSM and ADIFOR is developed and presented here. Two case studies are presented, involving (1) a physiologically based pharmacokinetic model for perchloroethylene for humans, and (2) an atmospheric photochemical model, the Reactive Plume Model. The results obtained agree closely with those of traditional Monte Carlo and Latin hypercube sampling methods, while reducing the required number of model simulations by about two orders of magnitude. PMID:11110207
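The core SRSM step, expanding the model output in polynomials of a "well-behaved" standard random variable and estimating the coefficients from a small number of model runs, can be illustrated with a one-parameter toy model. The model function, input distribution, and collocation points below are hypothetical; a full SRSM/ADIFOR application would additionally use model derivatives at the sample points to constrain the fit:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)

# Hypothetical one-parameter model (stand-in for an expensive simulation).
def model(x):
    return np.log1p(x) * np.exp(-0.1 * x)

# Input expressed through a standard normal variable u: X = exp(mu + s*u).
mu, s = 0.5, 0.3
to_x = lambda u: np.exp(mu + s * u)

# SRSM idea: fit a low-order expansion y ~ sum_k a_k He_k(u) using only a
# few model runs at collocation points in u.
u_pts = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_pts = model(to_x(u_pts))
A = hermevander(u_pts, 3)            # probabilists' Hermite basis, order 3
coef, *_ = np.linalg.lstsq(A, y_pts, rcond=None)

# Cheap surrogate statistics vs. brute-force Monte Carlo on the true model.
u = rng.standard_normal(200_000)
y_surrogate = hermevander(u, 3) @ coef
y_true = model(to_x(u))
print(f"surrogate mean {y_surrogate.mean():.4f} vs MC mean {y_true.mean():.4f}")
```

Because every Hermite polynomial He_k(u) with k >= 1 has zero mean under the standard normal measure, the surrogate's mean is just the leading coefficient; once the expansion is fitted, output statistics cost essentially nothing, which is how SRSM cuts the number of full-model simulations by orders of magnitude.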
Design issues for floor control protocols
NASA Astrophysics Data System (ADS)
Dommel, Hans-Peter; Garcia-Luna-Aceves, Jose J.
1995-03-01
Floor control allows users of networked multimedia applications to remotely share resources like cursors, data views, video and audio channels, or entire applications without access conflicts. Floors are mutually exclusive permissions, granted dynamically to collaborating users, mitigating race conditions and guaranteeing fair and deadlock- free resource access. Although floor control is an early concept within computer-supported cooperative work, no framework exists and current floor control mechanisms are often limited to simple objects. While small-scale collaboration can be facilitated by social conventions, the importance of floors becomes evident for large-scale application sharing and teleconferencing orchestration. In this paper, the concept of a scalable session protocol is enhanced with floor control. Characteristics of collaborative environments are discussed, and session and floor control are discerned. The system's and user's requirements perspectives are discussed, including distributed storage policies, packet structure and user-interface design for floor presentation, manipulation, and triggering conditions for floor migration. Interaction stages between users, and scenarios of participant withdrawal, late joins, and establishment of subgroups are elicited with respect to floor generation, bookkeeping, and passing. An API is proposed to standardize and integrate floor control among shared applications. Finally, a concise classification for existing systems with a notion of floor control is introduced.
49 CFR 38.59 - Floor surfaces.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false Floor surfaces. 38.59 Section 38.59 Transportation Office of the Secretary of Transportation AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY SPECIFICATIONS FOR TRANSPORTATION VEHICLES Rapid Rail Vehicles and Systems § 38.59 Floor surfaces. Floor...
14 CFR 25.793 - Floor surfaces.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Floor surfaces. 25.793 Section 25.793 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS... Floor surfaces. The floor surface of all areas which are likely to become wet in service must have...
14 CFR 25.793 - Floor surfaces.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Floor surfaces. 25.793 Section 25.793 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS... Floor surfaces. The floor surface of all areas which are likely to become wet in service must have...
36 CFR 1192.59 - Floor surfaces.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Floor surfaces. 1192.59 Section 1192.59 Parks, Forests, and Public Property ARCHITECTURAL AND TRANSPORTATION BARRIERS COMPLIANCE... Rail Vehicles and Systems § 1192.59 Floor surfaces. Floor surfaces on aisles, places for standees,...
49 CFR 38.59 - Floor surfaces.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 1 2011-10-01 2011-10-01 false Floor surfaces. 38.59 Section 38.59 Transportation Office of the Secretary of Transportation AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY SPECIFICATIONS FOR TRANSPORTATION VEHICLES Rapid Rail Vehicles and Systems § 38.59 Floor surfaces. Floor...
36 CFR 1192.59 - Floor surfaces.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Floor surfaces. 1192.59 Section 1192.59 Parks, Forests, and Public Property ARCHITECTURAL AND TRANSPORTATION BARRIERS COMPLIANCE... Rail Vehicles and Systems § 1192.59 Floor surfaces. Floor surfaces on aisles, places for standees,...
Floor furnace burns to children.
Berger, L R; Kalishman, S
1983-01-01
Three children with grid-like second-degree burns of their extremities from contact with floor furnace registers prompted an examination of this thermal hazard. Average temperature of the gratings was 294 degrees F (146 degrees C), with a range of 180 degrees to 375 degrees F (82.2 degrees to 191 degrees C). All of the furnaces tested were positioned at the entrance to bedrooms and had so little clearance that it was impossible to walk around them without contact with their surface. Infants and toddlers are at particular risk: 1 or 2 seconds of exposure would be expected to produce a serious burn. Suggestions for preventing burns from floor furnaces include turning them off when young children are at home; installing barrier gates to prevent children from coming in contact with the registers; and developing a surface coating or replacement grate with less hazardous thermal properties. PMID:6848984
Post partum pelvic floor changes.
Fonti, Ylenia; Giordano, Rosalba; Cacciatore, Alessandra; Romano, Mattea; La Rosa, Beatrice
2009-10-01
Pelvic-perineal dysfunctions are the most common diseases in women after pregnancy. Urinary incontinence and genital prolapse, often associated, are the most important consequences of childbirth and are determined by specific alterations in the neurological and musculo-fascial structures of pelvic support. Causation is difficult to prove because symptoms occur remote from delivery. Furthermore, it is unclear whether the changes are secondary to the method of childbirth or to the pregnancy itself. This controversy fuels the debate about whether or not women should be offered the choice of elective caesarean delivery to avoid the development of subsequent pelvic floor dysfunction. It has been demonstrated, however, that pregnancy itself, by means of mechanical changes in pelvic statics and changes in hormones, can be a significant risk factor for these diseases; the first childbirth in particular is decisive for the stability of the pelvic floor. During pregnancy, the progressive increase in the volume of the uterus subjects the perineal structures to a major overload. During delivery, the presenting part passes through the urogenital hiatus, exerting growing pressure on the tissues and stretching the pelvic floor, with possible muscular, connective-tissue and/or nerve damage. In this article we aim to describe genitourinary post-partum changes, with particular attention to the impact of pregnancy or childbirth on these changes. PMID:22439048
Pelvic floor ultrasonography: an update.
Shek, K L; Dietz, H-P
2013-02-01
Female pelvic floor dysfunction encompasses a number of highly prevalent clinical conditions such as female pelvic organ prolapse, urinary and fecal incontinence, and sexual dysfunction. The etiology and pathophysiology of those conditions are, however, not well understood. Recent technological advances have seen a surge in the use of imaging, both in research and in clinical practice. Among the available techniques, such as sonography, X-ray, computed tomography and magnetic resonance imaging, ultrasound is superior for pelvic floor imaging, especially in the form of perineal or translabial imaging. The technique is safe, with no radiation, simple, cheap and easily accessible, and it provides high spatial and temporal resolution. Translabial or perineal ultrasound is useful in determining residual urinary volume, detrusor wall thickness and bladder neck mobility, and in assessing pelvic organ prolapse as well as levator function and anatomy. It is at least equivalent to other imaging techniques in diagnosing such diverse conditions as urethral diverticula, rectal intussusception and avulsion of the puborectalis muscle. Ultrasound is the only imaging method capable of visualizing modern slings and mesh implants and may help select patients for implant surgery. Delivery-related levator injury seems to be the most important etiological factor for pelvic organ prolapse and recurrence after prolapse surgery, and it is most conveniently diagnosed by pelvic floor ultrasound. This review gives an overview of the methodology. Its main current uses in clinical assessment and research are also discussed. PMID:23412016
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Scaling on a limestone flooring
NASA Astrophysics Data System (ADS)
Carmona-Quiroga, P. M.; Blanco-Varela, M. T.; Martínez-Ramírez, S.
2012-04-01
Natural stone can be used on nearly every surface, inside and outside buildings, but decay is more commonly reported for stone exposed to aggressive outdoor conditions. This study, by contrast, is an example of limestone weathering of uncertain origin in the interior of a residential building. The stone, used as flooring, started to exhibit loss of material in the form of scaling. This damage was observed before the building, located in the south of Spain (Málaga), was inhabited. Moreover, according to the supplier, the limestone satisfies the European standards for floorings UNE-EN 1341:2002, UNE-EN 1343:2003 and UNE-EN 12058:2004. Under these circumstances, the main objective of this study was to assess the causes of this phenomenon. To this end, the composition of the mortar was determined and the stone was characterized from a mineralogical and petrological point of view. The stone, a fossiliferous limestone from Egypt with natural fissure lines, is mainly composed of calcite, with quartz, kaolinite and apatite as minor phases. Moreover, samples of weathered tiles, taken directly from the building, and of unweathered limestone tiles were examined with different spectroscopic and microscopic techniques (FTIR, micro-Raman, SEM-EDX, etc.), and a new mineralogical phase, trona, was identified at the scaled areas, which are connected with the natural veins of the stone. In fact, BSE mapping detected the presence of sodium in these veins. This soluble sodium carbonate would have been dissolved in the natural waters from which the limestone precipitated, migrated with the ascending capillary humidity, and crystallized near the surface of the stone, starting the scaling phenomenon, which in historic masonry could be very damaging. The weathering of the limestone is therefore related to the hygroscopic behaviour of this salt, and not to the construction methods used. This makes the limestone unsuitable for use in restoration
NASA Astrophysics Data System (ADS)
Gibson, G. A.; Spitz, Y. H.
2011-11-01
We use a series of Monte Carlo experiments to explore simultaneously the sensitivity of the BEST marine ecosystem model to environmental forcing, initial conditions, and biological parameterizations. Twenty model output variables were examined for sensitivity. The true sensitivity of biological and environmental parameters becomes apparent only when each parameter is allowed to vary within its realistic range. Many biological parameters were important only to their corresponding variable, but several biological parameters, e.g., microzooplankton grazing and small phytoplankton doubling rate, were consistently very important to several output variables. Assuming realistic biological and environmental variability, the standard deviation about simulated mean mesozooplankton biomass ranged from 1 to 14 mg C m-3 during the year. Annual primary productivity was not strongly correlated with temperature but was positively correlated with initial nitrate and light. Secondary productivity was positively correlated with primary productivity and negatively correlated with spring bloom timing. Mesozooplankton productivity was not correlated with water temperature, but a shift towards a system in which smaller zooplankton undertake a greater proportion of the secondary production as the water temperature increases appears likely. This approach to incorporating environmental variability within a sensitivity analysis could be extended to any ecosystem model to gain confidence in climate-driven ecosystem predictions.
Uncertainty, joint uncertainty, and the quantum uncertainty principle
NASA Astrophysics Data System (ADS)
Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad
2016-03-01
Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found.
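A concrete instance of the entropic relations this framework generalizes is the Maassen-Uffink bound, which for two mutually unbiased qubit measurements reads H(Z) + H(X) >= 1 bit. A quick numerical check (the state below is an arbitrary choice):

```python
import numpy as np

# Qubit state |psi> (illustrative), measured in the computational (Z) basis
# and in the Hadamard (X) basis, which are mutually unbiased.
psi = np.array([np.cos(0.3), np.sin(0.3) * np.exp(0.4j)])

def shannon(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

# Outcome distributions for the two measurements (rows are the basis bras).
z_basis = np.eye(2)
x_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
pZ = np.abs(z_basis.conj() @ psi) ** 2
pX = np.abs(x_basis.conj() @ psi) ** 2

# Maassen-Uffink: H(Z) + H(X) >= -2 log2 max_{i,j} |<z_i|x_j>| = 1 for
# mutually unbiased bases.
total = shannon(pZ) + shannon(pX)
print(f"H(Z) + H(X) = {total:.3f} >= 1")
```

Any qubit state satisfies the bound; it is saturated exactly when the state is an eigenstate of one of the two measured observables (then one entropy is 0 and the other is 1).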
Pelvic floor ultrasound: a review.
Dietz, Hans Peter
2010-04-01
Imaging currently plays a limited role in the investigation of pelvic floor disorders. It is obvious that magnetic resonance imaging has limitations in urogynecology and female urology at present due to cost and access limitations and due to the fact that it is generally a static, not a dynamic, method. However, none of those limitations apply to sonography, a diagnostic method that is very much part of general practice in obstetrics and gynecology. Translabial or transperineal ultrasound is helpful in determining residual urine; detrusor wall thickness; bladder neck mobility; urethral integrity; anterior, central, and posterior compartment prolapse; and levator anatomy and function. It is at least equivalent to other imaging methods in visualizing such diverse conditions as urethral diverticula, rectal intussusception, mesh dislodgment, and avulsion of the puborectalis muscle. Ultrasound is the only imaging method able to visualize modern mesh slings and implants and may predict who actually needs such implants. Delivery-related levator trauma is the most important known etiologic factor for pelvic organ prolapse and not difficult to diagnose on 3-/4-dimensional and even on 2-dimensional pelvic floor ultrasound. It is likely that this will be an important driver behind the universal use of this technology. This review gives an overview of the method and its main current uses in clinical assessment and research. PMID:20350640
Crash Tests of Protective Airplane Floors
NASA Technical Reports Server (NTRS)
Carden, H. D.
1986-01-01
Energy-absorbing floors reduce structural buckling and impact forces on occupants. 56-page report discusses crash tests of energy-absorbing aircraft floors. Describes test facility and procedures; airplanes, structural modifications, and seats; crash dynamics; floor and seat behavior; and responses of anthropometric dummies seated in airplanes. Also presents plots of accelerations, photographs and diagrams of test facility, and photographs and drawings of airplanes before, during, and after testing.
NASA Astrophysics Data System (ADS)
Lomax, A. J.
2008-02-01
Simple tools for studying the effects of inter-fraction and inter-field motions on intensity modulated proton therapy (IMPT) plans have been developed, and have been applied to both 3D and distal edge tracking (DET) IMPT plans. For the inter-fraction motion, we have investigated the effects of misaligned density heterogeneities, whereas for the inter-field motion analysis, the effects of field misalignment on the plans have been assessed. Inter-fraction motion problems have been analysed using density differentiated error (DDE) distributions, which specifically show the additional problems resulting from misaligned density heterogeneities for proton plans. Likewise, for inter-field motion, we present methods for calculating motion differentiated error (MDE) distributions. DDE and MDE analysis of all plans demonstrates that the 3D approach is generally more robust to both inter-fraction and inter-field motions than the DET approach, but that strong in-field dose gradients can also adversely affect a plan's robustness. An important additional conclusion is that, for certain IMPT plans, even inter-fraction errors cannot necessarily be compensated for by the use of simple PTV margins, implying that more sophisticated tools need to be developed for uncertainty management and assessment for IMPT treatments at the treatment planning level.
The floor plate: multiple cells, multiple signals.
Placzek, Marysia; Briscoe, James
2005-03-01
One of the key organizers in the CNS is the floor plate - a group of cells that is responsible for instructing neural cells to acquire distinctive fates, and that has an important role in establishing the elaborate neuronal networks that underlie the function of the brain and spinal cord. In recent years, considerable controversy has arisen over the mechanism by which floor plate cells form. Here, we describe recent evidence that indicates that discrete populations of floor plate cells, with characteristic molecular properties, form in different regions of the neuraxis, and we discuss data that imply that the mode of floor plate induction varies along the anteroposterior axis. PMID:15738958
Side Elevation; 1/4 Plans of Floor Framing, Floor Planking, Roof ...
Side Elevation; 1/4 Plans of Floor Framing, Floor Planking, Roof Framing and Roof; Longitudinal Section, Cross Section, End Elevation - Eames Covered Bridge, Spanning Henderson Creek, Oquawka, Henderson County, IL
17. 4th floor roof, view south, 4th and 5th floor ...
17. 4th floor roof, view south, 4th and 5th floor setback to left and atrium structure to right - Sheffield Farms Milk Plant, 1075 Webster Avenue (southwest corner of 166th Street), Bronx, Bronx County, NY
Eastern Floor of Holden Crater
NASA Technical Reports Server (NTRS)
2002-01-01
(Released 15 April 2002) The Science: Today's THEMIS image covers territory on the eastern floor of Holden Crater, which is located in a region of the southern hemisphere called Noachis Terra. Holden Crater is 154 km in diameter and named after the American astronomer Edward Holden (1846-1914). This image shows a mottled surface with channels, hills, ridges and impact craters. The largest crater seen in this image is 5 km in diameter. This crater has gullies and what appear to be horizontal layers in its walls. The Story: With its beautiful symmetry and gullies radially streaming down to the floor, the dominant crater in this image is an impressive focal point. Yet, it is really just a small crater within a much larger one named Holden Crater. Take a look at the context image to the right to see just how much bigger Holden Crater is. Then come back to the image strip that shows the mottled surface of Holden Crater's eastern floor in greater detail, and count how many hills, ridges, channels, and small impact craters can be seen. No perfectly smooth terrain abounds there, that's for sure. The textured terrain of Holden Crater has been particularly intriguing ever since the Mars Orbital Camera on the Mars Global Surveyor spacecraft found evidence of sedimentary rock layers there that might have formed in lakes or shallow seas in Mars' ancient past. This finding suggests that Mars may have been more like Earth long ago, with water on its surface. Holden Crater might even have held a lake long ago. No one knows for sure, but it's an exciting possibility. Why? If water was once on the surface of Mars long enough to form sedimentary materials, maybe it was there long enough for microbial life to have developed too. (Life as we know it just isn't possible without the long-term presence of liquid water.) The question of life on the red planet is certainly tantalizing, but scientists will need to engage in a huge amount of further investigation to begin to know the answer. That
Credible Computations: Standard and Uncertainty
NASA Technical Reports Server (NTRS)
Mehta, Unmeel B.; VanDalsem, William (Technical Monitor)
1995-01-01
The discipline of computational fluid dynamics (CFD) is at a crossroad. Most of the significant advances related to computational methods have taken place. The emphasis is now shifting from methods to results. Significant efforts are being made to apply CFD to solve design problems. The value of CFD results in design depends on the credibility of the computed results for the intended use. The process of establishing credibility requires a standard so that there is consistency and uniformity in this process and in the interpretation of its outcome. The key element for establishing credibility is the quantification of uncertainty. This paper presents salient features of a proposed standard and a procedure for determining the uncertainty. A customer of CFD products - computer codes and computed results - expects the following: a computer code, in terms of its logic, numerics, and fluid dynamics, and the results generated by this code are in compliance with specified requirements. This expectation is fulfilled by verification and validation of these requirements. The verification process assesses whether the problem is solved correctly, and the validation process determines whether the right problem is solved. Standards for these processes are recommended. There is always some uncertainty, even if one uses validated models and verified computed results. The value of this uncertainty is important in the design process. This value is obtained by conducting a sensitivity-uncertainty analysis. Sensitivity analysis is generally defined as the procedure for determining the sensitivities of output parameters to input parameters. This analysis is a necessary step in the uncertainty analysis, and its results highlight which computed and integrated quantities need to be determined accurately and which do not require such attention. Uncertainty analysis is generally defined as the analysis of the effect of the uncertainties
Neutrino floor at ultralow threshold
NASA Astrophysics Data System (ADS)
Strigari, Louis E.
2016-05-01
By lowering their energy threshold, direct dark matter searches can reach the neutrino floor with experimental technology that is now in development. The 7Be flux can be detected with ˜10 eV nuclear recoil energy threshold and 50 kg/yr exposure. The pep flux can be detected with ˜3 ton/yr exposure, and the first detection of the CNO flux is possible with similar exposure. The pp flux can be detected with a threshold of ˜eV and only ˜kg/yr exposure. These would be the first pure neutral-current measurements of the low-energy solar neutrino flux. Measuring this flux is important for low-mass dark matter searches and for understanding the solar interior.
16. THIRD FLOOR BLDG. 28A, DETAIL CUTOUT IN FLOOR FOR ...
16. THIRD FLOOR BLDG. 28A, DETAIL CUTOUT IN FLOOR FOR WOOD BLOCK FLOORING LOOKING EAST. - Fafnir Bearing Plant, Bounded on North side by Myrtle Street, on South side by Orange Street, on East side by Booth Street & on West side by Grove Street, New Britain, Hartford County, CT
9 CFR 91.26 - Concrete flooring.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 1 2014-01-01 2014-01-01 false Concrete flooring. 91.26 Section 91.26... LIVESTOCK FOR EXPORTATION Inspection of Vessels and Accommodations § 91.26 Concrete flooring. (a) Pens aboard an ocean vessel shall have a 3 inch concrete pavement, proportioned and mixed to give 2000...
9 CFR 91.26 - Concrete flooring.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 1 2012-01-01 2012-01-01 false Concrete flooring. 91.26 Section 91.26... LIVESTOCK FOR EXPORTATION Inspection of Vessels and Accommodations § 91.26 Concrete flooring. (a) Pens aboard an ocean vessel shall have a 3 inch concrete pavement, proportioned and mixed to give 2000...
9 CFR 91.26 - Concrete flooring.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 1 2013-01-01 2013-01-01 false Concrete flooring. 91.26 Section 91.26... LIVESTOCK FOR EXPORTATION Inspection of Vessels and Accommodations § 91.26 Concrete flooring. (a) Pens aboard an ocean vessel shall have a 3 inch concrete pavement, proportioned and mixed to give 2000...
9 CFR 91.26 - Concrete flooring.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Concrete flooring. 91.26 Section 91.26... LIVESTOCK FOR EXPORTATION Inspection of Vessels and Accommodations § 91.26 Concrete flooring. (a) Pens aboard an ocean vessel shall have a 3 inch concrete pavement, proportioned and mixed to give 2000...
9 CFR 91.26 - Concrete flooring.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 1 2011-01-01 2011-01-01 false Concrete flooring. 91.26 Section 91.26... LIVESTOCK FOR EXPORTATION Inspection of Vessels and Accommodations § 91.26 Concrete flooring. (a) Pens aboard an ocean vessel shall have a 3 inch concrete pavement, proportioned and mixed to give 2000...
Floor Time: Rethinking Play in the Classroom
ERIC Educational Resources Information Center
Kordt-Thomas, Chad; Lee, Ilene M.
2006-01-01
Floor time is a play-based, one-to-one approach to helping children develop relationships, language, and thinking. Developed by child psychiatrist Stanley Greenspan, floor time is helpful not only for children with special needs but also for children who are developing typically. It can be used by teachers, caregivers, and families in brief…
Learning4Life on the Exhibit Floor
ERIC Educational Resources Information Center
Sullivan, Margaret
2009-01-01
The exhibit floor is a wealth of knowledge. One can read, view, and listen to information presented in many formats. Somewhere on the exhibit floor there are experts on every topic, ready and waiting for one's questions. But like any research topic, frequently a structured search is required to find the best answers. This article discusses how to…
Lamboni, Matieyendou; Sanaa, Moez; Tenenhaus-Aziza, Fanny
2014-04-01
Microbiological food safety is an important economic and health issue in the context of globalization and presents food business operators with new challenges in providing safe foods. The hazard analysis and critical control point approach involves identifying the main steps in food processing and the physical and chemical parameters that have an impact on the safety of foods. In the risk-based approach, as defined in the Codex Alimentarius, controlling these parameters in such a way that the final products meet a food safety objective (FSO), fixed by the competent authorities, is a big challenge and of great interest to food business operators. Process risk models, issued from the quantitative microbiological risk assessment framework, provide useful tools in this respect. We propose a methodology, called multivariate factor mapping (MFM), for establishing a link between process parameters and compliance with an FSO. For a stochastic and dynamic process risk model of Listeria monocytogenes in soft cheese made from pasteurized milk with many uncertain inputs, multivariate sensitivity analysis and MFM are combined to (i) identify the critical control points (CCPs) for L. monocytogenes throughout the food chain and (ii) compute the critical limits of the most influential process parameters, located at the CCPs, with regard to the specific process implemented in the model. Due to certain forms of interaction among parameters, the results show some new possibilities for the management of microbiological hazards when an FSO is specified. PMID:24168722
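The factor-mapping idea can be illustrated with a Monte Carlo filtering sketch: sample uncertain process parameters, run a model, and split the samples by whether the output meets the FSO. The log-linear growth model and all numbers below are illustrative assumptions, not the paper's L. monocytogenes process model.

```python
import random

# Hedged sketch of factor mapping via Monte Carlo filtering: classify
# sampled process conditions as FSO-compliant or violating, then compare
# the input distributions of the two groups to locate candidate critical
# limits. The toy growth model and all parameter ranges are assumptions.

random.seed(3)
FSO = 2.0  # maximum acceptable contamination at consumption, log10 CFU/g

def final_log_count(initial_log, storage_temp_c, storage_days):
    # Toy model: growth rate rises linearly with temperature above 4 deg C.
    rate = max(0.0, 0.05 * (storage_temp_c - 4.0))  # log10 CFU/g per day
    return initial_log + rate * storage_days

compliant_temps, violating_temps = [], []
for _ in range(20000):
    x0 = random.uniform(-1.0, 1.0)    # initial contamination, log10 CFU/g
    temp = random.uniform(2.0, 12.0)  # storage temperature, deg C
    days = random.uniform(5.0, 15.0)  # storage duration, days
    y = final_log_count(x0, temp, days)
    (compliant_temps if y <= FSO else violating_temps).append(temp)

# Violations concentrate at higher temperatures; the gap between the two
# group means points toward a temperature critical limit.
mean_ok = sum(compliant_temps) / len(compliant_temps)
mean_bad = sum(violating_temps) / len(violating_temps)
```

In this sketch the violating group's mean storage temperature is clearly higher than the compliant group's, which is the kind of separation MFM exploits to place critical limits at the CCPs.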
NASA Technical Reports Server (NTRS)
Liu, Tianshu; Bencic, T.; Sullivan, J. P.
1999-01-01
This article reviews new advances and applications of pressure sensitive paints in aerodynamic testing. Emphasis is placed on important technical aspects of pressure sensitive paint including instrumentation, data processing, and uncertainty analysis.
Charoy, Camille; Arbeille, Elise; Thoinet, Karine; Castellani, Valérie
2014-01-01
During development, progenitors and post-mitotic neurons receive signals from adjacent territories that regulate their fate. The floor-plate is a group of glial cells lining the ependymal canal at a ventral position. The floor-plate expresses key morphogens contributing to the patterning of cell lineages in the spinal cord. At later developmental stages, the floor-plate regulates the navigation of axons in the spinal cord, acting as a barrier to prevent the crossing of ipsilateral axons and controlling midline crossing by commissural axons1. These functions are achieved through the secretion of various guidance cues. Some of these cues act as attractants and repellents for the growing axons, while others regulate guidance receptors and downstream signaling to modulate the sensitivity of the axons to the local guidance cues2,3. Here we describe a method for investigating the properties of floor-plate-derived signals in a variety of developmental contexts, based on the production of Floor-Plate conditioned medium (FPcm)4-6. We then exemplify the use of this FPcm in the context of axon guidance. First, the spinal cord is isolated from a mouse embryo at E12.5 and the floor-plate is dissected out and cultivated in a plasma-thrombin matrix (Figure 1). Second, two days later, commissural tissues are dissected out from E12.5 embryos, triturated, and exposed to the FPcm. Third, the tissues are processed for Western blot analysis of commissural markers. PMID:24561889
Ultrasonic Inspection Of The LTAB Floor
Thomas, G
2001-07-31
The National Ignition Facility's (NIF) floor is damaged by transporter operations. Two basic operations, rotating the wheels in place and traversing the floor numerous times, can cause failure in the grout layer. The floor is composed of a top wear surface (Stonhard) and an osmotic grout layer on top of concrete (Fig. 1). An ultrasonic technique was implemented to assess the condition of the floor as part of a study to determine the damage mechanisms. The study considered damage scenarios and ways to avoid the damage. A possible solution is to install thin steel plates where the transporter traverses the floor. These tests were conducted with a fully loaded transporter that applies up to 1300 psi loads to the floor. A contact ultrasonic technique evaluated the condition of the grout layer in NIF's floor. Figure 1 displays the configuration of the ultrasonic transducer on the floor. We inspected the floor after wheel rotation damage and after wheel traversal damage. Figures 2a and 2b are photographs of the portable ultrasonic system and data acquisition. We acquired ultrasonic signals in a known pristine area and a damaged area to calibrate the inspection. Figure 3 is a plot of the typical ultrasonic response from an undamaged area (black) overlapped with a signal (red) from a damaged area. The damaged-area data were acquired at a location next to a hole in the floor that was caused by the transporter. Five-megahertz pulses are propagated from the transducer through a Plexiglas buffer rod into the floor. The ultrasonic pulse reflects from each discontinuity in the floor: the top surface, the Stonhard-to-grout interface, and the grout-to-concrete interface. We expect to see reflections from each of these interfaces in an undamaged floor. If the grout layer pulverizes, then the high-frequency signal cannot traverse the layer, and the grout-to-concrete interface signal will decrease or vanish. The more damage to the grout the more the
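The calibration logic described above, comparing echoes from a damaged area against a known pristine area, can be sketched as a gated-energy check: if the grout-to-concrete interface echo drops well below the pristine reference, the grout is flagged as damaged. The waveforms, gate positions, and threshold below are synthetic illustrations, not NIF data.

```python
# Hedged sketch of the inspection logic: a pulverized grout layer
# attenuates the 5 MHz pulse, so the grout-to-concrete interface echo
# shrinks or vanishes relative to a pristine-area calibration signal.
# A-scans, gate, and threshold here are synthetic illustrations.

def gate_energy(signal, start, end):
    # Energy of the A-scan within the time gate containing the echo.
    return sum(x * x for x in signal[start:end])

def is_damaged(signal, pristine_signal, gate, threshold=0.5):
    """Flag damage when gated echo energy falls below a fraction of pristine."""
    e = gate_energy(signal, *gate)
    e_ref = gate_energy(pristine_signal, *gate)
    return e < threshold * e_ref

# Synthetic A-scans: the pristine trace has an interface echo in samples
# 40-60; the damaged trace has that echo strongly attenuated.
pristine = [0.0] * 100
for i in range(40, 60):
    pristine[i] = 1.0
damaged = [0.1 * x for x in pristine]

flag_ok = is_damaged(pristine, pristine, gate=(40, 60))   # undamaged: False
flag_bad = is_damaged(damaged, pristine, gate=(40, 60))   # damaged: True
```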
41. Ground level photograph of two floors of skeleton complete ...
41. Ground level photograph of two floors of skeleton complete with 3rd and 4th floors being started,upper floors of county bldg visible - Chicago City Hall, 121 North LaSalle Street, Chicago, Cook County, IL
Typical Newel Post, First Floor Newel Post, Typical Baluster, Typical ...
Typical Newel Post, First Floor Newel Post, Typical Baluster, Typical Nosing, First Floor Stringer Profile, Second Floor Stringer Profile - National Home for Disabled Volunteer Soldiers - Battle Mountain Sanitarium, Treasurer's Quarters, 500 North Fifth Street, Hot Springs, Fall River County, SD
Uncertainty of temperature measurement with thermal cameras
NASA Astrophysics Data System (ADS)
Chrzanowski, Krzysztof; Matyszkiel, Robert; Fischer, Joachim; Barela, Jaroslaw
2001-06-01
All main international metrological organizations propose a parameter called uncertainty as a measure of the accuracy of measurements. A mathematical model that enables the calculation of the uncertainty of temperature measurement with thermal cameras is presented. The standard uncertainty or the expanded uncertainty of the temperature measurement of the tested object can be calculated when the bounds within which the real object effective emissivity εr, the real effective background temperature Tba(r), and the real effective atmospheric transmittance τa(r) are located can be estimated, and when the intrinsic uncertainty of the thermal camera and the relative spectral sensitivity of the thermal camera are known.
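The propagation step can be sketched with a Monte Carlo draw over the three uncertainty sources the abstract names. The band-integrated fourth-power (Stefan-Boltzmann-like) signal model and all bounds below are illustrative assumptions, not the paper's radiometric model.

```python
import random

# Hedged sketch: Monte Carlo propagation of bounds on emissivity,
# background temperature, and atmospheric transmittance through a
# simplified fourth-power radiometric model to get an expanded
# uncertainty of the measured object temperature. All values assumed.

def apparent_signal(t_obj, eps, t_bg, tau):
    # Signal at the camera: attenuated object emission + reflected background.
    return tau * (eps * t_obj**4 + (1 - eps) * t_bg**4)

def invert_temperature(signal, eps, t_bg, tau):
    # Invert the model for object temperature given assumed parameters.
    return ((signal / tau - (1 - eps) * t_bg**4) / eps) ** 0.25

random.seed(0)
true_t, eps0, tbg0, tau0 = 320.0, 0.95, 293.0, 0.98
s = apparent_signal(true_t, eps0, tbg0, tau0)

# Bounds within which the real effective parameters are assumed to lie.
samples = []
for _ in range(20000):
    eps = random.uniform(0.90, 1.00)
    tbg = random.uniform(288.0, 298.0)
    tau = random.uniform(0.96, 1.00)
    samples.append(invert_temperature(s, eps, tbg, tau))

mean = sum(samples) / len(samples)
std = (sum((x - mean) ** 2 for x in samples) / (len(samples) - 1)) ** 0.5
expanded = 2 * std  # expanded uncertainty with coverage factor k = 2
```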
Analysis of roof and pillar failure associated with weak floor at a limestone mine
Murphy, Michael M.; Ellenberger, John L.; Esterhuizen, Gabriel S.; Miller, Tim
2016-01-01
A limestone mine in Ohio has had instability problems that have led to massive roof falls extending to the surface. This study focuses on the role that weak, moisture-sensitive floor plays in the instability issues. Previous NIOSH research related to this subject did not include analysis of weak floor or weak bands and recommended that when such issues arise they should be investigated further using a more advanced analysis. Therefore, to further investigate the large-scale instability observed at the Ohio mine, FLAC3D numerical models were employed to demonstrate the effect that a weak floor has on roof and pillar stability. This case study provides important information to limestone mine operators regarding the potential for weak floor to cause roof collapse, pillar failure, and subsequent subsidence of the ground surface. PMID:27088041
Surgical treatment of orbital floor fractures.
Rankow, R M; Mignogna, F V
1975-01-01
Ninety patients with orbital floor fractures were treated by the Otolaryngology Service of the Columbia-Presbyterian Medical Center. Of these 90 patients, 58 were classified as coexisting and 32 as isolated. All fractures with clinical symptoms and demonstrable x-ray evidence should be explored. Despite negative findings by routine techniques, laminography may confirm fractures in all clinically suspicious cases. In this series, 100% of the patients explored had definitive fractures. A direct infraorbital approach adequately exposes the floor of the orbit. An effective and cosmetic subtarsal incision was utilized. Implants were employed when the floor could not be anatomically reapproximated or the periorbita was destroyed. PMID:1119982
The Relationship of Cultural Similarity, Communication Effectiveness and Uncertainty Reduction.
ERIC Educational Resources Information Center
Koester, Jolene; Olebe, Margaret
To investigate the relationship of cultural similarity/dissimilarity, communication effectiveness, and communication variables associated with uncertainty reduction theory, a study examined two groups of students--a multinational group living on an "international floor" in a dormitory at a state university and an unrelated group of U.S. students…
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
NASA Technical Reports Server (NTRS)
Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results, and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
NASA Technical Reports Server (NTRS)
Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty are rendered obsolete since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
Uncertainties in climate stabilization
Wigley, T. M.; Clarke, Leon E.; Edmonds, James A.; Jacoby, H. D.; Paltsev, S.; Pitcher, Hugh M.; Reilly, J. M.; Richels, Richard G.; Sarofim, M. C.; Smith, Steven J.
2009-11-01
We explore the atmospheric composition, temperature and sea level implications of new reference and cost-optimized stabilization emissions scenarios produced using three different Integrated Assessment (IA) models for U.S. Climate Change Science Program (CCSP) Synthesis and Assessment Product 2.1a. We also consider an extension of one of these sets of scenarios out to 2300. Stabilization is defined in terms of radiative forcing targets for the sum of gases potentially controlled under the Kyoto Protocol. For the most stringent stabilization case (“Level 1” with CO2 concentration stabilizing at about 450 ppm), peak CO2 emissions occur close to today, implying a need for immediate CO2 emissions abatement if we wish to stabilize at this level. In the extended reference case, CO2 stabilizes at 1000 ppm in 2200 – but even to achieve this target requires large and rapid CO2 emissions reductions over the 22nd century. Future temperature changes for the Level 1 stabilization case show considerable uncertainty even when a common set of climate model parameters is used (a result of different assumptions for non-Kyoto gases). Uncertainties are about a factor of three when climate sensitivity uncertainties are accounted for. We estimate the probability that warming from pre-industrial times will be less than 2°C to be about 50%. For one of the IA models, warming in the Level 1 case is greater out to 2050 than in the reference case, due to the effect of decreasing SO2 emissions that occur as a side effect of the policy-driven reduction in CO2 emissions. Sea level rise uncertainties for the Level 1 case are very large, with increases ranging from 12 to 100 cm over 2000 to 2300.
The maintenance of uncertainty
NASA Astrophysics Data System (ADS)
Smith, L. A.
Introduction Preliminaries State-space dynamics Linearized dynamics of infinitesimal uncertainties Instantaneous infinitesimal dynamics Finite-time evolution of infinitesimal uncertainties Lyapunov exponents and predictability The Baker's apprentice map Infinitesimals and predictability Dimensions The Grassberger-Procaccia algorithm Towards a better estimate from Takens' estimators Space-time-separation diagrams Intrinsic limits to the analysis of geometry Takens' theorem The method of delays Noise Prediction, prophecy, and pontification Introduction Simulations, models and physics Ground rules Data-based models: dynamic reconstructions Analogue prediction Local prediction Global prediction Accountable forecasts of chaotic systems Evaluating ensemble forecasts The annulus Prophecies Aids for more reliable nonlinear analysis Significant results: surrogate data, synthetic data and self-deception Surrogate data and the bootstrap Surrogate predictors: Is my model any good? Hints for the evaluation of new techniques Avoiding simple straw men Feasibility tests for the identification of chaos On detecting "tiny" data sets Building models consistent with the observations Cost functions ι-shadowing: Is my model any good? (reprise) Casting infinitely long shadows (out-of-sample) Distinguishing model error and system sensitivity Forecast error and model sensitivity Accountability Residual predictability Deterministic or stochastic dynamics? Using ensembles to distinguish the expectation from the expected Numerical Weather Prediction Probabilistic prediction with a deterministic model The analysis Constructing and interpreting ensembles The outlook(s) for today Conclusion Summary
Generation of airborne listeria from floor drains
Technology Transfer Automated Retrieval System (TEKTRAN)
Listeria monocytogenes can colonize floor drains in poultry processing and further processing facilities, remaining even after cleaning and disinfection. Therefore, during wash down, workers exercise caution to prevent escape and transfer of drain microflora to food contact surfaces. The objective ...
Impact evaluation of composite floor sections
NASA Technical Reports Server (NTRS)
Boitnott, Richard L.; Fasanella, Edwin L.
1989-01-01
Graphite-epoxy floor sections representative of aircraft fuselage construction were statically and dynamically tested to evaluate their response to crash loadings. These floor sections were fabricated using a frame-stringer design typical of present aluminum aircraft without features to enhance crashworthiness. The floor sections were tested as part of a systematic research program developed to study the impact response of composite components of increasing complexity. The ultimate goal of the research program is to develop crashworthy design features for future composite aircraft. Initially, individual frames of six-foot diameter were tested both statically and dynamically. The frames were then used to construct built-up floor sections for dynamic tests at impact velocities of approximately 20 feet/sec to simulate survivable crash velocities. In addition, static tests were conducted to gain a better understanding of the failure mechanisms seen in the dynamic tests.
Magnetic Resonance Imaging (MRI): Dynamic Pelvic Floor
Dynamic pelvic floor MRI uses a powerful magnetic field, radio frequency pulses and a computer to produce detailed pictures of the pelvic floor and of organs and soft tissues.
Pelvic floor muscle rehabilitation using biofeedback.
Newman, Diane K
2014-01-01
Pelvic floor muscle exercises have been recommended for urinary incontinence since first described by obstetrician gynecologist Dr. Arnold Kegel more than six decades ago. These exercises are performed to strengthen pelvic floor muscles, provide urethral support to prevent urine leakage, and suppress urgency. In clinical urology practice, expert clinicians also teach patients how to relax the muscle to improve bladder emptying and relieve pelvic pain caused by muscle spasm. When treating lower urinary tract symptoms, an exercise training program combined with biofeedback therapy has been recommended as first-line treatment. This article provides clinical application of pelvic floor muscle rehabilitation using biofeedback as a technique to enhance pelvic floor muscle training. PMID:25233622
Gureghian, A.B.; Wu, Y.T.; Sagar, B.; Codell, R.A.
1992-12-01
Exact analytical solutions based on the Laplace transforms are derived for describing the one-dimensional space-time-dependent, advective transport of a decaying species in a layered, saturated rock system intersected by a planar fracture of varying aperture. These solutions, which account for advection in fracture, molecular diffusion into the rock matrix, adsorption in both fracture and matrix, and radioactive decay, predict the concentrations in both fracture and rock matrix and the cumulative mass in the fracture. The solute migration domain in both fracture and rock is assumed to be semi-infinite with non-zero initial conditions. The concentration of each nuclide at the source is allowed to decay either continuously or according to some periodical fluctuations where both are subjected to either a step or band release mode. Two numerical examples related to the transport of Np-237 and Cm-245 in a five-layered system of fractured rock were used to verify these solutions with several well established evaluation methods of Laplace inversion integrals in the real and complex domain. In addition, with respect to the model parameters, a comparison of the analytically derived local sensitivities for the concentration and cumulative mass of Np-237 in the fracture with the ones obtained through a finite-difference method of approximation is also reported.
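The abstract verifies its Laplace-domain solutions against established numerical inversion methods. One standard real-domain method is the Gaver-Stehfest algorithm, sketched below on a known transform pair (F(s) = 1/(s+a), whose inverse is e^(-at)) rather than on the paper's transport solution; the implementation is a generic illustration, not the authors' code.

```python
import math

# Hedged sketch of Gaver-Stehfest numerical Laplace inversion, one of the
# standard real-domain evaluation methods the abstract refers to. Shown on
# a known transform pair, not the paper's fracture-transport solution.

def stehfest_weights(n):
    # n must be even; returns the Stehfest coefficients V_k for k = 1..n.
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * math.factorial(2 * j)) / (
                math.factorial(n // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        v.append((-1) ** (k + n // 2) * s)
    return v

def invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2_t = math.log(2.0) / t
    v = stehfest_weights(n)
    return ln2_t * sum(v[k - 1] * F(k * ln2_t) for k in range(1, n + 1))

# Known pair: F(s) = 1/(s + a)  <->  f(t) = exp(-a*t)
a = 0.5
approx = invert(lambda s: 1.0 / (s + a), t=2.0)
exact = math.exp(-a * 2.0)
```

For smooth, non-oscillatory functions like this decay, Stehfest with n = 12 typically recovers several significant digits, which is why it is a common cross-check for analytically derived transforms.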
Performance evaluation of floor thermal storage system
Shinkai, Koichiro; Kasuya, Atsushi; Kato, Masahiro
2000-07-01
Environmental issues were seriously addressed when a new building was designed with district heating and cooling for the Osaka gas company. As a result, the building was officially recognized as Environmentally Conscious Building No. 1 by the Construction Ministry. In order to reduce cost by peak shaving, adoption of a floor thermal storage system was planned. This paper describes results regarding the peak shaving by floor thermal storage system in designing the air-conditioning system.
Physical therapy for female pelvic floor disorders.
Bourcier, A P
1994-08-01
Non-surgical, non-pharmacological treatment for female pelvic floor dysfunction is represented by rehabilitation in urogynecology. Since Kegel proposed the concept of functional restoration of the perineal muscles in 1948, no specific term has actually been established. Owing to the number of specialists involved in the management of female pelvic floor disorders (such as gynecologists, urologists, coloproctologists, and neurologists) and the different types of health care providers concerned (such as physicians, physical therapists, nurses, and midwives), it is difficult to make the proper choice between 'physical therapy for pelvic floor', 'pelvic floor rehabilitation', 'pelvic muscle re-education', and 'pelvic floor training'. Because muscle re-education is under the control of physical therapists, we have chosen the term physical therapy for female pelvic floor disorders. Muscle re-education has an important role in the primary treatment of lower urinary tract dysfunction. A multidisciplinary collaboration may be of particular interest, and a thorough evaluation is useful for proper selection of patients. PMID:7742496
ETRA, TRA642. ON BASEMENT FLOOR. IBEAM COLUMNS SUPPORTING CONSOLE FLOOR ...
ETRA, TRA-642. ON BASEMENT FLOOR. I-BEAM COLUMNS SUPPORTING CONSOLE FLOOR HAVE BEEN SURROUNDED BY CONCRETE IN RECTANGULAR PILLARS. BASEMENT FLOOR IS BEING PREPARED FOR PLACEMENT OF CONCRETE. ABOVE CEILING IS CONSOLE FLOOR, IN WHICH CUT-OUT HAS PRESERVED SPACE FOR REACTOR AND ITS SHIELDING. CIRCULAR FORM IN REACTOR AREA IS CONCRETE FORMING. NOTE VERTICAL CONDUIT AT INTERVALS AROUND REACTOR PITS. INL NEGATIVE NO. 56-1237. Jack L. Anderson, Photographer, 4/17/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Messaging climate change uncertainty
NASA Astrophysics Data System (ADS)
Cooke, Roger M.
2015-01-01
Climate change is full of uncertainty and the messengers of climate science are not getting the uncertainty narrative right. To communicate uncertainty one must first understand it, and then avoid repeating the mistakes of the past.
3MRA UNCERTAINTY AND SENSITIVITY ANALYSIS
This presentation discusses the Multimedia, Multipathway, Multireceptor Risk Assessment (3MRA) modeling system. The outline of the presentation is: modeling system overview - 3MRA versions; 3MRA version 1.0; national-scale assessment dimensionality; SuperMUSE: windows-based super...
Uncertainty Quantification in Solidification Modelling
NASA Astrophysics Data System (ADS)
Fezi, K.; Krane, M. J. M.
2015-06-01
Numerical models have been used to simulate solidification processes and to gain insight into physical phenomena that cannot be observed experimentally. Often, validation of such models has been done through comparison to a few or even a single experiment, in which agreement depends on both model and experimental uncertainty. As a first step toward quantifying the uncertainty in the models, sensitivity and uncertainty analyses were performed on a simple steady-state 1D solidification model of continuous casting of weld filler rod. This model, which includes conduction, advection, and release of latent heat, was developed for use in uncertainty quantification of the calculated liquidus and solidus positions and the solidification time. Using this model, a Smolyak sparse grid algorithm constructed a response surface that fits model outputs based on the range of uncertainty in the model inputs. The response surface was then used to determine the probability density functions (PDFs) of the model outputs and the sensitivities of the inputs. This process was done for a linear fraction solid-temperature relationship, for which there is an analytical solution, and for a Scheil relationship. Similar analysis was also performed on a transient 2D model of solidification in a rectangular domain.
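The propagation idea can be sketched with plain Monte Carlo sampling in place of the paper's Smolyak sparse grid: draw uncertain inputs, evaluate a solidification output, and characterize its distribution. The toy relation t_f = (T_liq - T_sol)/R (local solidification time for cooling rate R) and all values are illustrative assumptions.

```python
import random
import statistics

# Minimal sketch of input-uncertainty propagation (plain Monte Carlo in
# place of the paper's sparse-grid response surface): sample uncertain
# liquidus, solidus, and cooling rate, and estimate the distribution of
# a toy solidification time t_f = (T_liq - T_sol) / R. Values assumed.

random.seed(1)

def solidification_time(t_liq, t_sol, rate):
    return (t_liq - t_sol) / rate

samples = [
    solidification_time(
        random.gauss(1610.0, 5.0),  # liquidus temperature [K], uncertain
        random.gauss(1510.0, 5.0),  # solidus temperature [K], uncertain
        random.gauss(10.0, 0.5),    # cooling rate [K/s], uncertain
    )
    for _ in range(50000)
]

mean = statistics.fmean(samples)  # expected solidification time [s]
std = statistics.stdev(samples)   # spread induced by input uncertainty
```

The sample mean and standard deviation approximate the output PDF's first two moments; a sparse-grid response surface reaches the same quantities with far fewer model evaluations, which matters when each evaluation is a full solidification simulation.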
DO MODEL UNCERTAINTY WITH CORRELATED INPUTS
The effect of correlation among the input parameters and variables on the output uncertainty of the Streeter-Phelps water quality model is examined. Three uncertainty analysis techniques are used: sensitivity analysis, first-order error analysis, and Monte Carlo simulation. Modifie...
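The effect the abstract examines can be sketched with a small Monte Carlo experiment: propagate the Streeter-Phelps oxygen-deficit equation with and without correlation between the deoxygenation and reaeration rate coefficients. All parameter values and the correlation coefficient below are illustrative assumptions, not the study's data.

```python
import math
import random
import statistics

# Hedged sketch: Monte Carlo uncertainty analysis of the Streeter-Phelps
# dissolved-oxygen deficit with independent versus correlated rate
# coefficients. Because the deficit rises with kd and falls with ka,
# positive correlation between them partially cancels and shrinks the
# output spread. All numbers are illustrative assumptions.

def deficit(kd, ka, L0, D0, t):
    # Streeter-Phelps dissolved-oxygen deficit at travel time t
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
        + D0 * math.exp(-ka * t)

def simulate(rho, n=50000, seed=2):
    random.seed(seed)
    out = []
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        # Correlate the two rate coefficients with coefficient rho.
        kd = 0.30 + 0.03 * z1
        ka = 0.70 + 0.06 * (rho * z1 + math.sqrt(1 - rho**2) * z2)
        out.append(deficit(kd, ka, L0=10.0, D0=1.0, t=2.0))
    return statistics.stdev(out)

sd_independent = simulate(rho=0.0)
sd_correlated = simulate(rho=0.8)
```

With these assumed values the correlated case has a visibly smaller output standard deviation, illustrating why ignoring input correlation can overstate model output uncertainty.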
76 FR 7098 - Dealer Floor Plan Pilot Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-09
... ADMINISTRATION 13 CFR Parts 120 and 121 Dealer Floor Plan Pilot Program AGENCY: U.S. Small Business... Dealer Floor Plan Pilot Program to make available 7(a) loan guaranties for lines of credit that provide floor plan financing. This new Dealer Floor Plan Pilot Program was created in the Small Business...
2. VIEW OF LOWER MILL FLOOR FOUNDATION, SHOWING, LEFT TO ...
2. VIEW OF LOWER MILL FLOOR FOUNDATION, SHOWING, LEFT TO RIGHT, EDGE OF MILLING FLOOR, TABLE FLOOR, VANNING FLOOR, LOADING LEVEL, TAILINGS POND IN RIGHT BACKGROUND. VIEW IS LOOKING FROM THE NORTHWEST - Mountain King Gold Mine & Mill, 4.3 Air miles Northwest of Copperopolis, Copperopolis, Calaveras County, CA
Xia, Hao; Wang, Xiaogang; Qiao, Yanyou; Jian, Jun; Chang, Yuanfei
2015-01-01
Following the popularity of smart phones and the development of mobile Internet, the demands for accurate indoor positioning have grown rapidly in recent years. Previous indoor positioning methods focused on plane locations on a floor and did not provide accurate floor positioning. In this paper, we propose a method that uses multiple barometers as references for the floor positioning of smart phones with built-in barometric sensors. Some related studies used barometric formula to investigate the altitude of mobile devices and compared the altitude with the height of the floors in a building to obtain the floor number. These studies assume that the accurate height of each floor is known, which is not always the case. They also did not consider the difference in the barometric-pressure pattern at different floors, which may lead to errors in the altitude computation. Our method does not require knowledge of the accurate heights of buildings and stories. It is robust and less sensitive to factors such as temperature and humidity and considers the difference in the barometric-pressure change trends at different floors. We performed a series of experiments to validate the effectiveness of this method. The results are encouraging. PMID:25835189
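The multi-reference idea can be sketched as follows: with a reference barometer on a known floor, the pressure difference to the phone yields a height difference via the hypsometric relation, with no need for absolute building heights. The constants, floor height, and readings below are illustrative assumptions, not the paper's algorithm.

```python
import math

# Hedged sketch of barometric floor positioning with a reference
# barometer: convert the pressure difference between phone and a
# reference on a known floor into a height difference, then into a
# floor offset. Floor height and readings are illustrative assumptions.

def height_difference(p_phone_hpa, p_ref_hpa, temp_k=288.15):
    # Hypsometric relation for small height differences (isothermal air).
    R, g, M = 8.31446, 9.80665, 0.0289644  # gas const, gravity, molar mass
    return (R * temp_k / (M * g)) * math.log(p_ref_hpa / p_phone_hpa)

def floor_number(p_phone_hpa, p_ref_hpa, ref_floor, floor_height_m=3.0):
    dh = height_difference(p_phone_hpa, p_ref_hpa)
    return ref_floor + round(dh / floor_height_m)

# Reference barometer on floor 1 reads 1013.25 hPa; the phone reads about
# 1.08 hPa less, i.e. roughly 9 m higher, which maps to floor 4 here.
floor = floor_number(1012.17, 1013.25, ref_floor=1)
```

Using several reference barometers on different floors, as the paper proposes, also lets the system average out weather-driven pressure drift that a single absolute-altitude computation cannot separate from height.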
Yazgi, H; Uyanik, M H; Ayyildiz, A
2009-01-01
This study investigated the colonization of slime-producing coagulase-negative Staphylococcus (CoNS) in 80 patient wards in Turkey (40 vinyl and 40 ceramic tile floors). A total of 480 samples that included 557 CoNS isolates were obtained. Slime production was investigated with the Christensen method and methicillin-susceptibility was tested by the disk-diffusion method. There was a significant difference in the percentage of slime-producing CoNS isolates on vinyl (12.4%) versus ceramic tile flooring (4.4%). From vinyl flooring, the percentage of slime producing methicillin-resistant CoNS (MRCoNS) (8.9%) was significantly higher than for methicillin-sensitive CoNS (MSCoNS) (3.6%), whereas there was no difference from ceramic tile flooring (2.5% MRCoNS versus 1.8% MSCoNS). The most commonly isolated slime-producing CoNS species was S. epidermidis on both types of flooring. It is concluded that vinyl flooring seems to be a more suitable colonization surface for slime-producing CoNS than ceramic tile floors. Further studies are needed to investigate bacterial strains colonized on flooring materials, which are potential pathogens for nosocomial infections. PMID:19589249
Xia, Hao; Wang, Xiaogang; Qiao, Yanyou; Jian, Jun; Chang, Yuanfei
2015-01-01
Following the popularity of smart phones and the development of mobile Internet, the demands for accurate indoor positioning have grown rapidly in recent years. Previous indoor positioning methods focused on plane locations on a floor and did not provide accurate floor positioning. In this paper, we propose a method that uses multiple barometers as references for the floor positioning of smart phones with built-in barometric sensors. Some related studies used barometric formula to investigate the altitude of mobile devices and compared the altitude with the height of the floors in a building to obtain the floor number. These studies assume that the accurate height of each floor is known, which is not always the case. They also did not consider the difference in the barometric-pressure pattern at different floors, which may lead to errors in the altitude computation. Our method does not require knowledge of the accurate heights of buildings and stories. It is robust and less sensitive to factors such as temperature and humidity and considers the difference in the barometric-pressure change trends at different floors. We performed a series of experiments to validate the effectiveness of this method. The results are encouraging. PMID:25835189
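The floor-number approach this abstract critiques can be made concrete. A minimal Python sketch, assuming the international barometric formula with standard sea-level conditions (1013.25 hPa, 288.15 K) and a uniform 3 m storey height; these fixed-height and standard-atmosphere assumptions are exactly what the paper argues are unreliable:

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25, t0_k=288.15, lapse=0.0065):
    """International barometric formula: altitude (m) from station pressure.
    Assumes standard sea-level pressure/temperature and a constant lapse
    rate -- assumptions that drift with weather, temperature, and humidity."""
    return (t0_k / lapse) * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_from_altitude(altitude_m, ground_altitude_m=0.0, storey_m=3.0):
    """Map altitude to a floor number, assuming every storey is storey_m tall --
    the kind of known-building-height assumption the paper avoids."""
    return int((altitude_m - ground_altitude_m) // storey_m) + 1
```

At standard sea-level pressure the formula returns zero altitude, and each 3 m of computed altitude advances the floor count by one; the paper's multi-barometer method replaces these absolute conversions with relative pressure trends.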
Total pelvic floor ultrasound for pelvic floor defaecatory dysfunction: a pictorial review.
Hainsworth, Alison J; Solanki, Deepa; Schizas, Alexis M P; Williams, Andrew B
2015-01-01
Total pelvic floor ultrasound is used for the dynamic assessment of pelvic floor dysfunction and allows multicompartmental anatomical and functional assessment. Pelvic floor dysfunction includes defaecatory, urinary and sexual dysfunction, pelvic organ prolapse and pain. It is common, increasingly recognized and associated with increasing age and multiparity. Other options for assessment include defaecation proctography and defaecation MRI. Total pelvic floor ultrasound is a cheap, safe imaging tool, which may be performed as a first-line investigation in outpatients. It allows dynamic assessment of the entire pelvic floor, essential for treatment planning for females who often have multiple diagnoses where treatment should address all aspects of dysfunction to yield optimal results. Transvaginal scanning using a rotating single crystal probe provides sagittal views of bladder neck support anteriorly. Posterior transvaginal ultrasound may reveal rectocoele, enterocoele or intussusception whilst bearing down. The vaginal probe is also used to acquire a 360° cross-sectional image to allow anatomical visualization of the pelvic floor and provides information regarding levator plate integrity and pelvic organ alignment. Dynamic transperineal ultrasound using a conventional curved array probe provides a global view of the anterior, middle and posterior compartments and may show cystocoele, enterocoele, sigmoidocoele or rectocoele. This pictorial review provides an atlas of normal and pathological images required for global pelvic floor assessment in females presenting with defaecatory dysfunction. Total pelvic floor ultrasound may be used with complementary endoanal ultrasound to assess the sphincter complex, but this is beyond the scope of this review. PMID:26388109
23. FIFTH FLOOR BLDG. 28B, DETAIL WOOD BLOCK FLOORING LOOKING ...
23. FIFTH FLOOR BLDG. 28B, DETAIL WOOD BLOCK FLOORING LOOKING WEST. - Fafnir Bearing Plant, Bounded on North side by Myrtle Street, on South side by Orange Street, on East side by Booth Street & on West side by Grove Street, New Britain, Hartford County, CT
24. FIFTH FLOOR BLDG. 28B, DETAIL WOOD BLOCK FLOORING LOOKING ...
24. FIFTH FLOOR BLDG. 28B, DETAIL WOOD BLOCK FLOORING LOOKING NORTH. - Fafnir Bearing Plant, Bounded on North side by Myrtle Street, on South side by Orange Street, on East side by Booth Street & on West side by Grove Street, New Britain, Hartford County, CT
ETR ELECTRICAL BUILDING, TRA648. FLOOR PLANS FOR FIRST FLOOR AND ...
ETR ELECTRICAL BUILDING, TRA-648. FLOOR PLANS FOR FIRST FLOOR AND BASEMENT. SECTIONS. KAISER ETR-5528-MTR-648-A-2, 12/1955. INL INDEX NO. 532-0648-00-486-101402, REV. 6. - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
17 CFR 1.62 - Contract market requirement for floor broker and floor trader registration.
Code of Federal Regulations, 2011 CFR
2011-04-01
... for future delivery or commodity option on or subject to the rules of that contract market. (2) Each... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Contract market requirement....62 Contract market requirement for floor broker and floor trader registration. (a)(1) Each...
17 CFR 1.62 - Contract market requirement for floor broker and floor trader registration.
Code of Federal Regulations, 2010 CFR
2010-04-01
... for future delivery or commodity option on or subject to the rules of that contract market. (2) Each... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Contract market requirement....62 Contract market requirement for floor broker and floor trader registration. (a)(1) Each...
17 CFR 1.62 - Contract market requirement for floor broker and floor trader registration.
Code of Federal Regulations, 2012 CFR
2012-04-01
... for future delivery or commodity option on or subject to the rules of that contract market. (2) Each... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Contract market requirement....62 Contract market requirement for floor broker and floor trader registration. (a)(1) Each...
ETR, TRA642. FLOOR PLAN UNDER BALCONY ON CONSOLE FLOOR. MOTORGENERATOR ...
ETR, TRA-642. FLOOR PLAN UNDER BALCONY ON CONSOLE FLOOR. MOTOR-GENERATOR SETS AND OTHER ELECTRICAL EQUIPMENT. PHILLIPS PETROLEUM COMPANY ETR-D-1781, 7/1960. INL INDEX NO. 532-0642-00-706-020384, REV. 1. - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
The cleaning of ward floors and the bacteriological study of floor-cleaning machines
Bate, J. G.
1961-01-01
Current trends in ward flooring materials and cleaning methods are considered from the point of view of the hospital bacteriologist. Methods employed in an investigation into the bacteriological safety of a number of floor-cleaning machines are described, and some considerations governing the choice of vacuum cleaners for ward use are discussed. PMID:13687726
NASA Astrophysics Data System (ADS)
Tappin, A. D.; Burton, J. D.; Millward, G. E.; Statham, P. J.
1997-10-01
A new transport model for metals (named NOSTRADAMUS) has been developed to predict concentrations and distributions of Cd, Cu, Ni, Pb and Zn in the southern North Sea. NOSTRADAMUS comprises components for water, inorganic and organic suspended particulate matter transport; a primary production module contributes to the latter component. Metal exchange between dissolved (water) and total suspended particulate matter (inorganic + organic) phases is driven by distribution coefficients. Transport is based on an existing 2-D vertically integrated model, incorporating a 35 × 35 km grid. NOSTRADAMUS is largely driven by data obtained during the Natural Environment Research Council North Sea Project (NERC NSP). The sensitivity of model predictions to uncertainties in the magnitudes of metal inputs has been tested. Results are reported for a winter period (January 1989) when plankton production was low. Simulated ranges in concentrations in regions influenced by the largest inflows, i.e. the NE English coast and the Southern Bight, are similar to the ranges in the errors of the concentrations estimated at the northern and southern open sea boundaries of the model. Inclusion of uncertainties with respect to atmospheric (up to ± 54%) and riverine (± 30%) inputs makes little difference to the calculated concentrations of both dissolved and particulate fractions within the southern North Sea. When all the errors associated with the inputs are included there is good agreement between computed and observed concentrations: for dissolved and particulate Cd, Cu and Zn, and for dissolved Ni and Pb, many of the observations fall within, or are close to, the range of values generated by the model. For particulate Pb, model simulations predict concentrations of the right order, but do not reproduce the large scatter in actual concentrations, with simulated concentrations showing a bias towards lower values compared to those observed. A factor which could have contributed
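The distribution-coefficient exchange that drives dissolved/particulate partitioning in models of this kind can be sketched as follows. This is a generic equilibrium-partitioning illustration with invented parameter values, not the NOSTRADAMUS formulation: Kd relates metal per kg of suspended particulate matter (SPM) to metal per litre of water.

```python
def partition_metal(total_ng_per_l, kd_l_per_kg, spm_mg_per_l):
    """Split a total metal concentration into dissolved and particulate phases
    using an equilibrium distribution coefficient Kd (L/kg), defined as metal
    per kg of SPM divided by dissolved metal per litre. Illustrative only."""
    spm_kg_per_l = spm_mg_per_l * 1e-6          # mg/L -> kg/L
    k = kd_l_per_kg * spm_kg_per_l              # dimensionless Kd * SPM product
    f_particulate = k / (1.0 + k)               # fraction of total on particles
    particulate = total_ng_per_l * f_particulate
    dissolved = total_ng_per_l - particulate
    return dissolved, particulate
```

With Kd = 1e5 L/kg and 10 mg/L of SPM the dimensionless product is 1, so the metal splits evenly between phases; lowering SPM shifts the balance toward the dissolved fraction.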
Curved Lobate Scarp on Crater Floor
NASA Technical Reports Server (NTRS)
1974-01-01
A broadly curved lobate scarp (running from left to right in the large crater to the right of center in this image) is restricted to the floor of a crater 85 kilometers in diameter. The rim of this crater and the rims of those north of it have been disrupted by the process which caused the hilly and lineated terrain. This process has not affected the smooth plains on their floors, indicating that the floor materials post date the formation of the craters. In this case, the scarp on the crater floor may be a flow front formed during emplacement of the floor material.
This image (FDS 27379) was acquired during the spacecraft's first encounter with Mercury.
The Mariner 10 mission, managed by the Jet Propulsion Laboratory for NASA's Office of Space Science, explored Venus in February 1974 on the way to three encounters with Mercury: in March and September 1974 and in March 1975. The spacecraft took more than 7,000 photos of Mercury, Venus, the Earth and the Moon.
Image Credit: NASA/JPL/Northwestern University
Modeling uncertainty: quicksand for water temperature modeling
Bartholow, John M.
2003-01-01
Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.
Simulating the Formation of Lunar Floor-Fracture Craters Using Elastoviscoplastic Relaxation
NASA Technical Reports Server (NTRS)
Dombard, A. J.; Gillis, J. J.
1999-01-01
summation of the elastic, creep, and plastic strains. In relaxation phenomena in general, the system takes advantage of any means possible to eliminate deviatoric stresses by relaxing away the topography. Previous analyses have only modeled the viscous response. Comparatively, the elastic response in our model can augment the relaxation, to a point. This effect decreases as the elastic response becomes stiffer; indeed, in the limit of infinite elastic Young's modulus (and with no plasticity), the solution converges on the purely viscous solution. Igneous rocks common to the lunar near-surface have Young's moduli in the range of 10-100 GPa. To maximize relaxation, we use a Young's modulus of 10 GPa. (There is negligible sensitivity to the other elastic modulus, the Poisson's ratio; we use 0.25.) For the viscous response, we use a flow law for steady-state creep in thoroughly dried Columbia diabase, because the high plagioclase (about 70 vol%) and orthopyroxene (about 17 vol%) content is similar to the composition of the lunar highland crust as described by remote sensing and sample studies: noritic anorthosite. This flow law is highly non-Newtonian, i.e., the viscosity is highly stress dependent. That, and the variability with temperature, stand in strong contrast to previous examinations of lunar floor-fracture crater relaxation. To model discrete, brittle faulting, we assume "Byerlee's rule," a standard geodynamical technique. We implement this "rule" with an angle of internal friction of about 40 deg, and a higher-than-normal cohesion of about 3.2 MPa (to approximate the breaking of unfractured rock). The actual behavior of geologic materials is more complex than in our rheological model, so the uncertainties in the plasticity do not represent the state-of-the-art error. Additional information is contained in the original.
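The "Byerlee's rule" plasticity described above amounts to a Mohr-Coulomb frictional failure criterion. A one-line sketch using the cohesion (about 3.2 MPa) and internal friction angle (about 40 degrees) quoted in the abstract; the function itself is a generic textbook form, not the authors' implementation:

```python
import math

def coulomb_yield_stress(normal_stress_mpa, cohesion_mpa=3.2,
                         friction_angle_deg=40.0):
    """Mohr-Coulomb shear stress required for frictional failure:
    tau = c + sigma_n * tan(phi). Defaults match the values quoted
    in the abstract."""
    phi = math.radians(friction_angle_deg)
    return cohesion_mpa + normal_stress_mpa * math.tan(phi)
```

At zero normal stress the yield stress equals the cohesion; at 10 MPa of normal stress it rises to roughly 11.6 MPa, reflecting the steep tan(40°) ≈ 0.84 friction slope.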
Davis, C B
1987-08-01
The uncertainties of calculations of loss-of-feedwater transients at Davis-Besse Unit 1 were determined to address concerns of the US Nuclear Regulatory Commission relative to the effectiveness of feed and bleed cooling. Davis-Besse Unit 1 is a pressurized water reactor of the raised-loop Babcock and Wilcox design. A detailed, quality-assured RELAP5/MOD2 model of Davis-Besse was developed at the Idaho National Engineering Laboratory. The model was used to perform an analysis of the loss-of-feedwater transient that occurred at Davis-Besse on June 9, 1985. A loss-of-feedwater transient followed by feed and bleed cooling was also calculated. The evaluation of uncertainty was based on the comparisons of calculations and data, comparisons of different calculations of the same transient, sensitivity calculations, and the propagation of the estimated uncertainty in initial and boundary conditions to the final calculated results.
The relationship between aerosol model uncertainty and radiative forcing uncertainty
NASA Astrophysics Data System (ADS)
Carslaw, Ken; Lee, Lindsay; Reddington, Carly
2016-04-01
There has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated aerosol-cloud forcing between pre-industrial and present day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the pre-industrial aerosol state. But the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are "equally acceptable" compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty, but this hides a range of very different aerosol models. These multiple so-called "equifinal" model variants predict a wide range of forcings. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness.
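The "equifinality" argument above can be illustrated with a deliberately schematic toy model (all numbers invented): two emission-parameter variants that match present-day observations equally well imply different pre-industrial states, and therefore different forcings.

```python
# Toy illustration of equifinality: two (natural, anthropogenic) emission
# pairs give the same present-day aerosol load -- and so fit present-day
# observations equally well -- yet imply different pre-industrial
# (natural-only) states and hence different forcing proxies.

def present_day_load(natural, anthropogenic):
    """Present-day aerosol load: natural plus anthropogenic contributions."""
    return natural + anthropogenic

def forcing_proxy(natural, anthropogenic):
    """Crude forcing proxy: change in load since the pre-industrial,
    natural-only state."""
    return present_day_load(natural, anthropogenic) - natural

variants = [(8.0, 2.0), (4.0, 6.0)]   # both sum to 10: observationally equivalent
loads = [present_day_load(n, a) for n, a in variants]
forcings = [forcing_proxy(n, a) for n, a in variants]
```

Both variants produce an identical present-day load, so observations cannot separate them, yet their implied forcings differ by a factor of three; this is the mechanism behind the weak observational constraint the abstract describes.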
Rahman, A.; Tsai, F.T.-C.; White, C.D.; Willson, C.S.
2008-01-01
This study investigates capture zone uncertainty that relates to the coupled semivariogram uncertainty of hydrogeological and geophysical data. Semivariogram uncertainty is represented by the uncertainty in structural parameters (range, sill, and nugget). We used the beta distribution function to derive the prior distributions of structural parameters. The probability distributions of structural parameters were further updated through the Bayesian approach with the Gaussian likelihood functions. Cokriging of noncollocated pumping test data and electrical resistivity data was conducted to better estimate hydraulic conductivity through autosemivariograms and pseudo-cross-semivariogram. Sensitivities of capture zone variability with respect to the spatial variability of hydraulic conductivity, porosity and aquifer thickness were analyzed using ANOVA. The proposed methodology was applied to the analysis of capture zone uncertainty at the Chicot aquifer in Southwestern Louisiana, where a regional groundwater flow model was developed. MODFLOW-MODPATH was adopted to delineate the capture zone. The ANOVA results showed that both capture zone area and compactness were sensitive to hydraulic conductivity variation. We concluded that the capture zone uncertainty due to the semivariogram uncertainty is much higher than that due to the kriging uncertainty for given semivariograms. In other words, the sole use of conditional variances of kriging may greatly underestimate the flow response uncertainty. Semivariogram uncertainty should also be taken into account in the uncertainty analysis. © 2008 ASCE.
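As an illustration of the structural parameters (range, sill, nugget) whose uncertainty the study propagates, here is a minimal sketch of a spherical semivariogram model. Parameter values are hypothetical, and "sill" here denotes the partial sill, so the total plateau is nugget + sill:

```python
def spherical_semivariogram(h, nugget, sill, rng):
    """Spherical semivariogram gamma(h) parameterized by nugget, partial
    sill, and range. gamma rises from the nugget near the origin and
    plateaus at nugget + sill once lag h reaches the range."""
    if h <= 0.0:
        return 0.0                     # zero lag: no variability by definition
    if h >= rng:
        return nugget + sill           # beyond the range: full plateau
    x = h / rng
    return nugget + sill * (1.5 * x - 0.5 * x ** 3)
```

In the Bayesian scheme the abstract describes, beta-distributed priors over (range, sill, nugget) would translate, through functions like this one, into an ensemble of kriged conductivity fields and hence an ensemble of capture zones.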
Lewandowsky, Stephan; Ballard, Timothy; Pancost, Richard D.
2015-01-01
This issue of Philosophical Transactions examines the relationship between scientific uncertainty about climate change and knowledge. Uncertainty is an inherent feature of the climate system. Considerable effort has therefore been devoted to understanding how to effectively respond to a changing, yet uncertain climate. Politicians and the public often appeal to uncertainty as an argument to delay mitigative action. We argue that the appropriate response to uncertainty is exactly the opposite: uncertainty provides an impetus to be concerned about climate change, because greater uncertainty increases the risks associated with climate change. We therefore suggest that uncertainty can be a source of actionable knowledge. We survey the papers in this issue, which address the relationship between uncertainty and knowledge from physical, economic and social perspectives. We also summarize the pervasive psychological effects of uncertainty, some of which may militate against a meaningful response to climate change, and we provide pointers to how those difficulties may be ameliorated. PMID:26460108
A&M. TAN607 floor plans. Shows three floor levels of pool, ...
A&M. TAN-607 floor plans. Shows three floor levels of pool, hot shop, and warm shop. Includes view of pool vestibule, personnel labyrinth, location of floor rails, and room numbers of office areas, labs, instrument rooms, and stairways. This drawing was re-drawn to show as-built features in 1993. Ralph M. Parsons 902-3-ANP-607-A 96. Date of original: December 1952. Approved by INEEL Classification Office for public release. INEEL index code no. 034-0607-00-693-106748 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Ozone Uncertainties Study Algorithm (OUSA)
NASA Technical Reports Server (NTRS)
Bahethi, O. P.
1982-01-01
An algorithm to carry out sensitivities, uncertainties and overall imprecision studies to a set of input parameters for a one dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point source or distributed ejection of H2O, CLX, and NOx, besides, varying the incident solar flux. This algorithm is operational on IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).
Nontraumatic orbital floor fracture after nose blowing.
Sandhu, Ranjit S; Shah, Akash D
2016-03-01
A 40-year-old woman with no history of trauma or prior surgery presented to the emergency department with headache and left eye pain after nose blowing. Noncontrast maxillofacial computed tomography examination revealed an orbital floor fracture that ultimately required surgical repair. There are nontraumatic causes of orbital blowout fractures, and imaging should be obtained irrespective of trauma history. PMID:26973725
Code of Federal Regulations, 2014 CFR
2014-10-01
... Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS PARTS AND ACCESSORIES NECESSARY FOR SAFE OPERATION... fumes, exhaust gases, or fire. Floors shall not be permeated with oil or other substances likely...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS PARTS AND ACCESSORIES NECESSARY FOR SAFE OPERATION... fumes, exhaust gases, or fire. Floors shall not be permeated with oil or other substances likely...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS PARTS AND ACCESSORIES NECESSARY FOR SAFE OPERATION... fumes, exhaust gases, or fire. Floors shall not be permeated with oil or other substances likely...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS PARTS AND ACCESSORIES NECESSARY FOR SAFE OPERATION... fumes, exhaust gases, or fire. Floors shall not be permeated with oil or other substances likely...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS PARTS AND ACCESSORIES NECESSARY FOR SAFE OPERATION... fumes, exhaust gases, or fire. Floors shall not be permeated with oil or other substances likely...
Building Trades. Block III. Floor Framing.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Vocational Instructional Services.
This document contains three units of a course on floor framing to be used as part of a building trades program. Each unit consists, first, of an informational lesson, with complete lesson plan for the teacher's use. Included in each lesson plan are the lesson aim; lists of teaching aids, materials, references, and prerequisites for students;…
Seeing Results in Flooring for Schools
ERIC Educational Resources Information Center
Simmons, Brian
2011-01-01
Operations staffs at education facilities of all sizes are tasked with selecting a hard floor cleaning program that is cost-effective, efficient and highly productive. With an increased focus on the sustainability of an environment, facility managers also must select a program that meets sustainability goals while maintaining a healthful, safe…
Do I have an alluvial valley floor?
Beach, G.G.
1980-12-01
The Surface Mining Control and Reclamation Act of 1977 establishes specific restrictions for coal mining on or adjacent to alluvial valley floors. Alluvial valley floors are lands in the Western United States where water availability for flood irrigation or subirrigation provides enhanced agricultural productivity on stream-laid deposits located in valley bottoms. Alluvial valley floors may consist of developed land or undeveloped rangeland. Developed land, if of sufficient size to be important to a farming operation, cannot be mined whereas undeveloped rangeland can be mined provided certain performance standards are met. Developed land is important to farming when the percentage loss of production by removal of the alluvial valley floor from a farm(s) total production exceeds the equation P = 3 + 0.0014X, where P is the maximum percentage loss of productivity considered to be a negligible impact to a Wyoming farming operation and X is the number of animal units of total farm production above 100. A threshold level of 10 percent is placed on P, above which such a loss is considered to be a significant loss to any size farming operation.
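The significance test quoted above can be made concrete. A small sketch, assuming (as one reading of the abstract) that X counts animal units of total farm production above 100 and that P is capped at the 10 percent threshold:

```python
def max_negligible_loss_pct(animal_units):
    """P = 3 + 0.0014 * X, capped at the 10% threshold, reading X as the
    number of animal units of total farm production above 100 (one
    interpretation of the equation quoted in the abstract)."""
    x = max(animal_units - 100, 0)
    return min(3.0 + 0.0014 * x, 10.0)

def loss_is_significant(pct_lost, animal_units):
    """A production loss is significant when it exceeds the negligible-impact
    ceiling P for a farm of this size."""
    return pct_lost > max_negligible_loss_pct(animal_units)
```

For a 1,100-animal-unit operation the ceiling is P = 3 + 0.0014 × 1000 = 4.4%, so a 5% production loss would be significant while a 4% loss would not; very large operations hit the 10% cap.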
Sea Floor off San Diego, California
Dartnell, Peter; Gibbons, Helen
2009-01-01
Ocean-floor image generated from multibeam-bathymetry data acquired by the U.S. Geological Survey (USGS); Woods Hole Oceanographic Institution; Scripps Institution of Oceanography; California State University, Monterey Bay; and Fugro Pelagos. To learn more, visit http://pubs.usgs.gov/sim/2007/2959/.
Geodetic measurements at sea floor spreading centers
NASA Technical Reports Server (NTRS)
Spiess, F. N.
1978-01-01
A network of eight or more precision transponder units mounted on the sea floor, interrogated periodically from an instrument package towed near the bottom through the area to give the necessary spatial averaging, could provide a practical system for observing the pattern of buildup of strain at intermediate and fast spreading centers.
Performance Support on the Shop Floor.
ERIC Educational Resources Information Center
Kasvi, Jyrki J. J.; Vartiainen, Matti
2000-01-01
Discussion of performance support on the shop floor highlights four support systems for assembly lines that incorporate personal computer workstations in local area networks and use multimedia documents. Considers new customer-focused production paradigms; organizational learning; knowledge development; and electronic performance support systems…
Lead exposures from varnished floor refinishing.
Schirmer, Joseph; Havlena, Jeff; Jacobs, David E; Dixon, Sherry; Ikens, Robert
2012-01-01
We evaluated the presence of lead in varnish and factors predicting lead exposure from floor refinishing and inexpensive dust suppression control methods. Lead in varnish, settled dust, and air were measured using XRF, laboratory analysis of scrape and wipe samples, and National Institute for Occupational Safety and Health (NIOSH) Method 7300, respectively, during refinishing (n = 35 homes). Data were analyzed using step-wise logistic regression. Compared with federal standards, no lead in varnish samples exceeded 1.0 mg/cm², but 52% exceeded 5000 ppm and 70% of settled dust samples after refinishing exceeded 40 μg/ft². Refinishing pre-1930 dwellings or stairs predicted high lead dust on floors. Laboratory analysis of lead in varnish was significantly correlated with airborne lead (r = 0.23, p = 0.014). Adding dust collection bags into drum sanders and HEPA vacuums to edgers and buffers reduced mean floor lead dust by 8293 μg Pb/ft² (p < 0.05) and reduced most airborne lead exposures to less than 50 μg/m³. Refinishing varnished surfaces in older housing produces high but controllable lead exposures. PMID:22494405
[Nursing care in the initial phases of pelvic floor prolapse].
Hernández-González, Ana Maria
2008-01-01
Uterine prolapse consists of a falling or sliding of the uterus from its normal position in the pelvic cavity into the vagina and is one of the most frequent alterations secondary to pelvic floor dysfunction in gynecology consultations. Although patients are reluctant to talk about this sensitive issue, they complain of feeling a lump in their genitals, urinary incontinence, and problems in their sexual relations. In fact, uterine prolapse is not a disease but an alteration of the elements suspending and containing the uterus, which are almost always injured by pregnancy and childbirth. Other causes in addition to trauma of the endopelvic fascia (mainly cardinal and uterosacral ligaments) are injuries or relaxations of the pelvic floor (the muscles lifting the anus and the fascia that covers the bladder, vagina and rectum). Causes of uterine prolapse without obstetric antecedents are usually those that involve an increase in abdominal pressure and respiratory diseases causing severe coughing. The incidence of uterine prolapse is highest in multiparous women, with prolonged deliveries, a long second stage involving marked straining, in forceps deliveries and in women with perineal tears. Nursing care is essential, both in the prevention and the detection of prolapse, so that women can express their needs without fear and are aware of the need for appropriate treatment in the incipient stages of prolapse. PMID:19080886
NASA Astrophysics Data System (ADS)
Zhou, Xuhong; Cao, Liang; Chen, Y. Frank; Liu, Jiepeng; Li, Jiang
2016-01-01
The developed pre-stressed cable reinforced concrete truss (PCT) floor system is a relatively new floor structure, which can be applied to various long-span structures such as buildings, stadiums, and bridges. Because of its lighter mass and longer span, floor vibration is a serviceability concern for such systems. In this paper, field testing and theoretical analysis for the PCT floor system were conducted. Specifically, heel-drop impact and walking tests were performed on the PCT floor system to capture the dynamic properties including natural frequencies, mode shapes, damping ratios, and acceleration response. The PCT floor system was found to be a low-frequency (<10 Hz) and low-damping (damping ratio < 2%) structural system. Nevertheless, the comparison of the experimental results with the AISC limiting values indicates that the investigated PCT system exhibits satisfactory vibration perceptibility. The analytical solution obtained from the weighted residual method agrees well with the experimental results and thus validates the proposed analytical expression. Sensitivity studies using the analytical solution were also conducted to investigate the vibration performance of the PCT floor system.
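One standard way to extract a damping ratio from heel-drop free-decay records like those described above is the logarithmic decrement. A generic sketch of that textbook method (not the authors' procedure):

```python
import math

def damping_ratio_from_peaks(peak1, peak2, cycles_apart=1):
    """Estimate the viscous damping ratio from two acceleration peaks of a
    free-decay record, n cycles apart, via the logarithmic decrement:
    delta = ln(a1/a2) / n,  zeta = delta / sqrt(4*pi^2 + delta^2)."""
    delta = math.log(peak1 / peak2) / cycles_apart
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
```

For a lightly damped floor (zeta around 2%, as measured here), successive peaks decay by only about 12% per cycle, which is why such systems need checking against comfort criteria despite the satisfactory result reported.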
FLOOR PLAN OF MAIN PROCESSING BUILDING (CPP601), FIRST FLOOR SHOWING ...
FLOOR PLAN OF MAIN PROCESSING BUILDING (CPP-601), FIRST FLOOR SHOWING SAMPLE CORRIDORS AND EIGHTEEN CELLS AND ADJOINING REMOTE ANALYTICAL FACILITY (CPP-627) SHOWING REMOTE ANALYTICAL FACILITIES LAB, DECONTAMINATION ROOM, AND MULTICURIE CELL ROOM. TO LEFT ARE LABORATORY BUILDING (CPP-602) AND MAINTENANCE BUILDING (CPP-630). INL DRAWING NUMBER 200-0601-00-706-051979. ALTERNATE ID NUMBER CPP-E-1979. - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID
FLOOR PLAN OF MAIN PROCESSING BUILDING (CPP601), SECOND FLOOR SHOWING ...
FLOOR PLAN OF MAIN PROCESSING BUILDING (CPP-601), SECOND FLOOR SHOWING PROCESS MAKEUP AREA AND EIGHTEEN CELLS AND ADJOINING REMOTE ANALYTICAL FACILITY (CPP-627) SHOWING COLD LAB, DECONTAMINATION ROOM, MULTICURIE CELL ROOM, AND OFFICES. TO LEFT ARE LABORATORY BUILDING (CPP-602) AND MAINTENANCE BUILDING (CPP-630). INL DRAWING NUMBER 200-0601-00-706-051980. ALTERNATE ID NUMBER CPP-E-1980. - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID
8. DETAIL: GENERATOR FLOOR DIABLO POWERHOUSE SHOWING BUTTERFLY VALVE CONTROL, ...
8. DETAIL: GENERATOR FLOOR DIABLO POWERHOUSE SHOWING BUTTERFLY VALVE CONTROL, MOSAIC TILE FLOOR, AS SEEN FROM VISITORS GALLERY, 1989. - Skagit Power Development, Diablo Powerhouse, On Skagit River, 6.1 miles upstream from Newhalem, Newhalem, Whatcom County, WA
CAR MACHINE SHOP, SECOND FLOOR, PAINT SPRAY ROOM EXTERIOR AND ...
CAR MACHINE SHOP, SECOND FLOOR, PAINT SPRAY ROOM EXTERIOR AND ATTIC FLOOR SUPPORT COLUMNS AND BEAMS, LOOKING WEST. - Southern Pacific, Sacramento Shops, Car Machine Shop, 111 I Street, Sacramento, Sacramento County, CA
50. Ground floor, looking northwest at former location of ground ...
50. Ground floor, looking northwest at former location of ground floor (bottom) level of milk room - Sheffield Farms Milk Plant, 1075 Webster Avenue (southwest corner of 166th Street), Bronx, Bronx County, NY
3. MILK BARN, INTERIOR VIEW OF GROUND FLOOR, LOOKING 132 ...
3. MILK BARN, INTERIOR VIEW OF GROUND FLOOR, LOOKING 132 DEGREES SOUTHEAST, SHOWING RAISED FLOOR OF CENTRAL AISLE. - Hudson-Cippa-Wolf Ranch, Milk Barn, Sorento Road, Sacramento, Sacramento County, CA
20. View of second floor to the Cherry Hill lettuce ...
20. View of second floor to the Cherry Hill lettuce shed looking at floor area - Richmond Hill Plantation, Cherry Hill Lettuce Shed, East of Richmond Hill on Ford Neck Road, Richmond Hill, Bryan County, GA
27 CFR 46.233 - Payment of floor stocks tax.
Code of Federal Regulations, 2010 CFR
2010-04-01
... PRODUCTS AND CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 Filing Requirements § 46.233 Payment of floor stocks tax....
27 CFR 46.233 - Payment of floor stocks tax.
Code of Federal Regulations, 2011 CFR
2011-04-01
... PRODUCTS AND CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 Filing Requirements § 46.233 Payment of floor stocks tax....
27 CFR 46.231 - Floor stocks tax return.
Code of Federal Regulations, 2014 CFR
2014-04-01
... CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette....28T09, 2009 Floor Stocks Tax Return—Tobacco Products and Cigarette Papers and Tubes, is available...
27 CFR 46.231 - Floor stocks tax return.
Code of Federal Regulations, 2010 CFR
2010-04-01
... CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette....28T09, 2009 Floor Stocks Tax Return—Tobacco Products and Cigarette Papers and Tubes, is available...
27 CFR 46.233 - Payment of floor stocks tax.
Code of Federal Regulations, 2014 CFR
2014-04-01
... PRODUCTS AND CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 Filing Requirements § 46.233 Payment of floor stocks tax....
27 CFR 46.231 - Floor stocks tax return.
Code of Federal Regulations, 2013 CFR
2013-04-01
... CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette....28T09, 2009 Floor Stocks Tax Return—Tobacco Products and Cigarette Papers and Tubes, is available...
27 CFR 46.233 - Payment of floor stocks tax.
Code of Federal Regulations, 2013 CFR
2013-04-01
... PRODUCTS AND CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 Filing Requirements § 46.233 Payment of floor stocks tax....
27 CFR 46.231 - Floor stocks tax return.
Code of Federal Regulations, 2011 CFR
2011-04-01
... CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette....28T09, 2009 Floor Stocks Tax Return—Tobacco Products and Cigarette Papers and Tubes, is available...
27 CFR 46.231 - Floor stocks tax return.
Code of Federal Regulations, 2012 CFR
2012-04-01
... CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette....28T09, 2009 Floor Stocks Tax Return—Tobacco Products and Cigarette Papers and Tubes, is available...
Refrigeration Plant, North Elevation, Second Floor Plan, East Elevation, Ground ...
Refrigeration Plant, North Elevation, Second Floor Plan, East Elevation, Ground Floor Plan, Section A-A - Kennecott Copper Corporation, On Copper River & Northwestern Railroad, Kennicott, Valdez-Cordova Census Area, AK
27. INTERIOR, FIRST FLOOR, SOUTH ENTRANCE, SOUTH LOBBY, DETAIL OF ...
27. INTERIOR, FIRST FLOOR, SOUTH ENTRANCE, SOUTH LOBBY, DETAIL OF BRONZE SEAL IN FLOOR (4" x 5" negative; 8" x 10" print) - U.S. Department of the Interior, Eighteenth & C Streets Northwest, Washington, District of Columbia, DC
INTERIOR VIEW OF THE SECOND FLOOR STAIR HALL. NOTE THE ...
INTERIOR VIEW OF THE SECOND FLOOR STAIR HALL. NOTE THE TONGUE-AND-GROOVE WOOD FLOORING AND THE WINDOW ABOVE THE STAIR LANDING. VIEW FACING SOUTH. - Hickam Field, Officers' Housing Type D, 111 Beard Avenue, Honolulu, Honolulu County, HI
79. DETAIL, MOSAIC FLOOR IN HALL 355 AT ENTRANCE TO ...
79. DETAIL, MOSAIC FLOOR IN HALL 355 AT ENTRANCE TO REGENTS' ROOM, THIRD FLOOR - Smithsonian Institution Building, 1000 Jefferson Drive, between Ninth & Twelfth Streets, Southwest, Washington, District of Columbia, DC
3. FIRST FLOOR, FRONT SOUTHWEST CORNER ROOM WITH STAIRWAY TO ...
3. FIRST FLOOR, FRONT SOUTHWEST CORNER ROOM WITH STAIRWAY TO SECOND FLOOR - Penn School Historic District, Benezet House, 1 mile South of Frogmore, Route 37, St Helena Island, Frogmore, Beaufort County, SC
5. EAST SECTION OF BUILDING, FIRST FLOOR, WEST ROOM. NOTE ...
5. EAST SECTION OF BUILDING, FIRST FLOOR, WEST ROOM. NOTE OVEN AT LEFT. All construction original except wood flooring, plumbing and electricity. - Ralph Izard House, Kitchen Building, 110 Broad Street, Charleston, Charleston County, SC
View of double floor boards with mortises cross beams, showing ...
View of double floor boards with mortises cross beams, showing spikes and flooring nails (Lower board layer exposed) - Silas C. Read Sawmill, Outlet of Maxwell Lake near North Range Road, Fort Gordon, Richmond County, GA
32. Coffee bean sluiceway on ground floor showing chute bringing ...
32. Coffee bean sluiceway on ground floor showing chute bringing beans from first floor hopper. HAER PR, 6-MAGU, 1B-17 - Hacienda Buena Vista, PR Route 10 (Ponce to Arecibo), Magueyes, Ponce Municipio, PR
18. 1925 Main Factory building, interior, second floor, view looking ...
18. 1925 Main Factory building, interior, second floor, view looking northeast at opening in the floor for dropping warp rolls - North Star Woolen Mill, 109 Portland Avenue South, Minneapolis, Hennepin County, MN
12. TRIPLE WINDOW, FIRST FLOOR, SOUTH SIDE. Typical for all ...
12. TRIPLE WINDOW, FIRST FLOOR, SOUTH SIDE. Typical for all triple windows on first and second floors. Note single swing jib door - John Joyner Smith House, 400 Wilmington Street, Beaufort, Beaufort County, SC
15. SECOND FLOOR, SOUTHWEST ROOM (HALL CHAMBER), SOUTH WALL WITH ...
15. SECOND FLOOR, SOUTHWEST ROOM (HALL CHAMBER), SOUTH WALL WITH STAIRCASE TO ATTIC AND STAIRWELL FROM FIRST FLOOR - John Richardson House, 15 Race Street, Richardson Park, Wilmington, New Castle County, DE
33. Third floor, looking north, elevator and central stair to ...
33. Third floor, looking north, elevator and central stair to the right (original ice manufacturing floor) - Sheffield Farms Milk Plant, 1075 Webster Avenue (southwest corner of 166th Street), Bronx, Bronx County, NY
11. BUILDING 1: FIRST FLOOR (Center Section), WEST AND NORTH ...
11. BUILDING 1: FIRST FLOOR (Center Section), WEST AND NORTH WALLS, SHOWING TWO TIERS OF COLUMNS WITH SECOND FLOOR REMOVED - Boston Beer Company, 225-249 West Second Street, South Boston, Suffolk County, MA
5. Light tower, stairs to second floor, looking northeast from ...
5. Light tower, stairs to second floor, looking northeast from first floor - Little River Light Station, East end of Little River Island, at mouth of Little River & entrance to Cutler Harbor, Cutler, Washington County, ME
Detail of first floor of loading dock showing composition tile ...
Detail of first floor of loading dock showing composition tile over wood floor/basement ceiling - Southern Pacific Railroad Depot, Railroad Terminal Post Office & Express Building, Fifth & I Streets, Sacramento, Sacramento County, CA
8. DETAIL OF EAST FRONT, SHOWING FIRST FLOOR STOREFRONT AND ...
8. DETAIL OF EAST FRONT, SHOWING FIRST FLOOR STOREFRONT AND SECOND FLOOR WINDOWS. VIEW TO WEST. - Commercial & Industrial Buildings, Dubuque Seed Company Warehouse, 169-171 Iowa Street, Dubuque, Dubuque County, IA
Universal Uncertainty Relations
NASA Astrophysics Data System (ADS)
Gour, Gilad
2014-03-01
Uncertainty relations are a distinctive characteristic of quantum theory that imposes intrinsic limitations on the precision with which physical properties can be simultaneously determined. The modern work on uncertainty relations employs entropic measures to quantify the lack of knowledge associated with measuring non-commuting observables. However, I will show here that there is no fundamental reason for using entropies as quantifiers; in fact, any functional relation that characterizes the uncertainty of the measurement outcomes can be used to define an uncertainty relation. Starting from a simple assumption that any measure of uncertainty is non-decreasing under mere relabeling of the measurement outcomes, I will show that Schur-concave functions are the most general uncertainty quantifiers. I will then introduce a novel fine-grained uncertainty relation written in terms of a majorization relation, which generates an infinite family of distinct scalar uncertainty relations via the application of arbitrary measures of uncertainty. This infinite family of uncertainty relations includes all the known entropic uncertainty relations, but is not limited to them. In this sense, the relation is universally valid and captures the essence of the uncertainty principle in quantum theory. This talk is based on a joint work with Shmuel Friedland and Vlad Gheorghiu. This research is supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada and by the Pacific Institute for Mathematical Sciences (PIMS).
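The majorization claim in the abstract can be illustrated concretely: if one outcome distribution majorizes another, every Schur-concave uncertainty measure (Shannon entropy among them) orders the two distributions the opposite way. The sketch below is illustrative only and not from the cited work; the function names and example vectors are assumptions.

```python
import math

def majorizes(q, p, tol=1e-12):
    """Return True if q majorizes p (written p < q in majorization order):
    for every k, the sum of the k largest entries of q is at least the
    sum of the k largest entries of p, with equal totals."""
    qs = sorted(q, reverse=True)
    ps = sorted(p, reverse=True)
    cum_q = cum_p = 0.0
    for a, b in zip(qs, ps):
        cum_q += a
        cum_p += b
        if cum_q < cum_p - tol:
            return False
    return abs(cum_q - cum_p) <= tol  # totals must match

def shannon_entropy(p):
    """Shannon entropy in bits; Schur-concave, so it reverses majorization."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# q is "more ordered" than p, so q majorizes p ...
p = [0.4, 0.3, 0.3]
q = [0.7, 0.2, 0.1]
print(majorizes(q, p))                            # True
# ... and any Schur-concave uncertainty measure assigns p more uncertainty.
print(shannon_entropy(p) > shannon_entropy(q))    # True
```

This is the sense in which a single majorization relation generates an infinite family of scalar uncertainty relations: applying any Schur-concave quantifier to both sides of the majorization yields a valid scalar inequality.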
Fission Spectrum Related Uncertainties
G. Aliberti; I. Kodeli; G. Palmiotti; M. Salvatores
2007-10-01
The paper presents a preliminary uncertainty analysis related to potential uncertainties in the fission spectrum data. Consistent results are shown for a reference fast reactor design configuration and for experimental thermal configurations. However, the results obtained indicate the need for further analysis, particularly in terms of fission spectrum uncertainty data assessment.
An anomalous case of an indirect orbital floor fracture.
Nicolotti, Matteo; Poglio, Giuseppe; Grivetto, Fabrizio; Benech, Arnaldo
2014-09-01
Fractures of the orbital floor are common in facial trauma. Those that involve only the orbital floor are called indirect fractures or pure internal orbital floor fractures. We present the case of an indirect fracture of the orbital floor after direct trauma to the back of the head caused by a bicycle accident. To the best of our knowledge, this is the first time that this mechanism for such a fracture has been reported. PMID:24742591
75 FR 70061 - Dealer Floor Plan Pilot Program Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-16
... ADMINISTRATION Dealer Floor Plan Pilot Program Meeting AGENCY: U.S. Small Business Administration (SBA). ACTION... agenda for a meeting regarding the Dealer Floor Plan Pilot Program established in the Small Business Jobs Act of 2010. The meeting will be open to the public. DATES: The Dealer Floor Plan Pilot...
27 CFR 46.195 - Floor stocks requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 2 2010-04-01 2010-04-01 false Floor stocks requirements... CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 General § 46.195 Floor stocks requirements. (a) Take inventory....
27 CFR 46.195 - Floor stocks requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 27 Alcohol, Tobacco Products and Firearms 2 2011-04-01 2011-04-01 false Floor stocks requirements... CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 General § 46.195 Floor stocks requirements. (a) Take inventory....
36 CFR 1192.79 - Floors, steps and thresholds.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Light Rail Vehicles and Systems § 1192.79 Floors, steps and thresholds. (a) Floor surfaces on aisles... accommodated shall be slip-resistant. (b) All thresholds and step edges shall have a band of color(s) running... floor, either light-on-dark or dark-on-light....
36 CFR 1192.79 - Floors, steps and thresholds.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Light Rail Vehicles and Systems § 1192.79 Floors, steps and thresholds. (a) Floor surfaces on aisles... accommodated shall be slip-resistant. (b) All thresholds and step edges shall have a band of color(s) running... floor, either light-on-dark or dark-on-light....
36 CFR 1192.79 - Floors, steps and thresholds.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Light Rail Vehicles and Systems § 1192.79 Floors, steps and thresholds. (a) Floor surfaces on aisles... accommodated shall be slip-resistant. (b) All thresholds and step edges shall have a band of color(s) running... floor, either light-on-dark or dark-on-light....
What Do You Really Know About Floor Finishes & Strippers?
ERIC Educational Resources Information Center
Wirth, T. J.
1972-01-01
An independent testing laboratory reveals the results of comparative studies done on vinyl flooring, addresses the question of "to wax or not to wax" and which waxes work best with which flooring, and provides six evaluation tips on floor strippers. (EA)
13. INTERIOR VIEW, FIRST FLOOR SHOWING THE ELEVATORS FEEDING GRAIN ...
13. INTERIOR VIEW, FIRST FLOOR SHOWING THE ELEVATORS FEEDING GRAIN FROM THE SECOND FLOOR TO THE GRINDING STONES, WITH GRAIN ELEVATORS IN BACKGROUND (NOTE OUTLINE ON THE FLOOR WHERE ROLLER MILLS WERE ORIGINALLY PLACED) - Schech's Mill, Beaver Creek State Park, La Crescent, Houston County, MN
17. Same floor as hot water vats looking towards the ...
17. Same floor as hot water vats looking towards the front of the building. These have to do with grain from upper floor judging from ceiling to floor progression. Note nice iron work. - Tivoli-Union Brewery, 1320-1348 Tenth Street, Denver, Denver County, CO
Jawitz, James W.; Munoz-Carpena, Rafael; Muller, Stuart; Grace, Kevin A.; James, Andrew I.
2008-01-01
in the phosphorus cycling mechanisms were simulated in these case studies using different combinations of phosphorus reaction equations. Changes in water column phosphorus concentrations observed under the controlled conditions of laboratory incubations and mesocosm studies were reproduced with model simulations. Short-term phosphorus flux rates and changes in phosphorus storages were within the range of values reported in the literature, whereas unknown rate constants were used to calibrate the model output. In STA-1W Cell 4, the dominant mechanism for phosphorus flow and transport is overland flow. Over many life cycles of the biological components, however, soils accrue and become enriched in phosphorus. Inflow total phosphorus concentrations and flow rates for the period between 1995 and 2000 were used to simulate Cell 4 phosphorus removal, outflow concentrations, and soil phosphorus enrichment over time. This full-scale application of the model successfully incorporated parameter values derived from the literature and short-term experiments, and reproduced the observed long-term outflow phosphorus concentrations and increased soil phosphorus storage within the system. A global sensitivity and uncertainty analysis of the model was performed using modern techniques: a qualitative screening tool (the Morris method) and the quantitative, variance-based Fourier Amplitude Sensitivity Test (FAST) method. These techniques allowed an in-depth exploration of the effect of model complexity and flow velocity on model outputs. Three increasingly complex levels of possible application to southern Florida were studied, corresponding to a simple soil pore-water and surface-water system (level 1), the addition of plankton (level 2), and of macrophytes (level 3). In the analysis for each complexity level, three surface-water velocities were considered that each correspond to residence times for the selected area (1-kilometer long) of 2, 10, and 20